Search results
bitnet.cpp is the official inference framework for 1-bit LLMs (e.g., BitNet b1.58). It offers a suite of optimized kernels that support fast and lossless inference of 1.58-bit models on CPU (with NPU and GPU support coming next).
BITNET. This repository not only provides PyTorch implementations for training and evaluating 1.58-bit neural networks but also includes a distinctive integration in which the conducted experiments automatically update a LaTeX-generated paper.
28 Feb 2024 · Recent research, such as BitNet, is paving the way for a new era of 1-bit Large Language Models (LLMs). In this work, we introduce a 1-bit LLM variant, namely BitNet b1.58, in which every single parameter (or weight) of the LLM is ternary {-1, 0, 1}.
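For concreteness, the paper's absmean scheme maps each full-precision weight to {-1, 0, +1} by scaling with the mean absolute weight, then rounding and clipping. A minimal PyTorch sketch (the function name quantize_ternary is ours, not from the paper's code):

    import torch

    def quantize_ternary(w: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
        # Absmean scaling followed by round-and-clip, so every entry
        # of the result lies in {-1, 0, +1}.
        gamma = w.abs().mean()
        return (w / (gamma + eps)).round().clamp(-1, 1)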
26 Mar 2024 · BitNet b1.58 addresses this by halving activation bits, enabling a doubled context length with the same resources, with potential further compression to 4 bits or lower for 1.58-bit LLMs …
27 Feb 2024 · BitNet b1.58 represents a significant step towards native support for long sequences, as it reduces the activations from 16 bits to 8 bits, allowing the context length to be doubled given the same resources. This can be further losslessly compressed to 4 bits or even lower for 1.58-bit LLMs, which we leave as future work.
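The activation side can be sketched similarly: BitNet b1.58 quantizes activations per token with absmax scaling into the signed 8-bit range. A hedged PyTorch illustration (the function name is ours); it returns "fake-quantized" values, i.e. the 8-bit grid expressed in floating point:

    import torch

    def quantize_activations_int8(x: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
        # Per-token absmax scaling into [-127, 127], then round;
        # dividing the scale back out keeps the tensor in fp for clarity.
        scale = 127.0 / x.abs().max(dim=-1, keepdim=True).values.clamp(min=eps)
        return (x * scale).round().clamp(-127, 127) / scale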
11 Mar 2024 · In theory the resulting values can be represented with 1.58 bits, by information theory; since bit widths can't be fractional, in practice they are stored in 2 bits. Quantization function implementation in PyTorch, starting with the threshold calculation (the snippet cuts off mid-function; the return statement below is an assumed completion):

    def compute_adjustment_factor(self, input_tensor: torch.Tensor) -> torch.Tensor:
        # Absmean threshold: the mean absolute value of the weights.
        absmean_weight = torch.mean(torch.abs(input_tensor))
        return absmean_weight
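A quick check of the bit count (standard-library Python; the byte-packing remark is our own observation):

    import math

    # Three states carry log2(3) ≈ 1.585 bits of information, hence the
    # "1.58-bit" name; whole-bit storage needs 2 bits per weight, though
    # five ternary weights fit in one byte, since 3**5 = 243 <= 256.
    print(math.log2(3))  # 1.584962500721156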
Recent work investigates the performance and behavior of 1-bit and 1.58-bit quantization-aware-training methods in the context of large language models, finding state-of-the-art performance for models with more than 3B parameters.
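For the flavor of such quantization-aware training, here is a minimal straight-through-estimator sketch (our illustration, not code from the cited work): the forward pass uses ternary weights, while gradients bypass the non-differentiable rounding and update the latent full-precision weights.

    import torch

    def ste_ternary(w: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
        # Forward: absmean ternary quantization, rescaled back by gamma.
        gamma = w.abs().mean()
        w_q = (w / (gamma + eps)).round().clamp(-1, 1) * gamma
        # Backward: identity, because the detached difference carries no gradient.
        return w + (w_q - w).detach()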