arxiv:2402.11295

OneBit: Towards Extremely Low-bit Large Language Models

Published on Feb 17 · Submitted by akhaliq on Feb 20

Abstract

Model quantization uses low bit-width values to represent the weight matrices of models and is a promising approach to reducing both the storage and computational overhead of deploying highly anticipated LLMs. However, existing quantization methods suffer severe performance degradation when the bit-width is extremely reduced, and thus tend to use 4-bit or 8-bit values to quantize models. This paper boldly quantizes the weight matrices of LLMs to 1 bit, paving the way for extremely low bit-width deployment of LLMs. To this end, we introduce a 1-bit quantization-aware training (QAT) framework named OneBit, including a novel 1-bit parameter representation method to better quantize LLMs, as well as an effective parameter initialization method based on matrix decomposition to improve the convergence speed of the QAT framework. Extensive experimental results indicate that OneBit achieves good performance (at least 83% of the non-quantized performance) with robust training processes when using only 1-bit weight matrices.
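
For readers who want a concrete picture of the 1-bit representation, here is a minimal, unofficial PyTorch sketch of a OneBit-style linear layer. It assumes the paper's idea of decomposing each weight matrix into a ±1 sign matrix plus two full-precision value vectors (one per input dimension, one per output dimension), and it initializes them from a rank-1 SVD of |W|, in the spirit of the matrix-decomposition initialization the abstract mentions. The class and method names (`OneBitLinear`, `init_from_dense`) are ours for illustration, not the authors' released API.

```python
import torch
import torch.nn as nn


class OneBitLinear(nn.Module):
    """Linear layer whose weight is a ±1 sign matrix plus two value vectors."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # 1-bit component, kept as ±1 floats here for readability; a real
        # deployment would pack these entries into actual bits.
        self.sign = nn.Parameter(torch.ones(out_features, in_features),
                                 requires_grad=False)
        # Full-precision value vectors that restore the dynamic range lost
        # to binarization (the paper keeps such vectors in higher precision).
        self.g = nn.Parameter(torch.ones(in_features))   # scales the input
        self.h = nn.Parameter(torch.ones(out_features))  # scales the output

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Y = [(x * g) S^T] * h, equivalent to x @ (S * outer(h, g))^T
        return (x * self.g) @ self.sign.t() * self.h

    @torch.no_grad()
    def init_from_dense(self, w: torch.Tensor) -> None:
        """Initialize from a dense weight: sign(w) gives the 1-bit matrix;
        g and h come from the top singular pair of |w|, so that
        sign(w) * outer(h, g) approximates w."""
        self.sign.copy_(torch.where(w >= 0,
                                    torch.ones_like(w),
                                    -torch.ones_like(w)))
        u, s, vh = torch.linalg.svd(w.abs(), full_matrices=False)
        self.h.copy_(u[:, 0] * s[0].sqrt())
        self.g.copy_(vh[0] * s[0].sqrt())


# Rough sanity check: the 1-bit layer coarsely tracks the dense layer.
w = torch.randn(64, 128)
layer = OneBitLinear(in_features=128, out_features=64)
layer.init_from_dense(w)
x = torch.randn(4, 128)
print((layer(x) - x @ w.t()).abs().mean())
```

The design intuition is that the per-input and per-output scales carry much of the magnitude information that plain sign-binarization destroys, leaving the QAT stage to recover the remaining gap.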

Community

Looks like their results are better than BiLLM's. They only go up to 13B in the paper; perhaps 70B would be even more favorable.

Great work! Are there any plans to release the code and/or model checkpoints?

Paper author

We will release the training code and LLaMA checkpoints as soon as possible, so stay tuned! : )

Paper author

Code and checkpoints are available at https://github.com/xuyuzhuang11/OneBit
Welcome to join us in exploring the 1-bit LLM world! 🤗🤗🤗

Revolutionizing Large Language Models: OneBit's 1-Bit Quantization Breakthrough

Links 🔗:

👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/

By Arxflix

Paper author

🤗🤗🤗 Thanks for your attention!
