arxiv:2401.04658

Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models

Published on Jan 9
· Submitted by akhaliq on Jan 10
#3 Paper of the day

Abstract

Linear attention is an efficient attention mechanism that has recently emerged as a promising alternative to conventional softmax attention. With its ability to process tokens with linear computational complexity, linear attention can, in theory, handle sequences of unlimited length without sacrificing speed, i.e., maintaining a constant training speed for various sequence lengths with fixed memory consumption. However, due to issues with cumulative summation (cumsum), current linear attention algorithms cannot demonstrate this theoretical advantage in the causal setting. In this paper, we present Lightning Attention-2, the first implementation that enables linear attention to realize its theoretical computational benefits. To achieve this, we leverage the idea of tiling, handling the intra-block and inter-block components of the linear attention calculation separately. Specifically, we use the conventional attention computation mechanism for the intra-blocks and apply the linear attention kernel trick for the inter-blocks. The tiling technique is adopted throughout both the forward and backward passes to take full advantage of GPU hardware. We implement our algorithm in Triton to make it IO-aware and hardware-friendly. Experiments are conducted on different model sizes and sequence lengths. Lightning Attention-2 retains consistent training and inference speed regardless of input sequence length and is significantly faster than other attention mechanisms. The source code is available at https://github.com/OpenNLPLab/lightning-attention.
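
For readers wondering what the intra/inter-block split in the abstract looks like in practice, here is a minimal PyTorch sketch of the idea (a simplified re-implementation under my own assumptions, not the paper's Triton kernel: it omits the decay factors, the IO-aware memory movement, and the backward pass; the function name and `block_size` are illustrative):

```python
import torch

def lightning_attention2_sketch(q, k, v, block_size=64):
    """Toy forward pass of the tiled linear-attention idea (no softmax).

    q, k, v: tensors of shape (batch, heads, seq_len, dim).
    """
    b, h, n, d = q.shape
    out = torch.zeros_like(v)
    # Running KV state accumulated over all previous blocks.
    kv_state = torch.zeros(b, h, d, v.shape[-1], device=q.device, dtype=q.dtype)

    for start in range(0, n, block_size):
        end = min(start + block_size, n)
        qi = q[..., start:end, :]
        ki = k[..., start:end, :]
        vi = v[..., start:end, :]

        # Intra-block: conventional (left-product) attention with a causal mask.
        m = end - start
        mask = torch.tril(torch.ones(m, m, device=q.device, dtype=torch.bool))
        intra = ((qi @ ki.transpose(-1, -2)) * mask) @ vi

        # Inter-block: linear-attention kernel trick (right-product) against
        # the KV state summarizing every earlier block.
        inter = qi @ kv_state

        out[..., start:end, :] = intra + inter

        # Fold this block into the cumulative KV state for later blocks.
        kv_state = kv_state + ki.transpose(-1, -2) @ vi

    return out
```

Because the intra-block term only attends within a fixed-size block and the inter-block term reuses a fixed-size KV state, the total cost grows linearly with sequence length rather than quadratically.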

Community

Summary:
1.5x faster at 8k context
3x faster at 32k context
similar quality to normal attention
requires pre-training from scratch

I don't think that summary does this justice

My takeaway from this was that the inference time per token is identical regardless of the context length!

A 32k context has the same per-token inference speed as a 1,024-token context.

That's a huge breakthrough
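
For intuition on why the per-token decoding cost can stay flat, here is a minimal sketch of the recurrent form of linear attention at inference time (again a simplified assumption of my own, without the paper's decay factors or Triton kernel; names are illustrative):

```python
import torch

def decode_step(q_t, k_t, v_t, kv_state):
    """One generation step with the linear-attention recurrence.

    q_t, k_t, v_t: (batch, heads, dim) for the newest token.
    kv_state:      (batch, heads, dim, dim_v), a fixed-size summary of the prefix.
    """
    # Update the running state with the new token's key/value outer product.
    kv_state = kv_state + k_t.unsqueeze(-1) * v_t.unsqueeze(-2)
    # The output depends only on q_t and the fixed-size state,
    # not on how many tokens came before.
    o_t = (q_t.unsqueeze(-2) @ kv_state).squeeze(-2)
    return o_t, kv_state
```

Every step touches only the fixed-size kv_state, so generating a token at position 32k costs the same as at position 1k.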

Unlocking Unlimited Sequence Lengths: Introducing Lightning Attention-2!

Links 🔗:

👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/

By Arxflix

Models citing this paper 1

Datasets citing this paper 0

Spaces citing this paper 0

Collections including this paper 12