Collections including paper arxiv:2302.13971

Each group below is a separate community collection that includes this paper.

- Self-Play Preference Optimization for Language Model Alignment
  Paper • 2405.00675 • Published • 23
- FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
  Paper • 2205.14135 • Published • 10
- Attention Is All You Need
  Paper • 1706.03762 • Published • 42
- FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning
  Paper • 2307.08691 • Published • 8

- RoFormer: Enhanced Transformer with Rotary Position Embedding
  Paper • 2104.09864 • Published • 9
- Attention Is All You Need
  Paper • 1706.03762 • Published • 42
- LoRA: Low-Rank Adaptation of Large Language Models
  Paper • 2106.09685 • Published • 29
- FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
  Paper • 2205.14135 • Published • 10

- Attention Is All You Need
  Paper • 1706.03762 • Published • 42
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 14
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 14
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 11

- Adapting Large Language Models via Reading Comprehension
  Paper • 2309.09530 • Published • 75
- LLaMA: Open and Efficient Foundation Language Models
  Paper • 2302.13971 • Published • 13
- Finetuned Language Models Are Zero-Shot Learners
  Paper • 2109.01652 • Published • 2
- LIMA: Less Is More for Alignment
  Paper • 2305.11206 • Published • 21

- SMOTE: Synthetic Minority Over-sampling Technique
  Paper • 1106.1813 • Published • 1
- Scikit-learn: Machine Learning in Python
  Paper • 1201.0490 • Published • 1
- Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
  Paper • 1406.1078 • Published
- Distributed Representations of Sentences and Documents
  Paper • 1405.4053 • Published

- Attention Is All You Need
  Paper • 1706.03762 • Published • 42
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 14
- RoBERTa: A Robustly Optimized BERT Pretraining Approach
  Paper • 1907.11692 • Published • 7
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 14

- Attention Is All You Need
  Paper • 1706.03762 • Published • 42
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 11
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
  Paper • 2201.11903 • Published • 9
- Orca 2: Teaching Small Language Models How to Reason
  Paper • 2311.11045 • Published • 70