- Addition is All You Need for Energy-efficient Language Models
  Paper • 2410.00907 • Published • 129
- Emu3: Next-Token Prediction is All You Need
  Paper • 2409.18869 • Published • 81
- An accurate detection is not all you need to combat label noise in web-noisy datasets
  Paper • 2407.05528 • Published • 3
- Is It Really Long Context if All You Need Is Retrieval? Towards Genuinely Difficult Long Context NLP
  Paper • 2407.00402 • Published • 22
Collections including paper arxiv:2404.07965
- Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models
  Paper • 2310.04406 • Published • 8
- Chain-of-Thought Reasoning Without Prompting
  Paper • 2402.10200 • Published • 99
- ICDPO: Effectively Borrowing Alignment Capability of Others via In-context Direct Preference Optimization
  Paper • 2402.09320 • Published • 6
- Self-Discover: Large Language Models Self-Compose Reasoning Structures
  Paper • 2402.03620 • Published • 109

- Associative Recurrent Memory Transformer
  Paper • 2407.04841 • Published • 31
- Mixture-of-Agents Enhances Large Language Model Capabilities
  Paper • 2406.04692 • Published • 54
- Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality
  Paper • 2405.21060 • Published • 63
- Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone
  Paper • 2404.14219 • Published • 251

- Rho-1: Not All Tokens Are What You Need
  Paper • 2404.07965 • Published • 83
- VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time
  Paper • 2404.10667 • Published • 15
- Instruction-tuned Language Models are Better Knowledge Learners
  Paper • 2402.12847 • Published • 24
- DoRA: Weight-Decomposed Low-Rank Adaptation
  Paper • 2402.09353 • Published • 25

- Rho-1: Not All Tokens Are What You Need
  Paper • 2404.07965 • Published • 83
- LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders
  Paper • 2404.05961 • Published • 64
- Compression Represents Intelligence Linearly
  Paper • 2404.09937 • Published • 27
- Multi-Head Mixture-of-Experts
  Paper • 2404.15045 • Published • 59