# llama2_7b_code

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 42.81 |
| ARC (25-shot) | 52.13 |
| HellaSwag (10-shot) | 75.71 |
| MMLU (5-shot) | 48.05 |
| TruthfulQA (0-shot) | 38.76 |
| Winogrande (5-shot) | 71.51 |
| GSM8K (5-shot) | 8.11 |
| DROP (3-shot) | 5.39 |
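
The reported Avg. is consistent with the unweighted mean of the seven benchmark scores above. A minimal sketch of that arithmetic follows; the `scores` dictionary simply restates the table and is not pulled from any leaderboard API.

```python
# Unweighted mean of the seven benchmark scores listed in the table above.
scores = {
    "ARC (25-shot)": 52.13,
    "HellaSwag (10-shot)": 75.71,
    "MMLU (5-shot)": 48.05,
    "TruthfulQA (0-shot)": 38.76,
    "Winogrande (5-shot)": 71.51,
    "GSM8K (5-shot)": 8.11,
    "DROP (3-shot)": 5.39,
}

avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # Avg. = 42.81
```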