---
library_name: transformers
pipeline_tag: text-generation
---

# Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 25.12 |
| ARC (25-shot)       | 25.77 |
| HellaSwag (10-shot) | 25.67 |
| MMLU (5-shot)       | 27.0  |
| TruthfulQA (0-shot) | 48.21 |
| Winogrande (5-shot) | 49.17 |
| GSM8K (5-shot)      | 0.0   |
| DROP (3-shot)       | 0.0   |