Adding Evaluation Results

#3
Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -11,4 +11,17 @@ Alpacino-13B + LLaMa-SuperCOT-13B (50%/50%)
 ## Original Models:
 Alpacino-13B: https://huggingface.co/digitous/Alpacino13b
 
-LLaMa-SuperCOT-13B: https://huggingface.co/ausboss/llama-13b-supercot
+LLaMa-SuperCOT-13B: https://huggingface.co/ausboss/llama-13b-supercot
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_xzuyn__Alpacino-SuperCOT-13B)
+
+| Metric               | Value |
+|----------------------|-------|
+| Avg.                 | 46.8  |
+| ARC (25-shot)        | 58.36 |
+| HellaSwag (10-shot)  | 81.69 |
+| MMLU (5-shot)        | 47.89 |
+| TruthfulQA (0-shot)  | 45.42 |
+| Winogrande (5-shot)  | 76.95 |
+| GSM8K (5-shot)       | 7.51  |
+| DROP (3-shot)        | 9.78  |
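
For reference (not part of the diff above): the reported Avg. is consistent with the unweighted mean of the seven benchmark scores. A minimal Python sanity check, with the values copied from the table; the `scores` dict is illustrative, not anything from the leaderboard's own tooling:

```python
# Sanity check: "Avg." as the unweighted mean of the seven benchmark scores.
# Values are copied from the table added in this PR.
scores = {
    "ARC (25-shot)": 58.36,
    "HellaSwag (10-shot)": 81.69,
    "MMLU (5-shot)": 47.89,
    "TruthfulQA (0-shot)": 45.42,
    "Winogrande (5-shot)": 76.95,
    "GSM8K (5-shot)": 7.51,
    "DROP (3-shot)": 9.78,
}

avg = sum(scores.values()) / len(scores)
print(f"{avg:.1f}")  # 46.8 -- matches the reported Avg.
```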