Xclbr7 committed
Commit ae1ad33 • 1 Parent(s): 1b97e29

Update README.md

Files changed (1):
  1. README.md +13 -12
README.md CHANGED
@@ -119,6 +119,19 @@ Arcanum-12b is a merged large language model created by combining TheDrummer/Roc
  - **Parameter count:** ~12 billion
  - **Architecture specifics:** Transformer-based language model

+ ## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Xclbr7__Arcanum-12b)
+
+ | Metric             |Value|
+ |-------------------|----:|
+ |Avg.               |20.48|
+ |IFEval (0-Shot)    |29.07|
+ |BBH (3-Shot)       |31.88|
+ |MATH Lvl 5 (4-Shot)|10.27|
+ |GPQA (0-shot)      | 9.40|
+ |MuSR (0-shot)      |13.53|
+ |MMLU-PRO (5-shot)  |28.74|
+
  ## Training & Merging 🔄

  Arcanum-12b was created by merging two existing 12B models:
@@ -158,16 +171,4 @@ We acknowledge the contributions of the original model creators:

  Their work formed the foundation for Arcanum-12b.

- # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
- Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Xclbr7__Arcanum-12b)
-
- | Metric             |Value|
- |-------------------|----:|
- |Avg.               |20.48|
- |IFEval (0-Shot)    |29.07|
- |BBH (3-Shot)       |31.88|
- |MATH Lvl 5 (4-Shot)|10.27|
- |GPQA (0-shot)      | 9.40|
- |MuSR (0-shot)      |13.53|
- |MMLU-PRO (5-shot)  |28.74|

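For reference, the leaderboard's Avg. row appears to be the unweighted arithmetic mean of the six benchmark scores in the table being moved; a minimal Python check, with values copied from the table above:

```python
# Benchmark scores as reported in the Open LLM Leaderboard table above.
scores = {
    "IFEval (0-Shot)": 29.07,
    "BBH (3-Shot)": 31.88,
    "MATH Lvl 5 (4-Shot)": 10.27,
    "GPQA (0-shot)": 9.40,
    "MuSR (0-shot)": 13.53,
    "MMLU-PRO (5-shot)": 28.74,
}

# Unweighted mean over the six benchmarks.
avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # Avg. = 20.48, matching the reported value
```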
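Since the card describes Arcanum-12b as a ~12B transformer-based language model, it can presumably be loaded like any other Hugging Face checkpoint; a minimal sketch with 🤗 Transformers, assuming a causal (decoder-only) architecture and the repository id `Xclbr7/Arcanum-12b` (inferred from the leaderboard dataset slug `details_Xclbr7__Arcanum-12b`, not stated explicitly in this commit):

```python
# Minimal loading/generation sketch; the repo id below is an assumption
# inferred from the leaderboard dataset slug, not confirmed by this commit.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xclbr7/Arcanum-12b"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map needs `accelerate`

prompt = "Summarize what it means to merge two 12B language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```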