leaderboard-pr-bot committed
Commit: 08b4add
Parent: 0d4d537

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
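
Once a PR like this is merged, the `model-index` block it adds can be read back programmatically from the model card. Below is a minimal sketch, assuming `huggingface_hub` is installed and the PR has been merged; the repo id is taken from the leaderboard URLs in the diff, and `eval_results` is how `huggingface_hub` exposes `model-index` entries as far as I know:

```python
from huggingface_hub import ModelCard

# Load the model card (README.md, including its YAML front matter) from the Hub.
# Repo id taken from the evaluation-result URLs in this PR.
card = ModelCard.load("Solshine/Brimful-merged-replete")

# eval_results is parsed from the model-index block this PR adds;
# it is None if the card carries no model-index metadata.
for result in card.data.eval_results or []:
    print(f"{result.dataset_name}: {result.metric_name} = {result.metric_value}")
```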

Files changed (1)
  1. README.md +111 -3
README.md CHANGED
@@ -1,11 +1,105 @@
  ---
- base_model:
- - Replete-AI/Replete-LLM-V2.5-Qwen-7b
  library_name: transformers
  tags:
  - mergekit
  - merge
-
+ base_model:
+ - Replete-AI/Replete-LLM-V2.5-Qwen-7b
+ model-index:
+ - name: Brimful-merged-replete
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: IFEval (0-Shot)
+       type: HuggingFaceH4/ifeval
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: inst_level_strict_acc and prompt_level_strict_acc
+       value: 17.61
+       name: strict accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Solshine/Brimful-merged-replete
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: BBH (3-Shot)
+       type: BBH
+       args:
+         num_few_shot: 3
+     metrics:
+     - type: acc_norm
+       value: 1.99
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Solshine/Brimful-merged-replete
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MATH Lvl 5 (4-Shot)
+       type: hendrycks/competition_math
+       args:
+         num_few_shot: 4
+     metrics:
+     - type: exact_match
+       value: 0.0
+       name: exact match
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Solshine/Brimful-merged-replete
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GPQA (0-shot)
+       type: Idavidrein/gpqa
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 1.01
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Solshine/Brimful-merged-replete
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MuSR (0-shot)
+       type: TAUR-Lab/MuSR
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 1.43
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Solshine/Brimful-merged-replete
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU-PRO (5-shot)
+       type: TIGER-Lab/MMLU-Pro
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 0.94
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Solshine/Brimful-merged-replete
+       name: Open LLM Leaderboard
  ---
  # merge

@@ -177,3 +271,17 @@ slices:
  merge_method: passthrough
  dtype: float16
  ```
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Solshine__Brimful-merged-replete)
+
+ | Metric             |Value|
+ |--------------------|----:|
+ | Avg.               | 3.83|
+ | IFEval (0-Shot)    |17.61|
+ | BBH (3-Shot)       | 1.99|
+ | MATH Lvl 5 (4-Shot)| 0.00|
+ | GPQA (0-shot)      | 1.01|
+ | MuSR (0-shot)      | 1.43|
+ | MMLU-PRO (5-shot)  | 0.94|
+
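
The "Detailed results" link added by this diff points to a per-model details dataset on the Hub. Here is a minimal sketch for pulling those details down with the `datasets` library; the config and split layout of these details repos is an assumption on my part (it has changed between leaderboard versions), so list the configs before picking one:

```python
from datasets import get_dataset_config_names, load_dataset

# Details repo named in the README addition above.
repo = "open-llm-leaderboard/details_Solshine__Brimful-merged-replete"

# Assumption: each benchmark run is stored under its own config name.
configs = get_dataset_config_names(repo)
print(configs)

# Load one config and inspect its splits rather than hard-coding a split name.
details = load_dataset(repo, configs[0])
print(details)
```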