wonhosong committed on
Commit
593d1eb
1 Parent(s): 32c6e9c

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -74,7 +74,7 @@ We evaluated our model on four benchmark datasets, which include `ARC-Challenge`
 We used the [lm-evaluation-harness repository](https://github.com/EleutherAI/lm-evaluation-harness), specifically commit [b281b0921b636bc36ad05c0b0b0763bd6dd43463](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463).

 ### Main Results
-| Model | H4 Avg | ARC | HellaSwag | MMLU | TruthfulQA | | MT_Bench |
+| Model | H4(Avg) | ARC | HellaSwag | MMLU | TruthfulQA | | MT_Bench |
 |--------------------------------------------------------------------|----------|----------|----------|------|----------|-|-------------|
 | **Llama-2-70b-instruct-v2** (***Ours***, ***Local Reproduction***) | **72.7** | **71.6** | **87.7** | 69.7 | **61.6** | | **7.44063** |
 | Llama-2-70b-instruct (Ours, Open LLM Leaderboard) | 72.3 | 70.9 | 87.5 | **69.8** | 61 | | 7.24375 |
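
For context on the pinned harness commit referenced above, here is a minimal sketch of how the four H4 benchmarks could be re-run against it. It assumes the `evaluator.simple_evaluate` API as it existed in lm-evaluation-harness at that time, the Open LLM Leaderboard few-shot settings (25/10/5/0), and an illustrative model id; none of these details are stated in this commit itself.

```python
# Minimal sketch (not from this repo): re-running the H4 benchmarks with the
# pinned lm-evaluation-harness commit. Assumes the evaluator.simple_evaluate
# API of that era; the model id and MMLU subtask below are illustrative.
from lm_eval import evaluator

# Few-shot counts follow the Open LLM Leaderboard conventions.
# MMLU is split into many hendrycksTest-* subtasks in this harness version;
# a single subtask stands in here for the full set.
TASK_FEWSHOT = {
    "arc_challenge": 25,
    "hellaswag": 10,
    "hendrycksTest-abstract_algebra": 5,
    "truthfulqa_mc": 0,
}

for task, shots in TASK_FEWSHOT.items():
    results = evaluator.simple_evaluate(
        model="hf-causal-experimental",  # HF transformers backend in this harness version
        model_args="pretrained=upstage/Llama-2-70b-instruct-v2",  # assumed model id
        tasks=[task],
        num_fewshot=shots,
        batch_size=1,
    )
    print(task, results["results"][task])  # per-task metrics, e.g. acc / acc_norm
```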