codelion committed on
Commit d09b442
1 Parent(s): 859ea6b

Update README.md

Files changed (1)
  1. README.md +47 -2
README.md CHANGED
@@ -68,18 +68,63 @@ weights = {
  At the end of evaluation the script will print the metrics and store the entire run in a log file. If you want to add your model to the
  leaderboard please create a PR with the log file of the run and details about the model.
 
- If we use the existing README.md files in the repositories as the golden output, we would get a score of 56.6 on this benchmark.
+ If we use the existing README.md files in the repositories as the golden output, we would get a score of 56.79 on this benchmark.
  We can validate it by running the evaluation script with the `--oracle` flag.
  The oracle run log is available [here](oracle_results_20240912_155859.log).
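
As an aside for readers of this diff: the hunk header above shows that this part of the README sits just below a `weights = {` block, and the run logs quoted in the diff print each metric on a 0-1 scale together with a `weighted_score`. Below is a minimal sketch of how such a weighted combination can be computed; the weight values and the `compute_weighted_score` helper are illustrative assumptions, not the benchmark's actual configuration.

```python
# Minimal sketch of combining per-metric scores into a weighted score.
# NOTE: the metric names match the run logs quoted in this diff, but the
# weight values below are placeholders, not the benchmark's real `weights` dict.
weights = {
    "bleu": 0.1,
    "rouge-1": 0.1,
    "rouge-2": 0.1,
    "rouge-l": 0.1,
    "cosine_similarity": 0.1,
    "structural_similarity": 0.1,
    "information_retrieval": 0.2,
    "code_consistency": 0.1,
    "readability": 0.1,
}

def compute_weighted_score(metrics: dict[str, float]) -> float:
    """Weighted sum of the individual 0-1 metrics."""
    return sum(weights[name] * metrics.get(name, 0.0) for name in weights)

# Example input: the raw metrics quoted later in this diff (these values
# correspond to the llama3.1-8b-instruct row of the leaderboard).
example_metrics = {
    "bleu": 0.0072, "rouge-1": 0.1196, "rouge-2": 0.0169, "rouge-l": 0.1151,
    "cosine_similarity": 0.3029, "structural_similarity": 0.2416,
    "information_retrieval": 0.4450, "code_consistency": 0.0796,
    "readability": 0.3790,
}
# The log that prints these metrics reports weighted_score: 0.2443; with the
# placeholder weights above the result will differ.
print(round(compute_weighted_score(example_metrics), 4))
```
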
 
  # Leaderboard
 
+ The current SOTA model on this benchmark in the zero-shot setting is **Gemini-1.5-Flash-Exp-0827**.
+ It scores the highest across a number of different metrics.
+
+ bleu: 0.0072
+ rouge-1: 0.1196
+ rouge-2: 0.0169
+ rouge-l: 0.1151
+ cosine_similarity: 0.3029
+ structural_similarity: 0.2416
+ information_retrieval: 0.4450
+ code_consistency: 0.0796
+ readability: 0.3790
+ weighted_score: 0.2443
+
  | Model | Score | BLEU | ROUGE-1 | ROUGE-2 | ROUGE-l | Cosine-Sim | Structural-Sim | Info-Ret | Code-Consistency | Readability | Logs |
  |:-----:|:-----:|:----:|:-------:|:-------:|:-------:|:----------:|:--------------:|:--------:|:----------------:|:-----------:|:----:|
+ | llama3.1-8b-instruct | 24.43 | 0.72 | 11.96 | 1.69 | 11.51 | 30.29 | 24.16 | 44.50 | 7.96 | 37.90 | [link](llama3.1-8b-instruct-fp16_results_20240912_185437.log) |
  | mistral-nemo-instruct-2407 | 25.62 | 1.09 | 11.24 | 1.70 | 10.94 | 26.62 | 24.26 | 52.00 | **8.80** | 37.30 | [link](mistral-nemo-12b-instruct-2407-fp16_results_20240912_182234.log) |
  | gpt-4o-mini-2024-07-18 | 32.16 | 1.64 | 15.46 | 3.85 | 14.84 | 40.57 | 23.81 | 72.50 | 4.77 | 44.81 | [link](gpt-4o-mini-2024-07-18_results_20240912_161045.log) |
  | gpt-4o-2024-08-06 | 33.13 | 1.68 | 15.36 | 3.59 | 14.81 | 40.00 | 23.91 | 74.50 | 8.36 | 44.33 | [link](gpt-4o-2024-08-06_results_20240912_155645.log) |
  | gemini-1.5-flash-8b-exp-0827 | 32.12 | 1.36 | 14.66 | 3.31 | 14.14 | 38.31 | 23.00 | 70.00 | 7.43 | **46.47** | [link](gemini-1.5-flash-8b-exp-0827_results_20240912_134026.log) |
  | **gemini-1.5-flash-exp-0827** | **33.43** | 1.66 | **16.00** | 3.88 | **15.33** | **41.87** | 23.59 | **76.50** | 7.86 | 43.34 | [link](gemini-1.5-flash-exp-0827_results_20240912_144919.log) |
  | gemini-1.5-pro-exp-0827 | 32.51 | **2.55** | 15.27 | **4.97** | 14.86 | 41.09 | **23.94** | 72.82 | 6.73 | 43.34 | [link](gemini-1.5-pro-exp-0827_results_20240912_141225.log) |
- | oracle-score | 56.79 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 98.24 | 59.00 | 11.01 | 14.84 | [link](oracle_results_20240912_155859.log) |
+ | oracle-score | 56.79 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 98.24 | 59.00 | 11.01 | 14.84 | [link](oracle_results_20240912_155859.log) |
+
+ ## Few-Shot
+
+ This benchmark is interesting because it is not that easy to few-shot your way to better performance. There are a couple of reasons for that:
+
+ 1) The average context length required for each item can be up to 100k tokens, which puts it out of reach of most
+ models except Google Gemini, which has a context length of up to 2 million tokens.
+
+ 2) There is a trade-off in accuracy inherent in the benchmark, as adding more examples makes some of the metrics like `information_retrieval`
+ and `readability` worse. At larger contexts, models do not have perfect recall and may miss important information.
+
+ Our experiments with few-shot prompts confirm this; the 5-shot run, for example, produced the following metrics:
+
+ bleu: 0.1924
+ rouge-1: 0.3231
+ rouge-2: 0.2148
+ rouge-l: 0.3174
+ cosine_similarity: 0.6149
+ structural_similarity: 0.3317
+ information_retrieval: 0.5950
+ code_consistency: 0.1148
+ readability: 0.2765
+ weighted_score: 0.3397
+
+ | Model | Score | BLEU | ROUGE-1 | ROUGE-2 | ROUGE-l | Cosine-Sim | Structural-Sim | Info-Ret | Code-Consistency | Readability | Logs |
+ |:-----:|:-----:|:----:|:-------:|:-------:|:-------:|:----------:|:--------------:|:--------:|:----------------:|:-----------:|:----:|
+ | 0-shot-gemini-1.5-flash-exp-0827 | 33.43 | 1.66 | 16.00 | 3.88 | 15.33 | 41.87 | 23.59 | 76.50 | 7.86 | 43.34 | [link](gemini-1.5-flash-exp-0827_results_20240912_144919.log) |
+ | 1-shot-gemini-1.5-flash-exp-0827 | 35.40 | 21.81 | 34.00 | 24.97 | 33.61 | 61.53 | 37.60 | 61.00 | 12.89 | 27.22 | [link](1-shot-gemini-1.5-flash-exp-0827_results_20240912_183343.log) |
+ | 3-shot-gemini-1.5-flash-exp-0827 | 33.43 | 1.66 | 16.00 | 3.88 | 15.33 | 41.87 | 23.59 | 76.50 | 7.86 | 43.34 | [link](gemini-1.5-flash-exp-0827_results_20240912_144919.log) |
+ | 5-shot-gemini-1.5-flash-exp-0827 | 33.97 | 19.24 | 32.31 | 21.48 | 31.74 | 61.49 | 33.17 | 59.50 | 11.48 | 27.65 | [link](5-shot-gemini-1.5-flash-exp-0827_results_20240912_180343.log) |
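
A note for readers cross-checking the numbers in this diff: the metrics printed in the run logs (as quoted above) are on a 0-1 scale, while the leaderboard tables report them as percentages. The sketch below is an illustrative helper for turning a parsed log into a table row; the `to_leaderboard_row` function and the `COLUMNS` list are assumptions for formatting convenience, not part of the benchmark's tooling.

```python
# Illustrative helper: convert the 0-1 metrics printed in a run log into a
# leaderboard row like the ones above (values reported as percentages).
# The column order mirrors the tables in this README; the helper itself is
# not part of the benchmark's tooling.
COLUMNS = [
    "weighted_score", "bleu", "rouge-1", "rouge-2", "rouge-l",
    "cosine_similarity", "structural_similarity", "information_retrieval",
    "code_consistency", "readability",
]

def to_leaderboard_row(model: str, metrics: dict[str, float], log_file: str) -> str:
    """Format one model's metrics as a markdown leaderboard row."""
    cells = [f"{metrics[c] * 100:.2f}" for c in COLUMNS]
    return "| " + " | ".join([model, *cells, f"[link]({log_file})"]) + " |"

# Example with the 5-shot metrics quoted in this diff.
five_shot = {
    "bleu": 0.1924, "rouge-1": 0.3231, "rouge-2": 0.2148, "rouge-l": 0.3174,
    "cosine_similarity": 0.6149, "structural_similarity": 0.3317,
    "information_retrieval": 0.5950, "code_consistency": 0.1148,
    "readability": 0.2765, "weighted_score": 0.3397,
}
print(to_leaderboard_row(
    "5-shot-gemini-1.5-flash-exp-0827", five_shot,
    "5-shot-gemini-1.5-flash-exp-0827_results_20240912_180343.log",
))
```

Running the example prints a row whose values match the 5-shot entry in the few-shot table above.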