Bazsalanszky committed
Commit
d02447f
1 Parent(s): d044810
Files changed (2)
  1. README.md +1 -1
  2. src/about.py +13 -4
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-title: Hunbench Test
+title: HunEval
 emoji: 🥇
 colorFrom: green
 colorTo: indigo
src/about.py CHANGED
@@ -26,18 +26,20 @@ NUM_FEWSHOT = 0 # Change with your few shot
 
 
 # Your leaderboard name
-TITLE = """<h1 align="center" id="space-title">HunBench leaderboard</h1>"""
+TITLE = """<h1 align="center" id="space-title">HunEval leaderboard</h1>"""
 
 # What does your leaderboard evaluate?
 INTRODUCTION_TEXT = """
-This leaderboard evaluates the performance of models on the HunBench benchmark. The goal of this benchmark is to evaluate the performance of models on tasks that require a good understanding of the Hungarian language. The benchmark has two key parts. The first one aims to capture the language understanding capabilities of the model, while the second one focuses on the knowledge of the model. The benchmark is divided into several tasks, each evaluating a different aspect of the model's performance. The leaderboard is sorted by the average score of the model on all tasks.
+This leaderboard evaluates the performance of models on the HunEval benchmark. The goal of this benchmark is to evaluate models on tasks that require a good understanding of the Hungarian language. The benchmark has two key parts: the first aims to capture the language understanding capabilities of the model, while the second focuses on the model's knowledge. The benchmark is divided into several tasks, each evaluating a different aspect of the model's performance. While designing the benchmark, we aimed to create tasks that are easy, if not obvious, for a native Hungarian speaker or someone who has lived in Hungary for a long time, but might be challenging for a model that has not been trained on Hungarian data. This means that if a model was trained on Hungarian data, it should perform well on the benchmark, but if it was not, it might struggle.
 """
 
 # Which evaluations are you running? how can people reproduce what you have?
 LLM_BENCHMARKS_TEXT = """
 ## How it works
-TODO
-## Reproducibility
+The benchmark is divided into several tasks, including: history and logic (testing the knowledge of the models), and grammar, sayings, spelling, and vocabulary (testing the language understanding capabilities of the models). Each task contains an instruction or question, and a set of four possible answers. The model is given a system
+prompt, which aims to add CoT reasoning before providing an answer. This improves the results for most of the models, while also making the benchmark more consistent.
+
+## Reproducing the results
 TODO
 
 """
@@ -47,5 +49,12 @@ TODO
 """
 
 CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results"
+# Citation text for HunEval by Balázs Ádám Toldi, 2024, in progress
 CITATION_BUTTON_TEXT = r"""
+@misc{toldi2024huneval,
+  title={HunEval},
+  author={Balázs Ádám Toldi},
+  year={2024},
+  howpublished={\url{https://huggingface.co/spaces/Bazsalanszky/huneval}}
+}
 """