from dataclasses import dataclass
from enum import Enum


@dataclass
class Task:
    benchmark: str
    metric: str
    col_name: str


# Select your tasks here
# ---------------------------------------------------
class Tasks(Enum):
    # task_key in the json file, metric_key in the json file, name to display in the leaderboard
    task0 = Task("history", "score", "History")
    task1 = Task("grammar", "score", "Grammar")
    task2 = Task("logic", "score", "Logic")
    task3 = Task("sayings", "score", "Sayings")
    task4 = Task("spelling", "score", "Spelling")
    task5 = Task("vocabulary", "score", "Vocabulary")


NUM_FEWSHOT = 0  # Change with your few shot
# ---------------------------------------------------


# Your leaderboard name
TITLE = """

HunEval leaderboard

""" # What does your leaderboard evaluate? INTRODUCTION_TEXT = """ The HunEval leaderboard aims to evaluate models based on their proficiency in understanding and processing the Hungarian language. This benchmark focuses on two key areas: (1) linguistic comprehension, which measures a model's ability to interpret Hungarian text, and (2) knowledge-based tasks, which assess a model's familiarity with Hungarian history and cultural aspects. The benchmark includes multiple sub-tasks, each targeting a different facet of language understanding and knowledge. My goal was to create tasks that are straightforward for native Hungarian speakers or individuals with deep familiarity with the language, but potentially challenging for models without specific training on Hungarian data. I expect models trained on Hungarian datasets to perform well, while those without such training may struggle. A strong performance on this benchmark indicates proficiency in Hungarian language structures and knowledge, but not necessarily expertise in specific tasks. **Please note that this benchmark is a Proof of Concept and not a comprehensive evaluation of a model's capabilities.** I invite participants to engage with the benchmark and provide feedback for future improvements. """ # Which evaluations are you running? how can people reproduce what you have? LLM_BENCHMARKS_TEXT = """ ## How it works The benhmark is devided into several tasks, including: history, logic (testing the knowledge of the models), grammar, sayings, spelling, and vocabulary (testing the language understanding capabilities of the models). Each task contains an instruction or question, and a set of four possible answers. The model is given a system prompt, which aims to add CoT reasoning before providing an answer. This makes the improves the results for most of the models, while also making the benchmark more consistent. An answer is considered correct if it matches the correct answer in the set of possible answers. The task is given to the model three times. If it answers correctly at least once, it is considered correct. The final score is the number of correct answers divided by the number of tasks. To run the evaluation, I gave the model 2048 tokens to generate the answer and 0.0 was used as the temperature. ## Reproducing the results TODO ## Evaluation In the current version of the benchmark, some models, (ones that were most likely trained on Hungarian data) perform very well (maybe a bit too well?), while others, (ones that were not trained on Hungarian data) perform poorly. This may indicate that in the future, more challenging tasks should be added to the benchmark to make it a more accurate representation of the models' capabilities in Hungarian language understanding and knowledge. """ EVALUATION_QUEUE_TEXT = """ TODO """ CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results" # Citation text for HunEval by Balázs Ádám Toldi, 2024, inprogress CITATION_BUTTON_TEXT = r""" @misc{toldi2024huneval, title={HunEval}, author={Balázs Ádám Toldi}, year={2024}, howpublished={\url{https://huggingface.co/spaces/Bazsalanszky/huneval}} } """