from dataclasses import dataclass
from enum import Enum


@dataclass
class Task:
    benchmark: str
    metric: str
    col_name: str


# Select your tasks here
# ---------------------------------------------------
class Tasks(Enum):
    # task_key in the json file, metric_key in the json file, name to display in the leaderboard
    task0 = Task("history", "score", "History")
    task1 = Task("grammar", "score", "Grammar")
    task2 = Task("logic", "score", "Logic")
    task3 = Task("sayings", "score", "Sayings")
    task4 = Task("spelling", "score", "Spelling")
    task5 = Task("vocabulary", "score", "Vocabulary")
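

# Illustrative sketch (not used by the leaderboard code itself): how a task's
# score could be looked up in a parsed results file. The assumed layout
# {"results": {"history": {"score": 0.7}, ...}} is an assumption for the
# example, not the actual HunEval results format.
def get_task_score(results: dict, task: Task) -> float:
    """Return the metric value for one task from a parsed results dict."""
    return results["results"][task.benchmark][task.metric]
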
NUM_FEWSHOT = 0  # Number of few-shot examples used in the evaluation
# ---------------------------------------------------
# Your leaderboard name
TITLE = """<h1 align="center" id="space-title">HunEval leaderboard</h1>"""
# What does your leaderboard evaluate?
INTRODUCTION_TEXT = """
This leaderboard evaluates the performance of models on the HunEval benchmark. The goal of this benchmark is to measure how well models handle tasks that require a good understanding of the Hungarian language. The benchmark has two key parts: the first aims to capture the language understanding capabilities of the model, while the second focuses on the model's knowledge. It is divided into several tasks, each evaluating a different aspect of the model's performance.

While designing the benchmark, we aimed to create tasks that are easy, if not obvious, for a native Hungarian speaker or for someone who has lived in Hungary for a long time, but may be challenging for a model that has not been trained on Hungarian data. This means that a model trained on Hungarian data should perform well on the benchmark, while a model that was not will likely struggle.
"""
# Which evaluations are you running? how can people reproduce what you have?
LLM_BENCHMARKS_TEXT = """
## How it works
The benchmark is divided into several tasks: history and logic (testing the knowledge of the models), and grammar, sayings, spelling, and vocabulary (testing their language understanding capabilities). Each task contains an instruction or question and a set of four possible answers. The model is given a system prompt that asks it to produce chain-of-thought (CoT) reasoning before giving an answer. This improves the results for most models, while also making the benchmark more consistent.
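
For illustration, a single task item can be thought of as a question with four options and one correct answer. The sketch below is only a mental model; the field names are assumptions, not the actual HunEval data schema:

```python
# Illustrative only — field names are assumptions, not the real data format.
example_item = {
    "question": "…",                               # the instruction or question text
    "options": ["A) …", "B) …", "C) …", "D) …"],   # the four possible answers
    "answer": "B",                                 # label of the correct option
}
```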
## Reproducing the results
TODO
"""
EVALUATION_QUEUE_TEXT = """
TODO
"""
CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results"
# Citation text for HunEval by Balázs Ádám Toldi, 2024, in progress
CITATION_BUTTON_TEXT = r"""
@misc{toldi2024huneval,
  title={HunEval},
  author={Balázs Ádám Toldi},
  year={2024},
  howpublished={\url{https://huggingface.co/spaces/Bazsalanszky/huneval}}
}
"""