---
license: apache-2.0
task_categories:
- question-answering
- summarization
- text-generation
language:
- en
size_categories:
- 1K<n<10K
---

**Github**   **Zhihu** [![Zhihu](https://img.shields.io/badge/知乎-0079FF.svg?style=plastic&logo=zhihu&logoColor=white)](https://zhuanlan.zhihu.com/p/817442176)

## Benchmark Quickview

*L-CiteEval* is a multi-task long-context understanding benchmark with citations. It covers **5 task categories** (single-document question answering, multi-document question answering, summarization, dialogue understanding, and synthetic tasks) and encompasses **11 different long-context tasks**, with context lengths ranging from **8K to 48K** tokens.

![](assets/dataset.png)

## Data Preparation

#### Load Data

```
from datasets import load_dataset

datasets = ["narrativeqa", "natural_questions", "hotpotqa", "2wikimultihopqa", "gov_report", "multi_news", "qmsum", "locomo", "dialsim", "counting_stars", "niah"]

for dataset in datasets:
    ### Load L-CiteEval
    data = load_dataset('Jonaszky123/L-CiteEval', f"L-CiteEval-Data_{dataset}")
    ### Load L-CiteEval-Length
    data = load_dataset('Jonaszky123/L-CiteEval', f"L-CiteEval-Length_{dataset}")
    ### Load L-CiteEval-Hardness
    data = load_dataset('Jonaszky123/L-CiteEval', f"L-CiteEval-Hardness_{dataset}")
```

All data in L-CiteEval follows the format below:

```
{
    "id": "The identifier for the data entry",
    "question": "The task question, such as for single-document QA. In summarization tasks, this may be omitted",
    "answer": "The correct or expected answer to the question, used for evaluating correctness",
    "docs": "Context divided into fixed-length chunks",
    "length": "The context length",
    "hardness": "The level of difficulty in L-CiteEval-Hardness, which can be easy, medium, or hard"
}
```

You can find the evaluation code in our GitHub repository.

## Citation

If you find our work helpful, please cite our paper:

```
@misc{tang2024lciteeval,
      title={L-CiteEval: Do Long-Context Models Truly Leverage Context for Responding?},
      author={Zecheng Tang and Keyan Zhou and Juntao Li and Baibei Ji and Jianye Hou and Min Zhang},
      year={2024},
      eprint={2410.02115},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
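
As a quick sanity check of the data format above, here is a minimal sketch (not part of the official evaluation code) that loads one subset and prints the documented fields. The choice of the `narrativeqa` subset and the use of the first available split are illustrative assumptions.

```
from datasets import load_dataset

# Minimal sketch: load a single L-CiteEval subset and inspect the fields
# described in the data format section. The subset name and split handling
# here are assumptions for illustration, not the official evaluation setup.
data = load_dataset('Jonaszky123/L-CiteEval', "L-CiteEval-Data_narrativeqa")
split = list(data.keys())[0]            # take whichever split the subset exposes
sample = data[split][0]

print("id:", sample["id"])
print("question:", sample["question"])  # may be omitted for summarization tasks
print("answer:", sample["answer"])
print("context chunks:", len(sample["docs"]))
print("context length:", sample["length"])
```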