---
license: apache-2.0
task_categories:
- question-answering
- summarization
- text-generation
language:
- en
size_categories:
- 1K<n<10K
---

## Data Format

All data in L-CiteEval follows the format below:

```
{
    "id": "The identifier for the data entry",
    "question": "The task question, e.g., for single-document QA; omitted in summarization tasks",
    "answer": "The correct or expected answer to the question, used for evaluating correctness",
    "docs": "The context, divided into fixed-length chunks",
    "length": "The context length",
    "hardness": "The difficulty level in L-CiteEval-Hardness, which can be easy, medium, or hard"
}
```

You can find the evaluation code in our GitHub repository.

## Citation

If you find our work helpful, please cite our paper:

```
@misc{tang2024lciteeval,
      title={L-CiteEval: Do Long-Context Models Truly Leverage Context for Responding?},
      author={Zecheng Tang and Keyan Zhou and Juntao Li and Baibei Ji and Jianye Hou and Min Zhang},
      year={2024},
      eprint={2410.02115},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
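
## Loading the Data

As a minimal sketch of working with records in the format above, the snippet below loads one subset with the Hugging Face `datasets` library and inspects the documented fields. The repository path `your-org/L-CiteEval` and the subset and split names are placeholders, not the dataset's actual identifiers.

```python
# A minimal loading sketch, not the official usage: the repository path
# "your-org/L-CiteEval" and the subset/split names below are placeholders.
from datasets import load_dataset

data = load_dataset("your-org/L-CiteEval", "narrativeqa", split="test")

sample = data[0]
print(sample["id"])            # identifier for the data entry
print(sample.get("question"))  # may be omitted for summarization tasks
print(sample["answer"])        # reference answer used for evaluating correctness
print(len(sample["docs"]))     # context is divided into fixed-length chunks
print(sample["length"])        # the context length
print(sample["hardness"])      # "easy", "medium", or "hard" (L-CiteEval-Hardness)
```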