---
license: apache-2.0
task_categories:
  - question-answering
  - summarization
  - text-generation
language:
  - en
size_categories:
  - 1K<n<10K
configs:
  - config_name: L-CiteEval-Data_narrativeqa
    data_files:
      - split: test
        path: L-CiteEval-Data/narrativeqa.json
  - config_name: L-CiteEval-Data_natural_questions
    data_files:
      - split: test
        path: L-CiteEval-Data/natural_questions.json
  - config_name: L-CiteEval-Data_hotpotqa
    data_files:
      - split: test
        path: L-CiteEval-Data/hotpotqa.json
  - config_name: L-CiteEval-Data_2wikimultihopqa
    data_files:
      - split: test
        path: L-CiteEval-Data/2wikimultihopqa.json
  - config_name: L-CiteEval-Data_gov_report
    data_files:
      - split: test
        path: L-CiteEval-Data/gov_report.json
  - config_name: L-CiteEval-Data_multi_news
    data_files:
      - split: test
        path: L-CiteEval-Data/multi_news.json
  - config_name: L-CiteEval-Data_qmsum
    data_files:
      - split: test
        path: L-CiteEval-Data/qmsum.json
  - config_name: L-CiteEval-Data_locomo
    data_files:
      - split: test
        path: L-CiteEval-Data/locomo.json
  - config_name: L-CiteEval-Data_dialsim
    data_files:
      - split: test
        path: L-CiteEval-Data/dialsim.json
  - config_name: L-CiteEval-Data_niah
    data_files:
      - split: test
        path: L-CiteEval-Data/niah.json
  - config_name: L-CiteEval-Data_counting_stars
    data_files:
      - split: test
        path: L-CiteEval-Data/counting_stars.json
  - config_name: L-CiteEval-Length_narrativeqa
    data_files:
      - split: test
        path: L-CiteEval-Length/narrativeqa.json
  - config_name: L-CiteEval-Length_hotpotqa
    data_files:
      - split: test
        path: L-CiteEval-Length/hotpotqa.json
  - config_name: L-CiteEval-Length_gov_report
    data_files:
      - split: test
        path: L-CiteEval-Length/gov_report.json
  - config_name: L-CiteEval-Length_locomo
    data_files:
      - split: test
        path: L-CiteEval-Length/locomo.json
  - config_name: L-CiteEval-Length_counting_stars
    data_files:
      - split: test
        path: L-CiteEval-Length/counting_stars.json
  - config_name: L-CiteEval-Hardness_narrativeqa
    data_files:
      - split: test
        path: L-CiteEval-Hardness/narrativeqa.json
  - config_name: L-CiteEval-Hardness_hotpotqa
    data_files:
      - split: test
        path: L-CiteEval-Hardness/hotpotqa.json
  - config_name: L-CiteEval-Hardness_gov_report
    data_files:
      - split: test
        path: L-CiteEval-Hardness/gov_report.json
  - config_name: L-CiteEval-Hardness_locomo
    data_files:
      - split: test
        path: L-CiteEval-Hardness/locomo.json
  - config_name: L-CiteEval-Hardness_counting_stars
    data_files:
      - split: test
        path: L-CiteEval-Hardness/counting_stars.json
---

# L-CiteEval: Do Long-Context Models Truly Leverage Context for Responding?

Paper: arXiv · Code: GitHub · Blog: Zhihu

## Benchmark Quickview

L-CiteEval is a multi-task benchmark for long-context understanding with citations. It covers 5 task categories (single-document question answering, multi-document question answering, summarization, dialogue understanding, and synthetic tasks), encompassing 11 different long-context tasks with context lengths ranging from 8K to 48K tokens.

## Data Preparation

### Load Data

```python
from datasets import load_dataset

# All 11 tasks in the main L-CiteEval benchmark
datasets = ["narrativeqa", "natural_questions", "hotpotqa", "2wikimultihopqa", "gov_report", "multi_news", "qmsum", "locomo", "dialsim", "counting_stars", "niah"]
# L-CiteEval-Length and L-CiteEval-Hardness cover a 5-task subset
length_hardness_datasets = ["narrativeqa", "hotpotqa", "gov_report", "locomo", "counting_stars"]

for dataset in datasets:
    ### Load L-CiteEval
    data = load_dataset('Jonaszky123/L-CiteEval', f"L-CiteEval-Data_{dataset}")

for dataset in length_hardness_datasets:
    ### Load L-CiteEval-Length
    data = load_dataset('Jonaszky123/L-CiteEval', f"L-CiteEval-Length_{dataset}")

    ### Load L-CiteEval-Hardness
    data = load_dataset('Jonaszky123/L-CiteEval', f"L-CiteEval-Hardness_{dataset}")
```

All data in L-CiteEval follows the format below:

```json
{
    "id": "The identifier for the data entry",
    "question": "The task question, such as for single-document QA. In summarization tasks, this may be omitted",
    "answer": "The correct or expected answer to the question, used for evaluating correctness",
    "docs": "Context divided into fixed-length chunks",
    "length": "The context length",
    "hardness": "The level of difficulty in L-CiteEval-Hardness, which can be easy, medium, or hard"
}
```
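
For instance, a single entry can be read field by field as sketched below. This is a minimal sketch assuming the field names listed above; the availability of some fields (e.g. `question` for summarization tasks, `hardness` outside L-CiteEval-Hardness) depends on the subset.

```python
from datasets import load_dataset

data = load_dataset('Jonaszky123/L-CiteEval', "L-CiteEval-Hardness_narrativeqa")
example = data["test"][0]

print(example["id"])        # identifier of the entry
print(example["answer"])    # gold answer used for correctness evaluation
print(example["length"])    # context length of the entry
print(example["hardness"])  # easy / medium / hard (L-CiteEval-Hardness only)

# "docs" is assumed here to be a list of fixed-length text chunks
for i, chunk in enumerate(example["docs"][:3]):
    print(f"[{i + 1}] {chunk[:80]}...")
```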

You can find the evaluation code in our GitHub repository.
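
Purely as an illustration, assuming the model is prompted to cite context chunks with bracketed markers such as `[1]` (the exact citation format is defined by the official prompts and evaluation scripts), extracting the cited chunk indices from a model response could look like this sketch:

```python
import re

def extract_citations(response: str) -> list[int]:
    """Return chunk indices cited with bracketed markers like [3].
    Illustrative sketch only, not the official L-CiteEval parser."""
    return [int(m) for m in re.findall(r"\[(\d+)\]", response)]

print(extract_citations("The treaty was signed in 1848 [2][5]."))  # [2, 5]
```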

## Citation

If you find our work helpful, please cite our paper:

```bibtex
@misc{tang2024lciteeval,
    title={L-CiteEval: Do Long-Context Models Truly Leverage Context for Responding?},
    author={Zecheng Tang and Keyan Zhou and Juntao Li and Baibei Ji and Jianye Hou and Min Zhang},
    year={2024},
    eprint={2410.02115},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```