Jonaszky123 committed
Commit f009860
Parent: 6a0289d

Upload Readme

Files changed (1): README.md (+2, -15)
README.md CHANGED
@@ -96,20 +96,7 @@ configs:
  ---
 
  # L-CITEEVAL: DO LONG-CONTEXT MODELS TRULY LEVERAGE CONTEXT FOR RESPONDING?
- <p>
- <strong>Paper</strong>
- <a href="https://arxiv.org/abs/2410.02115">
- <img src="https://img.shields.io/badge/arXiv-2410.02115-b31b1b.svg?style=plastic" style="vertical-align: middle;">
- </a>
- &nbsp; <strong>Github</strong>
- <a href="https://github.com/ZetangForward/L-CITEEVAL">
- <img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="Github" width="40" height="40" style="vertical-align: middle;">
- </a>
- &nbsp; <strong>Zhihu</strong>
- <a href="https://zhuanlan.zhihu.com/p/817442176">
- <img src="https://img.shields.io/badge/知乎-0079FF.svg?style=plastic&logo=zhihu&logoColor=white" style="vertical-align: middle;">
- </a>
- </p>
+ **Paper** [![arXiv](https://img.shields.io/badge/arXiv-2410.02115-b31b1b.svg?style=plastic)](https://arxiv.org/abs/2410.02115) &nbsp; **Github** <a href="https://github.com/ZetangForward/L-CITEEVAL"><img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="Github" width="40" height="40"></a> &nbsp; **Zhihu** [![Zhihu](https://img.shields.io/badge/知乎-0079FF.svg?style=plastic&logo=zhihu&logoColor=white)](https://zhuanlan.zhihu.com/p/817442176)
 
  ## Benchmark Quickview
  *L-CiteEval* is a multi-task long-context understanding benchmark with citations, covering **5 task categories**, including single-document question answering, multi-document question answering, summarization, dialogue understanding, and synthetic tasks, and encompassing **11 different long-context tasks**. The context lengths for these tasks range from **8K to 48K**.
@@ -144,7 +131,7 @@ All data in L-CiteEval follows the format below:
  "answer": "The correct or expected answer to the question, used for evaluating correctness",
  "docs": "Context divided into fixed-length chunks"
  "length": "The context length"
- "hardness": "The level of diffulty in L-CiteEval-Hardness, which can be easy, medium and hard"
+ "hardness": "The level of difficulty in L-CiteEval-Hardness, which can be easy, medium and hard"
  }
  ```
 
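For reference, the record layout documented in the second hunk maps directly onto a few lines of Python. The sketch below is illustrative only: the dataset id `Jonaszky123/L-CiteEval` is inferred from the committer name, and the config name and split are hypothetical placeholders; the `configs:` section of the README lists the real ones.

```python
# Minimal sketch of inspecting one L-CiteEval record, assuming the field
# layout documented in the diff above. Requires: pip install datasets
from datasets import load_dataset

# Dataset id inferred from the committer; the config name and split below
# are hypothetical placeholders, not confirmed entries of this repo.
ds = load_dataset("Jonaszky123/L-CiteEval", "some_task_config", split="test")

record = ds[0]
print(record["answer"])         # gold answer, used for evaluating correctness
print(len(record["docs"]))      # context arrives as fixed-length chunks
print(record["length"])         # total context length (8K to 48K)
# "hardness" exists only in L-CiteEval-Hardness records: easy / medium / hard
print(record.get("hardness"))
```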