Commit 23678a2 by Jonaszky123 (1 parent: a252735)

Update README.md

Files changed (1):
  1. README.md (+2, -1)
README.md CHANGED:

````diff
@@ -96,12 +96,13 @@ configs:
 ---
 
 # L-CITEEVAL: DO LONG-CONTEXT MODELS TRULY LEVERAGE CONTEXT FOR RESPONDING?
-**Paper** [![arXiv](https://img.shields.io/badge/arXiv-2410.02115-b31b1b.svg?style=plastic)](https://arxiv.org/abs/2410.02115) &ensp; **Github** <a href="https://github.com/ZetangForward/L-CITEEVAL"><img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="Github" width="40" height="40"></a> &ensp; **Zhihu** [![Zhihu](https://img.shields.io/badge/知乎-0079FF.svg?style=plastic&logo=zhihu&logoColor=white)](https://zhuanlan.zhihu.com/p/817442176)
+[Paper](https://arxiv.org/abs/2410.02115) &ensp; [Github](https://github.com/ZetangForward/L-CITEEVAL) &ensp; [Zhihu](https://zhuanlan.zhihu.com/p/817442176)
 
 ## Benchmark Quickview
 *L-CiteEval* is a multi-task long-context understanding with citation benchmark, covering **5 task categories**, including single-document question answering, multi-document question answering, summarization, dialogue understanding, and synthetic tasks, encompassing **11 different long-context tasks**. The context lengths for these tasks range from **8K to 48K**.
 ![](assets/dataset.png)
 
+
 ## Data Prepare
 #### Load Data
 ```
````
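The "Load Data" code block is cut off in this diff. A minimal sketch of what such a loader typically looks like with the Hugging Face `datasets` library; note that the repo id and the task/config names below are assumptions for illustration, not confirmed by this commit:

```python
def load_l_citeeval(task: str, split: str = "test"):
    """Fetch one L-CiteEval task from the Hugging Face Hub.

    Both the repo id and the task/config names are assumptions here;
    check the dataset card's `configs` list for the real names.
    """
    from datasets import load_dataset  # pip install datasets
    return load_dataset("Jonaszky123/L-CiteEval", task, split=split)

# Example usage (hypothetical config name):
#   qa = load_l_citeeval("narrativeqa")
#   print(qa[0])
```

Each config would correspond to one of the 11 long-context tasks described in the Benchmark Quickview above.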