Jonaszky123 committed on
Commit 6a0289d
1 Parent(s): a899404

Upload Readme

Files changed (1): README.md (+14 -1)
README.md CHANGED
@@ -96,7 +96,20 @@ configs:
 ---
 
 # L-CITEEVAL: DO LONG-CONTEXT MODELS TRULY LEVERAGE CONTEXT FOR RESPONDING?
-**Paper** [![arXiv](https://img.shields.io/badge/arXiv-2410.02115-b31b1b.svg?style=plastic)](https://arxiv.org/abs/2410.02115) &nbsp; **Github** <a href="https://github.com/ZetangForward/L-CITEEVAL"><img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="Github" width="40" height="40"></a> &nbsp;**Zhihu** [![Zhihu](https://img.shields.io/badge/知乎-0079FF.svg?style=plastic&logo=zhihu&logoColor=white)](https://zhuanlan.zhihu.com/p/817442176)
+<p>
+  <strong>Paper</strong>
+  <a href="https://arxiv.org/abs/2410.02115">
+    <img src="https://img.shields.io/badge/arXiv-2410.02115-b31b1b.svg?style=plastic" style="vertical-align: middle;">
+  </a>
+  &nbsp; <strong>Github</strong>
+  <a href="https://github.com/ZetangForward/L-CITEEVAL">
+    <img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="Github" width="40" height="40" style="vertical-align: middle;">
+  </a>
+  &nbsp; <strong>Zhihu</strong>
+  <a href="https://zhuanlan.zhihu.com/p/817442176">
+    <img src="https://img.shields.io/badge/知乎-0079FF.svg?style=plastic&logo=zhihu&logoColor=white" style="vertical-align: middle;">
+  </a>
+</p>
 
 ## Benchmark Quickview
 *L-CiteEval* is a multi-task long-context understanding benchmark with citations, covering **5 task categories** (single-document question answering, multi-document question answering, summarization, dialogue understanding, and synthetic tasks) and encompassing **11 different long-context tasks**. The context lengths for these tasks range from **8K to 48K** tokens.
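
Why the change works: plain Markdown image syntax (`![badge](url)`) cannot carry per-element styling, so in the old one-line version the 40x40 GitHub mark sat on a different baseline than the shorter shields.io badges. Rewriting the row as HTML lets each `<img>` take `style="vertical-align: middle;"`, which centers images of different heights on a common midline. A minimal sketch of the pattern; the label, link, and badge URL below are placeholders, not taken from the commit:

```html
<p>
  <strong>Label</strong>
  <a href="https://example.com/project">
    <!-- placeholder badge; vertical-align: middle centers images of
         different heights on the same midline as the surrounding text -->
    <img src="https://img.shields.io/badge/demo-badge-blue.svg" style="vertical-align: middle;">
  </a>
</p>
```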