---
license: cc-by-sa-4.0
pipeline_tag: fill-mask
arxiv: 2210.05529
language: en
thumbnail: https://github.com/coastalcph/hierarchical-transformers/raw/main/data/figures/hat_encoder.png
tags:
- long-documents
datasets:
- c4
model-index:
- name: kiddothe2b/hierarchical-transformer-base-4096-v2
  results: []
---

# Hierarchical Attention Transformer (HAT) / hierarchical-transformer-base-4096-v2

## Disclaimer 🚧 ⚠️
This is an experimental version of HAT that attempts to make HAT a native part of the Transformers library. For now, please use ONLY [kiddothe2b/hierarchical-transformer-base-4096](https://huggingface.co/kiddothe2b/hierarchical-transformer-base-4096).

## Model description

This is a Hierarchical Attention Transformer (HAT) model, as presented in [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification (Chalkidis et al., 2022)](https://arxiv.org/abs/2210.05529).

The model was warm-started by reusing the weights of RoBERTa (Liu et al., 2019) and further pre-trained with masked language modeling (MLM) on long sequences, following the paradigm of Longformer (Beltagy et al., 2020). It supports sequences of up to 4,096 tokens.

HAT uses hierarchical attention, a combination of segment-wise and cross-segment attention operations. You can think of segments as paragraphs or sentences.

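The snippet below is a minimal fill-mask usage sketch, not a definitive recipe. It assumes this experimental checkpoint can be loaded through the standard Transformers `AutoTokenizer` / `AutoModelForMaskedLM` API with `trust_remote_code=True`; the exact loading path for this version is an assumption, and per the disclaimer above the stable checkpoint [kiddothe2b/hierarchical-transformer-base-4096](https://huggingface.co/kiddothe2b/hierarchical-transformer-base-4096) may be the safer choice for now.

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

# Assumption: the experimental checkpoint ships custom HAT modeling code on the Hub,
# so trust_remote_code=True is needed until HAT is natively supported in Transformers.
model_name = "kiddothe2b/hierarchical-transformer-base-4096-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForMaskedLM.from_pretrained(model_name, trust_remote_code=True)

# Fill-mask inference; inputs may be long documents of up to 4,096 tokens.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("Hierarchical Attention Transformers can encode <mask> documents efficiently."))
```
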
## Citing

If you use HAT in your research, please cite:

[An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification](https://arxiv.org/abs/2210.05529). Ilias Chalkidis, Xiang Dai, Manos Fergadiotis, Prodromos Malakasiotis, and Desmond Elliott. 2022. arXiv:2210.05529 (Preprint).

```
@misc{chalkidis-etal-2022-hat,
  url       = {https://arxiv.org/abs/2210.05529},
  author    = {Chalkidis, Ilias and Dai, Xiang and Fergadiotis, Manos and Malakasiotis, Prodromos and Elliott, Desmond},
  title     = {An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification},
  publisher = {arXiv},
  year      = {2022},
}
```