prajjwal1 committed on
Commit 4a6c2e0
1 Parent(s): 4cb5713

added bibtex

Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -1,5 +1,19 @@
  The following model is a PyTorch pre-trained model obtained by converting a TensorFlow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). These BERT variants were introduced in the paper [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). These models are intended to be fine-tuned on a downstream task.
 
+ If you use the model, please consider citing the paper:
+ ```
+ @misc{bhargava2021generalization,
+ title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
+ author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
+ year={2021},
+ eprint={2110.01518},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL}
+ }
+ ```
+ The original implementation and more info can be found in [this GitHub repository](https://github.com/prajjwal1/generalize_lm_nli).
+
+
  You can check out:
  - `prajjwal1/bert-tiny` (L=2, H=128)
  - `prajjwal1/bert-mini` (L=4, H=256)
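
A minimal sketch of how one of these checkpoints might be loaded and prepared for downstream fine-tuning, assuming the standard Hugging Face `transformers` Auto classes apply to these model names (the tokenizer pairing and `num_labels` below are illustrative assumptions, not part of the commit):

```python
# Minimal sketch: load a compact BERT variant and run a toy forward pass.
# Assumes the Hugging Face `transformers` library; the model names come
# from the list above.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "prajjwal1/bert-tiny"  # or "prajjwal1/bert-mini" (L=4, H=256)

# These compact models share BERT's WordPiece vocabulary, so the standard
# BERT tokenizer should be compatible if the checkpoint does not ship one.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy batch; real use would fine-tune on a downstream dataset
# (e.g. with the Trainer API) before running inference.
inputs = tokenizer(["This is a test sentence."], return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch_size, num_labels)
```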