hhou435 committed on
Commit
6b014f1
1 Parent(s): 14066b6
Files changed (6)
  1. README.md +185 -0
  2. config.json +25 -0
  3. pytorch_model.bin +3 -0
  4. special_tokens_map.json +1 -0
  5. tokenizer_config.json +1 -0
  6. vocab.txt +0 -0
README.md ADDED
@@ -0,0 +1,185 @@
---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "北京是[MASK]国的首都。"
---
# Chinese Whole Word Masking RoBERTa Miniatures

## Model description

This is the set of six Chinese Whole Word Masking RoBERTa models pre-trained by [UER-py](https://arxiv.org/abs/1909.05658).

[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we release six Chinese Whole Word Masking RoBERTa models. To make the results easy to reproduce, we used a publicly available corpus and word segmentation tool, and we provide all training details.

You can download the six Chinese RoBERTa miniatures either from the [UER-py GitHub page](https://github.com/dbiir/UER-py/) or via HuggingFace from the links below:

|            | Link                           |
| ---------- | :----------------------------: |
| **Tiny**   | [**2/128 (Tiny)**][2_128]      |
| **Mini**   | [**4/256 (Mini)**][4_256]      |
| **Small**  | [**4/512 (Small)**][4_512]     |
| **Medium** | [**8/512 (Medium)**][8_512]    |
| **Base**   | [**12/768 (Base)**][12_768]    |
| **Large**  | [**24/1024 (Large)**][24_1024] |

Here are the scores on the development sets of six Chinese tasks:

| Model              | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| ------------------ | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny-WWM   | 72.1  | 82.8   | 91.8         | 81.8  | 62.1        | 55.4          | 58.6        |
| RoBERTa-Mini-WWM   | 76.1  | 84.9   | 93.0         | 86.8  | 64.4        | 58.7          | 68.8        |
| RoBERTa-Small-WWM  | 77.3  | 86.8   | 93.8         | 87.2  | 65.2        | 59.6          | 71.4        |
| RoBERTa-Medium-WWM | 78.4  | 88.2   | 94.4         | 88.8  | 66.0        | 59.9          | 73.2        |
| RoBERTa-Base-WWM   | 80.1  | 90.0   | 95.8         | 89.4  | 67.5        | 61.8          | 76.2        |
| RoBERTa-Large-WWM  | 81.0  | 90.4   | 95.8         | 90.0  | 68.5        | 62.1          | 79.1        |
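
The Score column appears to be the arithmetic mean of the six task scores, rounded to one decimal place. A quick sketch verifying this for three rows of the table (the score lists are copied from the table above):

```python
from statistics import mean

# Dev-set scores from the table above: six task scores per model.
dev_scores = {
    "RoBERTa-Tiny-WWM":   [82.8, 91.8, 81.8, 62.1, 55.4, 58.6],
    "RoBERTa-Medium-WWM": [88.2, 94.4, 88.8, 66.0, 59.9, 73.2],
    "RoBERTa-Large-WWM":  [90.4, 95.8, 90.0, 68.5, 62.1, 79.1],
}

# Mean of the six task scores, rounded to one decimal, matches the Score column.
for name, scores in dev_scores.items():
    print(name, round(mean(scores), 1))  # 72.1, 78.4, 81.0
```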

For each task, we selected the best fine-tuning hyperparameters from the lists below and trained with a sequence length of 128:

- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
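
Enumerating the grid above shows that selecting the best configuration means 3 × 2 × 3 = 18 fine-tuning runs per task (a sketch of the search space only; the actual fine-tuning commands are UER-py's and are not shown here):

```python
from itertools import product

# The fine-tuning grid described above.
epochs = [3, 5, 8]
batch_sizes = [32, 64]
learning_rates = [3e-5, 1e-4, 3e-4]

grid = list(product(epochs, batch_sizes, learning_rates))
print(len(grid))  # 18 fine-tuning runs per task
```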

## How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/roberta-tiny-wwm-chinese-cluecorpussmall')
>>> unmasker("北京是[MASK]国的首都。")
[
    {'score': 0.294228732585907,
     'token': 704,
     'token_str': '中',
     'sequence': '北 京 是 中 国 的 首 都 。'},
    {'score': 0.19691626727581024,
     'token': 1266,
     'token_str': '北',
     'sequence': '北 京 是 北 国 的 首 都 。'},
    {'score': 0.1070084273815155,
     'token': 7506,
     'token_str': '韩',
     'sequence': '北 京 是 韩 国 的 首 都 。'},
    {'score': 0.031527262181043625,
     'token': 2769,
     'token_str': '我',
     'sequence': '北 京 是 我 国 的 首 都 。'},
    {'score': 0.023054633289575577,
     'token': 1298,
     'token_str': '南',
     'sequence': '北 京 是 南 国 的 首 都 。'}
]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('uer/roberta-base-wwm-chinese-cluecorpussmall')
model = BertModel.from_pretrained("uer/roberta-base-wwm-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('uer/roberta-base-wwm-chinese-cluecorpussmall')
model = TFBertModel.from_pretrained("uer/roberta-base-wwm-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

## Training data

[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as the training data.

## Training procedure

Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train for 1,000,000 steps with a sequence length of 128 and then pre-train for 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters across the different model sizes.

[jieba](https://github.com/fxsjy/jieba) is used as the word segmentation tool.
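
In whole word masking, when one character of a segmented word is chosen for masking, the remaining characters of that word are masked as well, rather than each character being masked independently. A minimal toy illustration (not UER-py's implementation) with a hypothetical pre-segmented sentence, of the kind `jieba.cut` would produce:

```python
# Toy illustration of whole word masking over a segmented sentence.
# The segmentation below is hypothetical jieba output for "北京是中国的首都。".
words = ["北京", "是", "中国", "的", "首都", "。"]

def whole_word_mask(words, target_index):
    """Mask every character of the word at target_index; keep the rest."""
    tokens = []
    for i, word in enumerate(words):
        for char in word:
            tokens.append("[MASK]" if i == target_index else char)
    return tokens

# Masking the word "中国" masks both of its characters.
print(whole_word_mask(words, 2))
# ['北', '京', '是', '[MASK]', '[MASK]', '的', '首', '都', '。']
```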

Taking Whole Word Masking RoBERTa-Medium as an example:

Stage 1:

```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --vocab_path models/google_zh_vocab.txt \
                      --dataset_path cluecorpussmall_seq128_dataset.pt \
                      --processes_num 32 --seq_length 128 \
                      --dynamic_masking --data_processor mlm
```

```
python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \
                    --vocab_path models/google_zh_vocab.txt \
                    --config_path models/bert/medium_config.json \
                    --output_model_path models/cluecorpussmall_wwm_roberta_medium_seq128_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
                    --learning_rate 1e-4 --batch_size 64 \
                    --whole_word_masking \
                    --data_processor mlm --target mlm
```

Stage 2:
138
+
139
+ ```
140
+ python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
141
+ --vocab_path models/google_zh_vocab.txt \
142
+ --dataset_path cluecorpussmall_seq512_dataset.pt \
143
+ --processes_num 32 --seq_length 512 \
144
+ --dynamic_masking --data_processor mlm
145
+ ```

```
python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
                    --vocab_path models/google_zh_vocab.txt \
                    --pretrained_model_path models/cluecorpussmall_wwm_roberta_medium_seq128_model.bin-1000000 \
                    --config_path models/bert/medium_config.json \
                    --output_model_path models/cluecorpussmall_wwm_roberta_medium_seq512_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
                    --learning_rate 5e-5 --batch_size 16 \
                    --whole_word_masking \
                    --data_processor mlm --target mlm
```

Finally, we convert the pre-trained model into the Hugging Face format:

```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_wwm_roberta_medium_seq512_model.bin \
                                                        --output_model_path pytorch_model.bin \
                                                        --layers_num 8 --type mlm
```

### BibTeX entry and citation info

```
@article{zhao2019uer,
  title={UER: An Open-Source Toolkit for Pre-training Models},
  author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
  journal={EMNLP-IJCNLP 2019},
  pages={241},
  year={2019}
}
```

[2_128]:https://huggingface.co/uer/roberta-tiny-wwm-chinese-cluecorpussmall
[4_256]:https://huggingface.co/uer/roberta-mini-wwm-chinese-cluecorpussmall
[4_512]:https://huggingface.co/uer/roberta-small-wwm-chinese-cluecorpussmall
[8_512]:https://huggingface.co/uer/roberta-medium-wwm-chinese-cluecorpussmall
[12_768]:https://huggingface.co/uer/roberta-base-wwm-chinese-cluecorpussmall
[24_1024]:https://huggingface.co/uer/roberta-large-wwm-chinese-cluecorpussmall
config.json ADDED
@@ -0,0 +1,25 @@
{
  "architectures": [
    "BertForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.17.0",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 21128
}
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:85b015e25d60e9adb5768c5c6607f2af6709da524d5479e48b43bdbec63dcd47
size 409249095
special_tokens_map.json ADDED
@@ -0,0 +1 @@
{"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer_config.json ADDED
@@ -0,0 +1 @@
{"do_lower_case": true, "do_basic_tokenize": true, "never_split": null, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "special_tokens_map_file": null, "tokenizer_class": "BertTokenizer"}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff