---
language: zh
datasets: couplet
inference:
  parameters:
    max_length: 68
    num_return_sequences: 1
    do_sample: True
widget:
- text: "燕子归来,问昔日雕梁何处。 -"
  example_title: "Couplet 1"
- text: "笑取琴书温旧梦。 -"
  example_title: "Couplet 2"
- text: "煦煦春风,吹暖五湖四海。 -"
  example_title: "Couplet 3"
---

# 对联 (Couplet)

## Model description

AI couplet generation: given the first line (上联) of a Chinese couplet, the model generates a matching second line (下联). Prompts follow the format of the examples on this card: the first line followed by a trailing ` -`.

## How to use

Call the model with a `pipeline`:

```python
>>> # Run the fine-tuned model
>>> sent = "燕子归来,问昔日雕梁何处。 -"
>>> model_id = "supermy/couplet"
>>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline

>>> tokenizer = BertTokenizer.from_pretrained(model_id)
>>> model = GPT2LMHeadModel.from_pretrained(model_id)
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator.model.config.pad_token_id = text_generator.model.config.eos_token_id
>>> text_generator(sent, max_length=25, do_sample=True)
[{'generated_text': '燕子归来,问昔日雕梁何处。 - 风 儿 吹 醒 , 叹 今 朝 烟 雨 无'}]
```
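
The YAML header above configures the hosted inference widget with `max_length=68` and sampling enabled. The same settings can be passed straight to the pipeline; a minimal sketch (the value `num_return_sequences=3` is only for illustration):

```python
from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline

model_id = "supermy/couplet"
tokenizer = BertTokenizer.from_pretrained(model_id)
model = GPT2LMHeadModel.from_pretrained(model_id)
generator = TextGenerationPipeline(model, tokenizer)
generator.model.config.pad_token_id = generator.model.config.eos_token_id

# Sample several candidate second lines for one first line (上联).
candidates = generator(
    "笑取琴书温旧梦。 -",
    max_length=68,               # same limit as the widget configuration
    do_sample=True,
    num_return_sequences=3,      # illustration; the widget uses 1
)
for c in candidates:
    print(c["generated_text"])
```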

Here is how to load the model and tokenizer directly in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("supermy/couplet")
model = AutoModelForCausalLM.from_pretrained("supermy/couplet")
```
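
The model loaded this way can also be driven with `model.generate` instead of a pipeline; a minimal sketch, with sampling settings mirroring the widget configuration:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("supermy/couplet")
model = AutoModelForCausalLM.from_pretrained("supermy/couplet")

prompt = "煦煦春风,吹暖五湖四海。 -"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_length=68,                           # matches the widget setting
        do_sample=True,
        pad_token_id=model.config.eos_token_id,  # avoid the missing-pad warning
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```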

## Training data

The dataset is based on couplet-dataset (roughly 770k couplet pairs). A sensitive-word lexicon was then used to filter out vulgar or sensitive content, leaving about 740k couplets.
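
The filtering step is plain lexicon matching; a hypothetical sketch (file names and the one-couplet-per-line format are assumptions):

```python
def load_lexicon(path):
    """Read one sensitive word per line into a set."""
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}

def filter_couplets(src_path, dst_path, lexicon):
    """Keep couplets containing no sensitive word; return how many were kept."""
    kept = 0
    with open(src_path, encoding="utf-8") as src, \
         open(dst_path, "w", encoding="utf-8") as dst:
        for line in src:
            if not any(word in line for word in lexicon):
                dst.write(line)
                kept += 1
    return kept

lexicon = load_lexicon("sensitive_words.txt")                       # hypothetical path
kept = filter_couplets("couplets_raw.txt", "couplets_clean.txt", lexicon)
print(f"kept {kept} couplets")
```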

## Statistics

## Training procedure

Base model: [Helsinki-NLP/opus-mt-zh-en](https://huggingface.co/Helsinki-NLP/opus-mt-zh-en)

Training environment: a single NVIDIA GPU with 16 GB of memory

mt5 tokenization: "vocab_size" = 50000
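
The log below is consistent with Hugging Face's `Seq2SeqTrainer` (as used by the transformers translation example), treating the first line as the source text and the second line as the target. A hypothetical reconstruction of the setup from the logged hyperparameters; the data files, column names, and output path are assumptions:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_name = "Helsinki-NLP/opus-mt-zh-en"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical JSON-lines files with {"first": 上联, "second": 下联} records.
raw = load_dataset("json", data_files={"train": "train.json", "validation": "valid.json"})

def preprocess(batch):
    # First line (上联) is the source; second line (下联) is the target.
    features = tokenizer(batch["first"], max_length=68, truncation=True)
    labels = tokenizer(text_target=batch["second"], max_length=68, truncation=True)
    features["labels"] = labels["input_ids"]
    return features

tokenized = raw.map(preprocess, batched=True, remove_columns=raw["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="couplet-finetuning",   # hypothetical
    per_device_train_batch_size=256,   # from the log below
    num_train_epochs=36,               # from the log below
    predict_with_generate=True,
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```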

```
[INFO|trainer.py:1634] 2022-12-13 06:27:25,113 >> ***** Running training *****
[INFO|trainer.py:1635] 2022-12-13 06:27:25,113 >> Num examples = 741096
[INFO|trainer.py:1636] 2022-12-13 06:27:25,113 >> Num Epochs = 36
[INFO|trainer.py:1637] 2022-12-13 06:27:25,113 >> Instantaneous batch size per device = 256
[INFO|trainer.py:1638] 2022-12-13 06:27:25,113 >> Total train batch size (w. parallel, distributed & accumulation) = 256
[INFO|trainer.py:1639] 2022-12-13 06:27:25,114 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1640] 2022-12-13 06:27:25,114 >> Total optimization steps = 104220
[INFO|trainer.py:1642] 2022-12-13 06:27:25,114 >> Number of trainable parameters = 77419008
[INFO|trainer.py:1663] 2022-12-13 06:27:25,115 >> Continuing training from checkpoint, will skip to saved global_step
[INFO|trainer.py:1664] 2022-12-13 06:27:25,115 >> Continuing training from epoch 2
[INFO|trainer.py:1665] 2022-12-13 06:27:25,115 >> Continuing training from global step 7500

{'loss': 5.5206, 'learning_rate': 4.616340433697947e-05, 'epoch': 2.76}
{'loss': 5.4737, 'learning_rate': 4.5924006908462866e-05, 'epoch': 2.94}
{'loss': 5.382, 'learning_rate': 4.5684609479946274e-05, 'epoch': 3.11}
{'loss': 5.34, 'learning_rate': 4.544473229706391e-05, 'epoch': 3.28}
{'loss': 5.3154, 'learning_rate': 4.520485511418154e-05, 'epoch': 3.45}
......
{'loss': 3.3099, 'learning_rate': 3.650930723469584e-07, 'epoch': 35.75}
{'loss': 3.3077, 'learning_rate': 1.2521588946459413e-07, 'epoch': 35.92}
{'train_runtime': 41498.9079, 'train_samples_per_second': 642.895, 'train_steps_per_second': 2.511, 'train_loss': 3.675059686432734, 'epoch': 36.0}
***** train metrics *****
  epoch                    =        36.0
  train_loss               =      3.6751
  train_runtime            = 11:31:38.90
  train_samples            =      741096
  train_samples_per_second =     642.895
  train_steps_per_second   =       2.511
12/13/2022 17:59:05 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:2944] 2022-12-13 17:59:05,707 >> ***** Running Evaluation *****
[INFO|trainer.py:2946] 2022-12-13 17:59:05,708 >> Num examples = 3834
[INFO|trainer.py:2949] 2022-12-13 17:59:05,708 >> Batch size = 256
100%|██████████| 15/15 [03:25<00:00, 13.69s/it]
[INFO|modelcard.py:449] 2022-12-13 18:02:46,984 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Translation', 'type': 'translation'}, 'metrics': [{'name': 'Bleu', 'type': 'bleu', 'value': 3.7831}]}
***** eval metrics *****
  epoch                   =       36.0
  eval_bleu               =     3.7831
  eval_gen_len            =       63.0
  eval_loss               =     4.5035
  eval_runtime            = 0:03:40.09
  eval_samples            =       3834
  eval_samples_per_second =     17.419
  eval_steps_per_second   =      0.068
```
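
The eval BLEU above is reported by the training script. To compute a comparable score offline, the `evaluate` library can be used; a small sketch (the reference string is made up for illustration, not taken from the eval set; both strings are already space-separated characters):

```python
import evaluate

bleu = evaluate.load("sacrebleu")
predictions = ["风 儿 吹 醒 , 叹 今 朝 烟 雨 无"]        # model output for one prompt
references = [["风 雨 送 春 , 看 今 朝 紫 燕 无 踪"]]   # made-up gold second line
print(bleu.compute(predictions=predictions, references=references)["score"])
```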