---
language: zh
datasets: couplet
inference:
parameters:
max_length: 30
num_return_sequences: 1
do_sample: True
widget:
- text: "燕子归来,问昔日雕梁何处。 -"
example_title: "对联1"
- text: "笑取琴书温旧梦。 -"
example_title: "对联2"
- text: "煦煦春风,吹暖五湖四海。 -"
example_title: "对联3"
---
# Couplet Generation
## Model description
AI generation of Chinese couplets: given the first line (上联), the model generates a matching second line (下联).
## How to use
Use the model through a text-generation pipeline:
```python
>>> # Load the fine-tuned couplet model: give the first line, generate the second
>>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
>>> model_id = "couplet-gpt2-finetuning"  # path or Hub id of the fine-tuned checkpoint
>>> sentence = "燕子归来,问昔日雕梁何处。 -"
>>> tokenizer = BertTokenizer.from_pretrained(model_id)
>>> model = GPT2LMHeadModel.from_pretrained(model_id)
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator.model.config.pad_token_id = text_generator.model.config.eos_token_id
>>> text_generator(sentence, max_length=25, do_sample=True)
[{'generated_text': '燕子归来,问昔日雕梁何处。 - 风 儿 吹 醒 , 叹 今 朝 烟 雨 无'}]
```
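The prompt ends with the ` -` separator used in the training data; the generated second line follows it. The spaces between characters in the output come from the character-level BertTokenizer decoding.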
You can also load the model and tokenizer directly in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("supermy/couplet")
model = AutoModelForCausalLM.from_pretrained("supermy/couplet")
```
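With the tokenizer and model loaded as above, a second line can also be generated directly with `model.generate` instead of the pipeline. This is a minimal sketch; the prompt is taken from the widget examples and the sampling settings are illustrative rather than values prescribed by this card:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("supermy/couplet")
model = AutoModelForCausalLM.from_pretrained("supermy/couplet")
model.config.pad_token_id = model.config.eos_token_id  # as in the pipeline example above
model.eval()

# First line of the couplet, ending with the " -" separator used during training.
prompt = "燕子归来,问昔日雕梁何处。 -"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    output_ids = model.generate(input_ids, max_length=30, do_sample=True)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```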
## Training data
The training data is based on the 700k-pair couplet-dataset. It was filtered with a sensitive-word lexicon to remove vulgar or sensitive content, leaving roughly 740k couplet pairs.
## Statistics
```
```
## Training procedure
- Model: [GPT2](https://huggingface.co/gpt2)
- Training environment: a single NVIDIA GPU with 16 GB of memory
- BPE tokenization: `vocab_size` = 50000 (a tokenizer-training sketch follows below)
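The exact tokenizer-training script is not included in this card. A minimal sketch of training a byte-level BPE tokenizer with the reported vocabulary size, assuming the Hugging Face `tokenizers` library; the corpus path and `min_frequency` are illustrative:

```python
from tokenizers import ByteLevelBPETokenizer

# Train a byte-level BPE tokenizer on the couplet corpus.
# "couplets.txt" is a placeholder path; the actual training files are not listed in this card.
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["couplets.txt"],
    vocab_size=50000,   # matches the vocab_size reported above
    min_frequency=2,    # illustrative choice
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)
tokenizer.save_model("tokenizer-couplet")  # writes vocab.json and merges.txt
```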
```
[INFO|trainer.py:1608] 2022-11-30 12:51:36,357 >> ***** Running training *****
[INFO|trainer.py:1609] 2022-11-30 12:51:36,357 >> Num examples = 260926
[INFO|trainer.py:1610] 2022-11-30 12:51:36,357 >> Num Epochs = 81
[INFO|trainer.py:1611] 2022-11-30 12:51:36,357 >> Instantaneous batch size per device = 96
[INFO|trainer.py:1612] 2022-11-30 12:51:36,357 >> Total train batch size (w. parallel, distributed & accumulation) = 96
[INFO|trainer.py:1613] 2022-11-30 12:51:36,357 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1614] 2022-11-30 12:51:36,357 >> Total optimization steps = 220158
[INFO|trainer.py:1616] 2022-11-30 12:51:36,358 >> Number of trainable parameters = 124439808
{'loss': 6.1104, 'learning_rate': 4.9888034956712906e-05, 'epoch': 0.18}
{'loss': 5.5855, 'learning_rate': 4.977448014607691e-05, 'epoch': 0.37}
{'loss': 5.3264, 'learning_rate': 4.966092533544091e-05, 'epoch': 0.55}
......
......
......
{'loss': 2.8539, 'learning_rate': 5.677740531799889e-08, 'epoch': 80.94}
{'train_runtime': 146835.0563, 'train_samples_per_second': 143.937, 'train_steps_per_second': 1.499, 'train_loss': 3.1762605669072217, 'epoch': 81.0}
***** train metrics *****
epoch = 81.0
train_loss = 3.1763
train_runtime = 1 day, 16:47:15.05
train_samples = 260926
train_samples_per_second = 143.937
train_steps_per_second = 1.499
12/02/2022 05:38:54 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:2929] 2022-12-02 05:38:54,688 >> ***** Running Evaluation *****
[INFO|trainer.py:2931] 2022-12-02 05:38:54,688 >> Num examples = 1350
[INFO|trainer.py:2934] 2022-12-02 05:38:54,688 >> Batch size = 96
100%|██████████| 15/15 [00:03<00:00, 4.20it/s]
[INFO|modelcard.py:449] 2022-12-02 05:38:59,875 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}, 'metrics': [{'name': 'Accuracy', 'type': 'accuracy', 'value': 0.4447501469723692}]}
***** eval metrics *****
epoch = 81.0
eval_accuracy = 0.4448
eval_loss = 3.2813
eval_runtime = 0:00:03.86
eval_samples = 1350
eval_samples_per_second = 349.505
eval_steps_per_second = 3.883
perplexity = 26.6108
```
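The reported perplexity is the exponential of the evaluation loss; a quick check of the numbers above:

```python
import math

eval_loss = 3.2813
print(math.exp(eval_loss))  # ≈ 26.61, matching the reported perplexity of 26.6108
```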