---
language: zh
datasets: couplet
inference:
  parameters:
    max_length: 30
    num_return_sequences: 1
    do_sample: true
widget:
  - text: 燕子归来,问昔日雕梁何处。 -
    example_title: 对联1
  - text: 笑取琴书温旧梦。 -
    example_title: 对联2
  - text: 煦煦春风,吹暖五湖四海。 -
    example_title: 对联3
---

# Couplet (对联)

## Model description

AI couplet generation: given the first line of a couplet (上联), the model generates a matching second line (下联).

## How to use

Call the model with a `TextGenerationPipeline`:

```python
>>> # Call the fine-tuned model
>>> senc = "燕子归来,问昔日雕梁何处。 -"
>>> model_id = "couplet-gpt2-finetuning"
>>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline

>>> tokenizer = BertTokenizer.from_pretrained(model_id)
>>> model = GPT2LMHeadModel.from_pretrained(model_id)
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator.model.config.pad_token_id = text_generator.model.config.eos_token_id
>>> text_generator(senc, max_length=25, do_sample=True)
[{'generated_text': '燕子归来,问昔日雕梁何处。 - 风 儿 吹 醒 , 叹 今 朝 烟 雨 无'}]
```

Here is how to load the model and tokenizer for use in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("supermy/couplet")
model = AutoModelForCausalLM.from_pretrained("supermy/couplet")
```
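
For finer control over decoding than the pipeline offers, you can call `generate` directly. A minimal sketch, assuming the `supermy/couplet` checkpoint loaded above; the decoding parameters mirror the widget settings in the metadata and are illustrative only:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("supermy/couplet")
model = AutoModelForCausalLM.from_pretrained("supermy/couplet")

# Encode the first line (上联) and sample a candidate second line (下联).
inputs = tokenizer("燕子归来,问昔日雕梁何处。 -", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_length=30,           # mirrors the widget's max_length
        do_sample=True,          # mirrors the widget's do_sample
        num_return_sequences=1,  # mirrors the widget's num_return_sequences
        pad_token_id=tokenizer.pad_token_id,  # avoids a padding warning
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```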

## Training data

The training data is based on the 700k-pair couplet-dataset. It was filtered with a sensitive-word lexicon to remove vulgar or sensitive content, leaving roughly 740k couplet pairs after deletion.
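
The preprocessing script itself is not included in this repository; the following is a hypothetical sketch of the sensitive-word filtering described above, with illustrative file names (`sensitive_words.txt`, `couplets_raw.txt`, `couplets_filtered.txt`):

```python
# Hypothetical sketch of the sensitive-word filtering described above.
# All file names are illustrative, not part of this repository.
with open("sensitive_words.txt", encoding="utf-8") as f:
    sensitive = {line.strip() for line in f if line.strip()}

kept = []
with open("couplets_raw.txt", encoding="utf-8") as f:
    for line in f:
        couplet = line.strip()
        # Drop any couplet that contains a word from the sensitive-word lexicon.
        if couplet and not any(word in couplet for word in sensitive):
            kept.append(couplet)

with open("couplets_filtered.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(kept))
```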

### Statistics


## Training procedure

Model: GPT2. Training environment: an NVIDIA GPU with 16 GB of memory.

BPE tokenization: `vocab_size` = 50000.
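
The tokenizer-training script is likewise not part of this card; below is a minimal sketch of training a 50,000-entry BPE vocabulary with the Hugging Face `tokenizers` library, assuming the filtered corpus sits in a plain-text file (an illustrative name):

```python
# Hypothetical sketch: training a 50,000-entry BPE vocabulary on the couplet corpus.
# "couplets_filtered.txt" is an illustrative file name.
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(
    vocab_size=50000,
    special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"],
)
tokenizer.train(files=["couplets_filtered.txt"], trainer=trainer)
tokenizer.save("tokenizer.json")
```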

```
[INFO|trainer.py:1608] 2022-11-29 16:00:16,391 >> ***** Running training *****
[INFO|trainer.py:1609] 2022-11-29 16:00:16,391 >>   Num examples = 249327
[INFO|trainer.py:1610] 2022-11-29 16:00:16,391 >>   Num Epochs = 38
[INFO|trainer.py:1611] 2022-11-29 16:00:16,391 >>   Instantaneous batch size per device = 96
[INFO|trainer.py:1612] 2022-11-29 16:00:16,391 >>   Total train batch size (w. parallel, distributed & accumulation) = 96
[INFO|trainer.py:1613] 2022-11-29 16:00:16,391 >>   Gradient Accumulation steps = 1
[INFO|trainer.py:1614] 2022-11-29 16:00:16,391 >>   Total optimization steps = 98724
[INFO|trainer.py:1616] 2022-11-29 16:00:16,392 >>   Number of trainable parameters = 124439808

{'loss': 6.4109, 'learning_rate': 4.975031400672582e-05, 'epoch': 0.19}
{'loss': 5.8476, 'learning_rate': 4.9497082776224627e-05, 'epoch': 0.38}
......
......
......
{'loss': 3.4331, 'learning_rate': 1.3573193954864066e-07, 'epoch': 37.91}
{'train_runtime': 65776.233, 'train_samples_per_second': 144.04, 'train_steps_per_second': 1.501, 'train_loss': 3.74187503763847, 'epoch': 38.0}
***** train metrics *****
  epoch                    =        38.0
  train_loss               =      3.7419
  train_runtime            = 18:16:16.23
  train_samples            =      249327
  train_samples_per_second =      144.04
  train_steps_per_second   =       1.501
11/30/2022 10:16:35 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:2929] 2022-11-30 10:16:35,902 >> ***** Running Evaluation *****
[INFO|trainer.py:2931] 2022-11-30 10:16:35,902 >>   Num examples = 1290
[INFO|trainer.py:2934] 2022-11-30 10:16:35,902 >>   Batch size = 96
100%|██████████| 14/14 [00:03<00:00,  4.13it/s]
[INFO|modelcard.py:449] 2022-11-30 10:16:40,821 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}, 'metrics': [{'name': 'Accuracy', 'type': 'accuracy', 'value': 0.39426602682416634}]}
***** eval metrics *****
  epoch                   =       38.0
  eval_accuracy           =     0.3943
  eval_loss               =      3.546
  eval_runtime            = 0:00:03.67
  eval_samples            =       1290
  eval_samples_per_second =    351.199
  eval_steps_per_second   =      3.811
  perplexity              =    34.6733
```
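
The reported perplexity is simply the exponential of the evaluation loss, so the two numbers above are consistent:

```python
import math

print(math.exp(3.546))  # ≈ 34.67, matching the reported perplexity of 34.6733
```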