---
language: zh
datasets: jinyong
inference:
  parameters:
    max_length: 108
    num_return_sequences: 1
    do_sample: True
widget:
- text: "杨过朗声说道:今番良晤,豪兴不浅,他日江湖相逢,再当杯酒言欢。咱们就此别过。 -"
  example_title: "神雕侠侣"
- text: "乱世之际,人不如狗。 -"
  example_title: "射雕英雄传"
---

# 飞雪连天射白鹿,笑书神侠倚碧鸳

## Model description

AI-generated Jin Yong wuxia fiction: given an opening passage, the model continues the novel.

## How to use

Run the model through a text-generation pipeline:

```python
>>> # Run the fine-tuned model
>>> senc = "这些雪花落下来,多么白,多么好看.过几天太阳出来,每一片 雪花都变得无影无踪.到得明年冬天,又有许很多多雪花,只不过已不是 今年这些雪花罢了。"
>>> model_id = "jinyong-gpt2-finetuning"
>>> from transformers import AutoTokenizer, GPT2LMHeadModel, TextGenerationPipeline

>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>> model = GPT2LMHeadModel.from_pretrained(model_id)
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> # Pad with the EOS token so sampling does not warn about a missing pad token
>>> text_generator.model.config.pad_token_id = text_generator.model.config.eos_token_id
>>> text_generator(senc, max_length=108, do_sample=True)
[{'generated_text': '这些雪花落下来,多么白,多么好看.过几天太阳出来,每一片 雪花都变得无影无踪.到得明年冬天,又有许很多多雪花,只不过已不是 今年这些雪花罢了。 反正 老天爷 有眼 , 不知 哪里 是甚么 风 险 ?” 正 说到此处 , 突然 听得 谢逊 啸声 渐近 , 忍不住 张口 惊呼 , 一齐 向他 扑去 , 只听 谢逊 一声 怒吼 , 跟着 左手 用力 拍 出一掌 , 以 掌力 化开 。 众人 吃了一惊 , 同时 从 海 道 中 跃出 , 双双 倒退 。 张翠山和殷素素 对望一眼 , 均想 以 这两 大高手 之力 如何 抵挡 , 以 今日 之力 如何 攻敌 之'}]
```
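
The hosted widget uses the same sampling settings declared in the front matter (`max_length: 108`, `num_return_sequences: 1`, `do_sample: True`). As a compact alternative to wiring up the pipeline by hand, the high-level `pipeline` helper can do the same thing; a minimal sketch, assuming the fine-tuned checkpoint is available under the same `model_id` as above:

```python
from transformers import pipeline

# Assumes the fine-tuned checkpoint from the example above (local path or hub id)
generator = pipeline("text-generation", model="jinyong-gpt2-finetuning")
generator.model.config.pad_token_id = generator.model.config.eos_token_id

# Same sampling settings as the widget front matter; prompt taken from the widget
print(generator("乱世之际,人不如狗。", max_length=108, num_return_sequences=1, do_sample=True))
```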

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("supermy/jinyong-gpt2")
model = AutoModelForCausalLM.from_pretrained("supermy/jinyong-gpt2")
```
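
The snippet above only loads the checkpoint; one way to actually pull out the features is a forward pass with `output_hidden_states=True`. A minimal sketch (the example sentence is taken from the widget above):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("supermy/jinyong-gpt2")
model = AutoModelForCausalLM.from_pretrained("supermy/jinyong-gpt2")

inputs = tokenizer("他日江湖相逢,再当杯酒言欢。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Last layer's hidden states: shape (batch, sequence_length, hidden_size)
features = outputs.hidden_states[-1]
print(features.shape)
```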

## Training data

The dataset is Jin Yong's novel collection 【飞雪连天射白鹿,笑书神侠倚碧鸳】, the couplet formed by the first characters of his fourteen wuxia novels.
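
The card does not show how the corpus is fed to training; a minimal sketch, assuming the novels are concatenated into a hypothetical local plain-text file `jinyong.txt`:

```python
from datasets import load_dataset

# Hypothetical local file holding the concatenated novels, one passage per line
raw = load_dataset("text", data_files={"train": "jinyong.txt"})
print(raw["train"][0]["text"])
```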

## Statistics

```
```

## Training procedure

- Base model: [GPT2](https://huggingface.co/gpt2)
- Training environment: one NVIDIA GPU with 16 GB of memory
- BPE tokenization: `vocab_size` = 30000
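
The tokenizer-training script is not included in the card; a minimal sketch of fitting a 30,000-token BPE vocabulary with the Hugging Face `tokenizers` library, assuming the hypothetical `jinyong.txt` corpus file from above (the special-token choices are illustrative, not from the source):

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Byte-pair-encoding model with a whitespace pre-tokenizer
tokenizer = Tokenizer(models.BPE(unk_token="<unk>"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()

trainer = trainers.BpeTrainer(vocab_size=30000, special_tokens=["<unk>", "<s>", "</s>"])
tokenizer.train(files=["jinyong.txt"], trainer=trainer)
tokenizer.save("tokenizer.json")
```

The log below records the resulting run: 108 epochs over 9,443 examples with a per-device batch size of 12.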

```
[INFO|trainer.py:1608] 2022-12-02 19:52:59,024 >> ***** Running training *****
[INFO|trainer.py:1609] 2022-12-02 19:52:59,024 >> Num examples = 9443
[INFO|trainer.py:1610] 2022-12-02 19:52:59,024 >> Num Epochs = 108
[INFO|trainer.py:1611] 2022-12-02 19:52:59,024 >> Instantaneous batch size per device = 12
[INFO|trainer.py:1612] 2022-12-02 19:52:59,024 >> Total train batch size (w. parallel, distributed & accumulation) = 12
[INFO|trainer.py:1613] 2022-12-02 19:52:59,024 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1614] 2022-12-02 19:52:59,024 >> Total optimization steps = 84996
[INFO|trainer.py:1616] 2022-12-02 19:52:59,025 >> Number of trainable parameters = 124439808

{'loss': 8.0431, 'learning_rate': 4.970998635229893e-05, 'epoch': 0.64}
{'loss': 7.4867, 'learning_rate': 4.94158548637583e-05, 'epoch': 1.27}
{'loss': 7.322, 'learning_rate': 4.912172337521766e-05, 'epoch': 1.91}
......
{'loss': 3.8686, 'learning_rate': 9.035719327968376e-07, 'epoch': 106.1}
{'loss': 3.8685, 'learning_rate': 6.094404442562004e-07, 'epoch': 106.73}
{'loss': 3.8678, 'learning_rate': 3.1530895571556306e-07, 'epoch': 107.37}

{'train_runtime': 71919.9835, 'train_samples_per_second': 14.18, 'train_steps_per_second': 1.182, 'train_loss': 4.661963973798675, 'epoch': 108.0}
***** train metrics *****
  epoch                    =       108.0
  train_loss               =       4.662
  train_runtime            = 19:58:39.98
  train_samples            =        9443
  train_samples_per_second =       14.18
  train_steps_per_second   =       1.182
12/03/2022 15:51:42 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:2929] 2022-12-03 15:51:42,270 >> ***** Running Evaluation *****
[INFO|trainer.py:2931] 2022-12-03 15:51:42,270 >> Num examples = 283
[INFO|trainer.py:2934] 2022-12-03 15:51:42,270 >> Batch size = 12
100%|██████████| 24/24 [00:07<00:00,  3.17it/s]
[INFO|modelcard.py:449] 2022-12-03 15:51:52,077 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}, 'metrics': [{'name': 'Accuracy', 'type': 'accuracy', 'value': 0.2100502721055507}]}
***** eval metrics *****
  epoch                   =      108.0
  eval_accuracy           =     0.2101
  eval_loss               =      6.889
  eval_runtime            = 0:00:07.90
  eval_samples            =        283
  eval_samples_per_second =      35.79
  eval_steps_per_second   =      3.035
  perplexity              =   981.4321
```
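
The reported perplexity is simply the exponential of the evaluation loss, which is how the Transformers causal-LM example scripts derive it:

```python
import math

# perplexity = exp(eval_loss)
print(math.exp(6.889))  # ≈ 981.43, matching the reported 981.4321
```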