Update README.md
README.md
CHANGED
@@ -26,24 +26,21 @@ widget:
 Call the model with a pipeline:
 
 ```python
->>>
->>>
->>>
->>> from transformers import
-
->>>
-
-
->>> text_generator.model.config.pad_token_id = text_generator.model.config.eos_token_id
->>> text_generator( senc,max_length=25, do_sample=True)
-[{'generated_text': '燕子归来,问昔日雕梁何处。 - 风 儿 吹 醒 , 叹 今 朝 烟 雨 无'}]
+>>> task_prefix = ""
+>>> sentence = task_prefix + "国色天香,姹紫嫣红,碧水青云欣共赏"
+>>> model_output_dir = 'couplet-hel-mt5-finetuning/'
+>>> from transformers import pipeline
+>>> translation = pipeline("translation", model=model_output_dir)
+>>> print(translation(sentence, max_length=28))
+[{'translation_text': '月圆花好,良辰美景,良辰美景喜相逢'}]
+
 ```
 Here is how to use this model to get the features of a given text in PyTorch:
 
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM
-tokenizer = AutoTokenizer.from_pretrained("supermy/couplet")
-model = AutoModelForCausalLM.from_pretrained("supermy/couplet")
+tokenizer = AutoTokenizer.from_pretrained("supermy/couplet-helsinki")
+model = AutoModelForCausalLM.from_pretrained("supermy/couplet-helsinki")
 ```
 
 
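The translation pipeline in the new example returns a list with one dict per input sentence. A minimal sketch of consuming that output, using the result shown in the diff above so no model download is needed (the `outputs` value is copied from the example, not produced live):

```python
# Shape of the translation pipeline's return value: a list with one
# dict per input, the generated line stored under 'translation_text'.
# (Value copied from the README example above.)
outputs = [{'translation_text': '月圆花好,良辰美景,良辰美景喜相逢'}]

# The input (first) line of the couplet, as used in the example.
first_line = "国色天香,姹紫嫣红,碧水青云欣共赏"

# Pull the generated second line out of the pipeline result.
second_line = outputs[0]['translation_text']

# Assemble the finished couplet, one line per row.
couplet = f"{first_line}\n{second_line}"
print(couplet)
```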