oluwatosin adewumi committed on
Commit 60a5b1e
Parent: 35f4d9d

README remove inference widget

Files changed (1)
  1. README.md +12 -1
README.md CHANGED
@@ -10,16 +10,23 @@ datasets:
  - PCL
  metrics:
  - F1
+ inference: false
  ---

  ## T5Base-PCL
  This is a fine-tuned model of T5 (base) on the patronizing and condescending language (PCL) dataset by Pérez-Almendros et al. (2020), used for the Task 4 competition of SemEval-2022.
- It is intended to be used as a classification model for identifying PCL.
+ It is intended to be used as a classification model for identifying PCL (0 - neg; 1 - pos). The task prefix we used for the T5 model is 'classification: '.

  The dataset it's trained on is limited in scope, as it covers only some news texts from about 20 English-speaking countries.
  The macro F1 score achieved on the test set, based on the official evaluation, is 0.5452.
  More information about the original pre-trained model can be found [here](https://huggingface.co/t5-base)

+ * Classification examples:
+ |Prediction | Input |
+ |---------|------------|
+ |0 | "selective kindness : in europe , some refugees are more equal than others" |
+ |1 | he said their efforts should not stop only at creating many graduates but also extended to students from poor families so that they could break away from the cycle of poverty |
+
  ### How to use

  ```python
@@ -28,3 +35,7 @@ import torch
  tokenizer = T5Tokenizer.from_pretrained("tosin/pcl_22")
  model = T5ForConditionalGeneration.from_pretrained("tosin/pcl_22")
  tokenizer.pad_token = tokenizer.eos_token
+ input_ids = tokenizer("he said their efforts should not stop only at creating many graduates but also extended to students from poor families so that they could break away from the cycle of poverty", padding=True, truncation=True, return_tensors='pt').input_ids
+ outputs = model.generate(input_ids)
+ pred = tokenizer.decode(outputs[0], skip_special_tokens=True)
+ print(pred)
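
The snippet added in this commit passes the raw sentence straight to the tokenizer, while the card text above says the task prefix 'classification: ' was used for the T5 model. Below is a minimal, self-contained sketch (not part of the commit) of how that prefix could be prepended at inference time; the model id, prefix string, and 0/1 labels come from the card, and the rest is ordinary transformers usage rather than the authors' confirmed inference code:

```python
# Sketch only: classify one sentence with the 'classification: ' task prefix
# described in the model card. Assumes the model emits the label as text ("0" or "1").
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("tosin/pcl_22")
model = T5ForConditionalGeneration.from_pretrained("tosin/pcl_22")

text = "selective kindness : in europe , some refugees are more equal than others"
# Prepend the task prefix the card says was used during fine-tuning.
input_ids = tokenizer("classification: " + text, return_tensors="pt").input_ids

outputs = model.generate(input_ids)
pred = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(pred)  # expected output: "0" (not PCL) or "1" (PCL)
```

If the model was in fact fine-tuned on unprefixed inputs, dropping the `"classification: "` string reproduces the committed example exactly.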