---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
language:
- en
license: cc-by-4.0
tags:
- text-classification
- transformers
datasets:
- PCL
metrics:
- F1
---
## T5Base-PCL
This is a fine-tuned T5 (base) model trained on the patronizing and condescending language (PCL) dataset of Pérez-Almendros et al. (2020), which was used for the SemEval-2022 Task 4 competition.
It is intended to be used as a classification model for identifying PCL.
The dataset it was trained on is limited in scope: it contains only news texts from about 20 English-speaking countries.
The macro F1 score achieved on the test set, based on the official evaluation, is 0.5452.
More information about the original pre-trained model can be found [here](https://huggingface.co/t5-base).
### How to use
Load the model and tokenizer, then classify a sentence (the sample input below is illustrative):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("tosin/pcl_22")
model = T5ForConditionalGeneration.from_pretrained("tosin/pcl_22")
tokenizer.pad_token = tokenizer.eos_token

# Classify a sample sentence; the input text below is illustrative.
text = "They are in dire need of our help."
input_ids = tokenizer(text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
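Because T5 is a text-to-text model, the predicted class is returned by `model.generate` as a short text string that the tokenizer decodes, rather than as logits from a dedicated classification head.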