---
tags:
- generated_from_trainer
model-index:
- name: drug-stance-bert
  results: []
---

# drug-stance-bert

This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on [COVID-CQ](https://github.com/eceveco/COVID-CQ), a dataset of tweets annotated with three stance labels (none/neutral, against, and in favor) capturing each tweet initiator's stance on the use of Chloroquine or Hydroxychloroquine for the treatment or prevention of the coronavirus.

## Intended uses & limitations
Predict the stance (none/neutral, against, or in favor) of a tweet initiator regarding the use of a drug for the treatment or prevention of the coronavirus. Note that a single tweet mentioning multiple drugs with different stances can confuse the model.

## Inference & understanding
Following COVID-CQ, we use this label mapping:
- 0 -> None/Neutral
- 1 -> Against
- 2 -> Favor

Try these examples: 
- The gov's killing people by banning Ivm
- Great news cheers everybody:) ivermectin proven to not work by rct lol
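
A minimal inference sketch with the `transformers` pipeline (the Hub ID `ningkko/drug-stance-bert` and the default `LABEL_0`/`LABEL_1`/`LABEL_2` output names are assumptions; adjust them to the actual model path and config):

```python
from transformers import pipeline

# Hypothetical Hub ID -- replace with the actual path of this model.
classifier = pipeline("text-classification", model="ningkko/drug-stance-bert")

tweets = [
    "The gov's killing people by banning Ivm",
    "Great news cheers everybody:) ivermectin proven to not work by rct lol",
]

# Assumes the config keeps the default label names; index i matches label i above.
names = {"LABEL_0": "None/Neutral", "LABEL_1": "Against", "LABEL_2": "Favor"}
for tweet, pred in zip(tweets, classifier(tweets)):
    print(f"{names.get(pred['label'], pred['label'])} ({pred['score']:.2f}): {tweet}")
```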

## Tutorial
See our GitHub repo for the [inference notebook](https://github.com/ningkko/COVID-drug/blob/main/stance_detection/inference.ipynb).

## Model description

"We developed two COVID-drug-stance RoBERTa-base models by fine-tuning a pre-trained Twitter-specific stance detection model on a stance data set called COVID-CQ. The data were divided into training-dev-test validation datasets with a 70:10:20 ratio. Model I (COVID-drug-stance-BERT) was trained on the original tweet data, and Model II (COVID-drug-stance-BERT-masked) was trained on tweets with drug names masked as “[mask]” for model generalizability on different drugs. The two models had similar performance on the COVID-19 validation set: COVID-drug-stance-BERT had an accuracy of 86.88%, and the masked model had an accuracy of 86.67%. The two models were then evaluated by predicting tweet initiators’ attitudes towards the drug mentioned in each tweet using randomly selected test sets (100 tweets) of each drug (Hydroxychloquine, Ivermectin, Molnupiravir, Remdesivir). As suggested by the evaluation in Table 2, Model I had better performance and was therefore used in this study". 


| **Drug**               | **Model I: Original Tweet** |             |              | **Model II: Drug Names Masked** |             |              |
|------------------------|:---------------------------:|:-----------:|:------------:|:-------------------------------:|:-----------:|:------------:|
|                        |        **Precision**        |  **Recall** | **F1-Score** |          **Precision**          |  **Recall** | **F1-Score** |
| **Hydroxychloroquine** |             0.93            |     0.92    |   **0.92**   |               0.84              |     0.83    |      0.83    |
| **Ivermectin**         |             0.92            |     0.91    |   **0.91**   |               0.72              |     0.68    |      0.68    |
| **Molnupiravir**       |             0.89            |     0.89    |   **0.89**   |               0.78              |     0.77    |      0.77    |
| **Remdesivir**         |             0.82            |     0.79    |   **0.79**   |               0.70              |     0.66    |      0.66    |

The model uploaded here is Model I.
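
For context, Model II's preprocessing replaces drug mentions with the literal string "[mask]" before training and inference. A minimal sketch of that step (the drug list and regex are illustrative, not the authors' exact code):

```python
import re

# Illustrative drug list; the original masking may cover more surface forms.
DRUGS = ["hydroxychloroquine", "chloroquine", "ivermectin", "molnupiravir", "remdesivir"]
DRUG_PATTERN = re.compile(r"\b(" + "|".join(DRUGS) + r")\b", re.IGNORECASE)

def mask_drug_names(tweet: str) -> str:
    """Replace known drug names with "[mask]", as described above."""
    return DRUG_PATTERN.sub("[mask]", tweet)

print(mask_drug_names("Ivermectin cured me"))  # -> "[mask] cured me"
```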

## Training and evaluation data
[COVID-CQ](https://github.com/eceveco/COVID-CQ), split 70:10:20 into training, development, and test sets (see the model description above).

## Training procedure
See the [GitHub repo](https://github.com/ningkko/COVID-drug).

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
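
For reference, these settings map onto a `Trainer` setup roughly like the sketch below (the output directory and dataset variables are placeholders; see the GitHub repo for the actual training code):

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "cardiffnlp/twitter-roberta-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

args = TrainingArguments(
    output_dir="drug-stance-bert",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,  # Adam betas/epsilon are the defaults listed above
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=dev_ds)  # tokenized COVID-CQ splits
# trainer.train()
```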

### Framework versions

- Transformers 4.11.0
- Pytorch 1.8.1+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3