publichealthsurveillance committed
Commit 3f57ff5
1 Parent(s): d6acb29

Create README.md

Files changed (1): README.md (+41 -0)

README.md ADDED

# PHS-BERT

We present and release [PHS-BERT](https://arxiv.org/abs/2204.04521), a transformer-based pretrained language model (PLM) for identifying tasks related to public health surveillance (PHS) on social media. Compared with existing PLMs, which are mainly evaluated on a limited set of tasks, PHS-BERT achieved state-of-the-art performance on 25 tested datasets, showing that it is robust and generalizable across common PHS tasks.

## Usage
Load the model via [Huggingface's Transformers library](https://github.com/huggingface/transformers):
```
from transformers import AutoTokenizer, AutoModel

# Load the PHS-BERT tokenizer and model weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("publichealthsurveillance/PHS-BERT")
model = AutoModel.from_pretrained("publichealthsurveillance/PHS-BERT")
```
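
As a quick check, the loaded model can be run on a short example post to obtain sentence-level features. The sample text and the use of the last-layer [CLS] embedding below are an illustrative sketch, not a prescribed recipe.
```
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("publichealthsurveillance/PHS-BERT")
model = AutoModel.from_pretrained("publichealthsurveillance/PHS-BERT")

# Example post (illustrative only), written with the placeholder conventions used during pretraining
text = "@USER I have had a sore throat and a fever for two days HTTP-URL"

# Tokenize and run a forward pass without tracking gradients
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

# The [CLS] embedding of the last hidden layer can serve as a feature for downstream PHS tasks
cls_embedding = outputs.last_hidden_state[:, 0, :]
print(cls_embedding.shape)  # (1, hidden_size)
```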

## Training Procedure

### Pretraining
We followed the standard pretraining protocol of BERT and initialized PHS-BERT with the weights of the uncased BERT model rather than training from scratch.

PHS-BERT was pretrained on a corpus of health-related tweets crawled via the Twitter API. Focusing on tasks related to PHS, the keywords used to collect the pretraining corpus were disease, symptom, vaccine, and mental health-related terms in English. Retweet tags were deleted from the raw corpus, URLs and usernames were replaced with HTTP-URL and @USER, respectively, and all emoticons were replaced with their associated meanings.
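
A minimal sketch of this preprocessing is shown below, assuming simple regular-expression rules and a small hand-made emoticon dictionary; the exact rules and emoticon mapping used to build the PHS-BERT corpus may differ.
```
import re

# Illustrative emoticon-to-meaning mapping; the actual mapping used for PHS-BERT is not specified here
EMOTICONS = {":)": "smiley face", ":(": "sad face", ":D": "laughing face"}

def preprocess_tweet(text: str) -> str:
    # Drop retweet tags
    text = re.sub(r"\bRT\b\s*", "", text)
    # Replace URLs and user mentions with placeholder tokens
    text = re.sub(r"https?://\S+", "HTTP-URL", text)
    text = re.sub(r"@\w+", "@USER", text)
    # Replace emoticons with their associated meanings
    for emoticon, meaning in EMOTICONS.items():
        text = text.replace(emoticon, meaning)
    return text.strip()

print(preprocess_tweet("RT @alice I finally got my shot :) https://example.com"))
# -> "@USER I finally got my shot smiley face HTTP-URL"
```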

Each input sequence was converted to tokens from a 50,265-token vocabulary. Twitter posts were restricted to 200 characters, and a batch size of 8 was used during both training and evaluation. Distributed training was performed on a TPU v3-8.

### Fine-tuning
We used the embedding of the special [CLS] token from the last hidden layer as the final feature of the input text. We adopted a multilayer perceptron (MLP) with the hyperbolic tangent activation function and used the Adam optimizer. The models were trained with a one-cycle policy at a maximum learning rate of 2e-05, with momentum cycled between 0.85 and 0.95.
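
A minimal sketch of such a fine-tuning setup is shown below. The MLP hidden size, the number of classes, and the scheduler's `total_steps` are illustrative assumptions, not the authors' exact configuration.
```
import torch
import torch.nn as nn
from transformers import AutoModel

class PHSClassifier(nn.Module):
    """PHS-BERT encoder with an MLP head (tanh activation) on the last-layer [CLS] embedding."""

    def __init__(self, num_classes: int = 2, mlp_hidden: int = 128):  # sizes are assumptions
        super().__init__()
        self.encoder = AutoModel.from_pretrained("publichealthsurveillance/PHS-BERT")
        self.mlp = nn.Sequential(
            nn.Linear(self.encoder.config.hidden_size, mlp_hidden),
            nn.Tanh(),
            nn.Linear(mlp_hidden, num_classes),
        )

    def forward(self, input_ids, attention_mask):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls_embedding = outputs.last_hidden_state[:, 0, :]  # [CLS] token of the last hidden layer
        return self.mlp(cls_embedding)

model = PHSClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)
# One-cycle policy: max LR 2e-05, momentum (beta1) cycled between 0.85 and 0.95;
# total_steps is a placeholder for the actual number of fine-tuning steps
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=2e-5, total_steps=1000, base_momentum=0.85, max_momentum=0.95
)
```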

## Societal Impact
We train and release a PLM to accelerate the automatic identification of tasks related to PHS on social media. Our work aims to develop a new computational method for screening users in need of early intervention; it is not intended for use in clinical settings or as a diagnostic tool.

## BibTeX entry and citation info
```
@misc{https://doi.org/10.48550/arxiv.2204.04521,
  doi       = {10.48550/ARXIV.2204.04521},
  url       = {https://arxiv.org/abs/2204.04521},
  author    = {Naseem, Usman and Lee, Byoung Chan and Khushi, Matloob and Kim, Jinman and Dunn, Adam G.},
  keywords  = {Computation and Language (cs.CL), FOS: Computer and information sciences},
  title     = {Benchmarking for Public Health Surveillance tasks on Social Media with a Domain-Specific Pretrained Language Model},
  publisher = {arXiv},
  year      = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```