uygarkurt committed on
Commit
e457e7c
1 Parent(s): be00850

Update README.md

Files changed (1)
  1. README.md +56 -1
README.md CHANGED
@@ -17,4 +17,59 @@ tags:
  </div>
  <div align="center">
  <p>Liked our work? give us a ⭐ on GitHub!</p>
- </div>
+ </div>
+
+ This repository contains the BERT model used in the paper *Transformer Based Punctuation Restoration for Turkish*. The aim of this work is to correctly place pre-decided punctuation marks in a given text. We present three pre-trained transformer models that predict **period (.)**, **comma (,)** and **question (?)** marks for the Turkish language.
+
+ ## Usage <a class="anchor" id="usage"></a>
+
+ ### Inference <a class="anchor" id="inference"></a>
+ Recommended usage is via Hugging Face. You can run inference with the pre-trained BERT model using the following code:
+ ```python
+ from transformers import pipeline
+
+ # Load the punctuation restoration model as a token-classification pipeline
+ pipe = pipeline(task="token-classification", model="uygarkurt/bert-restore-punctuation-turkish")
+
+ # Unpunctuated sample text
+ sample_text = "Türkiye toprakları üzerindeki ilk yerleşmeler Yontma Taş Devri'nde başlar Doğu Trakya'da Traklar olmak üzere Hititler Frigler Lidyalılar ve Dor istilası sonucu Yunanistan'dan kaçan Akalar tarafından kurulan İyon medeniyeti gibi çeşitli eski Anadolu medeniyetlerinin ardından Makedonya kralı Büyük İskender'in egemenliğiyle ve fetihleriyle birlikte Helenistik Dönem başladı"
+
+ out = pipe(sample_text)
+ ```
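+
+ The pipeline returns a list of per-token predictions. Continuing the snippet above, a quick way to inspect the output (the exact label names come from the model's own config and are not reproduced here):
+ ```python
+ # Each entry holds the token text, its predicted label and a confidence score
+ for token in out:
+     print(token["word"], token["entity"], round(token["score"], 3))
+ ```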
+
+ To use a different pre-trained model, simply replace the `model` argument with one of the other [available models](#models) we provide, as in the sketch below.
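+
+ For instance, a minimal swap to the ELECTRA checkpoint listed under [Available Models](#models) (only the `model` argument changes):
+ ```python
+ from transformers import pipeline
+
+ # Same token-classification pipeline, backed by the ELECTRA variant instead of BERT
+ pipe = pipeline(task="token-classification", model="uygarkurt/electra-restore-punctuation-turkish")
+ ```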
+
+ ### Training <a class="anchor" id="train"></a>
+ For training you will need `transformers` and `datasets`. You can install the `transformers` version we used with the following command: `pip3 install transformers==4.25.1`
+
+ To train a model, run `python main.py`. By default this trains the BERT model on the dataset located at the paths specified by the `train_path`, `val_path` and `test_path` variables in `main.py`.
+
+ The trained model will be saved under the `./model_save` directory.
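+
+ As a rough sketch, assuming `main.py` saves the checkpoint (and tokenizer) in the standard Hugging Face format, the trained model under `./model_save` can then be loaded back for inference like any other checkpoint:
+ ```python
+ from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline
+
+ # Load the locally trained checkpoint from ./model_save
+ model = AutoModelForTokenClassification.from_pretrained("./model_save")
+ tokenizer = AutoTokenizer.from_pretrained("./model_save")
+ pipe = pipeline(task="token-classification", model=model, tokenizer=tokenizer)
+ ```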
+
+ If you want to train with a model other than BERT, change the arguments of `.from_pretrained()` accordingly.
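+
+ For example, a hypothetical swap to a publicly available Turkish ELECTRA base checkpoint (shown purely as an illustration; not necessarily the checkpoint used in the paper):
+ ```python
+ from transformers import AutoModelForTokenClassification, AutoTokenizer
+
+ # Illustrative base checkpoint; any token-classification-compatible model id works here.
+ # Task-specific settings (e.g. the punctuation label set) stay as configured in main.py.
+ checkpoint = "dbmdz/electra-base-turkish-cased-discriminator"
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ model = AutoModelForTokenClassification.from_pretrained(checkpoint)
+ ```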
+
+ ## Data <a class="anchor" id="data"></a>
+ The dataset is provided in the `data/` directory as train, validation and test splits.
+
+ The dataset can be summarized as follows:
+
+ | Split      | Total   | Period (.) | Comma (,) | Question (?) |
+ |:----------:|:-------:|:----------:|:---------:|:------------:|
+ | Train      | 1471806 | 124817     | 98194     | 9816         |
+ | Validation | 180326  | 15306      | 11980     | 1199         |
+ | Test       | 182487  | 15524      | 12242     | 1255         |
+
+ ## Available Models <a class="anchor" id="models"></a>
+ We experimented with BERT, ELECTRA and ConvBERT. The pre-trained models can be accessed via Hugging Face.
+
+ BERT: https://huggingface.co/uygarkurt/bert-restore-punctuation-turkish \
+ ELECTRA: https://huggingface.co/uygarkurt/electra-restore-punctuation-turkish \
+ ConvBERT: https://huggingface.co/uygarkurt/convbert-restore-punctuation-turkish
+
+ ## Results <a class="anchor" id="results"></a>
+ `Precision`, `Recall` and `F1` scores for each model and punctuation mark are summarized below.
+
+ | Model      |          | PERIOD   |          |          | COMMA    |          |          | QUESTION |          |          | OVERALL  |          |
+ |:----------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
+ | Score Type | P        | R        | F1       | P        | R        | F1       | P        | R        | F1       | P        | R        | F1       |
+ | BERT       | 0.972602 | 0.947504 | 0.959952 | 0.576145 | 0.700010 | 0.632066 | 0.927642 | 0.911342 | 0.919420 | 0.825506 | 0.852952 | 0.837146 |
+ | ELECTRA    | 0.972602 | 0.948689 | 0.960497 | 0.576800 | 0.710208 | 0.636590 | 0.920325 | 0.921074 | 0.920699 | 0.823242 | 0.859990 | 0.839262 |
+ | ConvBERT   | 0.972731 | 0.946791 | 0.959585 | 0.576964 | 0.708124 | 0.635851 | 0.922764 | 0.913849 | 0.918285 | 0.824153 | 0.856254 | 0.837907 |
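+
+ For reference, `F1` is the harmonic mean of `Precision` and `Recall`; for example, for the ELECTRA period column:
+
+ $$F_1 = \frac{2 \cdot P \cdot R}{P + R} = \frac{2 \cdot 0.972602 \cdot 0.948689}{0.972602 + 0.948689} \approx 0.960497$$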