SaiedAlshahrani committed on
Commit ddf6361
1 Parent(s): 2563ba7

Update README.md

Files changed (1)
  1. README.md +41 -8
README.md CHANGED
@@ -2,13 +2,14 @@
 tags:
 - generated_from_trainer
 model-index:
-- name: arwiki_mlm
+- name: arRoBERTa
   results: []
 metrics:
 - perplexity
 license: mit
 datasets:
-- SaiedAlshahrani/Arabic_Wikipedia_20230101
+- SaiedAlshahrani/Arabic_Wikipedia_20230101_bots
+- SaiedAlshahrani/MASD
 language:
 - ar
 library_name: transformers
@@ -20,27 +21,52 @@ widget:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->

-# arwiki_mlm (arRoBERTa)
+# Arabic Wikipedia (arRoBERTa<sub>BASE</sub>)

-This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
+This arRoBERTa<sub>BASE</sub> model has been trained *from scratch* on the Arabic Wikipedia articles, downloaded on the 1st of January 2023, processed using the
+`Gensim` Python library, preprocessed using the `tr` Linux/Unix utility and the `CAMeLTools` Python toolkit for Arabic NLP, and hosted at [SaiedAlshahrani/Arabic_Wikipedia_20230101_bots](https://huggingface.co/datasets/SaiedAlshahrani/Arabic_Wikipedia_20230101_bots).
 It achieves the following results on the evaluation set:

-- Pseudo-Perplexity:
+- Pseudo-Perplexity: 23.70
+

 ## Model description

-More information needed
+We trained this Arabic Wikipedia Masked Language Model (arRoBERTa<sub>BASE</sub>) to evaluate its performance on the Fill-Mask task using the Masked Arab States Dataset ([MASD](https://huggingface.co/datasets/SaiedAlshahrani/MASD)) and to measure the *impact* of **template-based translation** on the Egyptian Arabic Wikipedia edition.
+
+For more details about the experiment, please **read** and **cite** our paper:
+
+```bibtex
+@inproceedings{alshahrani-etal-2023-implications,
+    title = "{{Performance Implications of Using Unrepresentative Corpora in Arabic Natural Language Processing}}",
+    author = "Alshahrani, Saied and Alshahrani, Norah and Dey, Soumyabrata and Matthews, Jeanna",
+    booktitle = "Proceedings of the First Arabic Natural Language Processing Conference (ArabicNLP 2023)",
+    month = dec,
+    year = "2023",
+    address = "Singapore (Hybrid)",
+    publisher = "Association for Computational Linguistics",
+    url = "https://webspace.clarkson.edu/~alshahsf/unrepresentative_corpora.pdf",
+    doi = "#################",
+    pages = "###--###",
+    abstract = "Wikipedia articles are a widely used source of training data for Natural Language Processing (NLP) research, particularly as corpora for low-resource languages like Arabic. However, it is essential to understand the extent to which these corpora reflect the representative contributions of native speakers, especially when many entries in a given language are directly translated from other languages or automatically generated through automated mechanisms. In this paper, we study the performance implications of using inorganic corpora that are not representative of native speakers and are generated through automated techniques such as bot generation or automated template-based translation. The case of the Arabic Wikipedia editions gives a unique case study of this since the Moroccan Arabic Wikipedia edition (ARY) is small but representative, the Egyptian Arabic Wikipedia edition (ARZ) is large but unrepresentative, and the Modern Standard Arabic Wikipedia edition (AR) is both large and more representative. We intrinsically evaluate the performance of two main NLP upstream tasks, namely word representation and language modeling, using word analogy evaluations and fill-mask evaluations using our two newly created datasets: Arab States Analogy Dataset (ASAD) and Masked Arab States Dataset (MASD). We demonstrate that for good NLP performance, we need both large and organic corpora; neither alone is sufficient. We show that producing large corpora through automated means can be counter-productive, producing models that both perform worse and lack cultural richness and meaningful representation of the Arabic language and its native speakers.",
+}
+```
+

 ## Intended uses & limitations

-More information needed
+We do **not** recommend using this model because it was trained *only* on Arabic Wikipedia articles, <u>unless</u> you fine-tune it on a large, organic, and representative Arabic dataset.
+

 ## Training and evaluation data

-More information needed
+We trained this model on the Arabic Wikipedia articles ([SaiedAlshahrani/Arabic_Wikipedia_20230101_bots](https://huggingface.co/datasets/SaiedAlshahrani/Arabic_Wikipedia_20230101_bots)) without any validation or evaluation data (training data only), due to a lack of computational power.
+

 ## Training procedure

+We trained this model on the Paperspace GPU-Cloud service, using a machine with 8 CPUs, 45 GB of RAM, and an A6000 GPU with 48 GB of GPU memory.
+
 ### Training hyperparameters

 The following hyperparameters were used during training:
@@ -69,6 +95,13 @@ The following hyperparameters were used during training:



+### Evaluation results
+This arRoBERTa<sub>BASE</sub> model has been evaluated on the Masked Arab States Dataset ([SaiedAlshahrani/MASD](https://huggingface.co/datasets/SaiedAlshahrani/MASD)); the table reports its Fill-Mask top-K accuracy for K = 10, 50, and 100.
+
+| K=10   | K=50 | K=100  |
+|:------:|:----:|:------:|
+| 43.12% | 45%  | 50.62% |
+
 ### Framework versions

 - Datasets 2.9.0
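
The updated card describes two evaluation setups: Fill-Mask predictions on MASD (reported as top-K accuracy) and a pseudo-perplexity score. The two Python sketches below show how such measurements are commonly run with the `transformers` library; they are illustrative only. The repo id `SaiedAlshahrani/arwiki_mlm` is assumed from the old card title, and the Arabic prompts are made-up examples, not items from MASD.

```python
# Minimal Fill-Mask usage sketch (assumed repo id; illustrative prompt, not taken from MASD).
from transformers import pipeline

unmasker = pipeline("fill-mask", model="SaiedAlshahrani/arwiki_mlm")  # assumed repo id

# MASD-style prompt: the masked token should be an Arab state or its capital.
# "The capital of Saudi Arabia is <mask>." -- <mask> is the RoBERTa-style mask token.
predictions = unmasker("عاصمة المملكة العربية السعودية هي <mask>.", top_k=10)

# Under a top-K evaluation, the prediction counts as a hit if the gold token
# appears anywhere among these K candidates.
for p in predictions:
    print(f"{p['token_str']}\t{p['score']:.4f}")
```

The pseudo-perplexity figure (23.70) is the kind of score obtained by masking each token of a sentence in turn, scoring the original token with the MLM, and exponentiating the mean negative log-likelihood (Salazar et al., 2020); a minimal sketch, again with the assumed repo id:

```python
# Pseudo-perplexity sketch for a masked language model: mask one position at a time,
# score the original token, and exponentiate the mean negative log-likelihood.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "SaiedAlshahrani/arwiki_mlm"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)
model.eval()

def pseudo_perplexity(text: str) -> float:
    input_ids = tokenizer(text, return_tensors="pt")["input_ids"][0]
    nlls = []
    # Skip the special tokens at positions 0 and -1 (<s> and </s> for RoBERTa-style models).
    for i in range(1, input_ids.size(0) - 1):
        masked = input_ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits
        log_probs = torch.log_softmax(logits[0, i], dim=-1)
        nlls.append(-log_probs[input_ids[i]].item())
    return float(torch.exp(torch.tensor(nlls).mean()))

print(pseudo_perplexity("القاهرة هي عاصمة مصر."))  # illustrative sentence only
```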