milmor committed
Commit 940dceb
1 parent: d00360d

Update README.md

Files changed (1): README.md (+24 -6)
README.md CHANGED
@@ -6,7 +6,7 @@ tags:
---

# t5-small-spanish-nahuatl
- Nahuatl is the most widely spoken indigenous language in Mexico. However, training a neural network for the task of neural machine translation is hard due to the lack of structured data. The most popular datasets, such as the Axolotl dataset and the bible-corpus, only contain ~16,000 and ~7,000 samples respectively. Moreover, there are multiple variants of Nahuatl, which makes this task even more difficult. For example, a single word from the Axolotl dataset can be found written in more than three different ways. In this work we leverage the T5 text-to-text training strategy to compensate for the lack of data. The resulting model successfully translates short sentences from Spanish to Nahuatl. We report Chrf and BLEU results.


## Model description
@@ -30,15 +30,33 @@ outputs = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
```

## Approach
- Since the Axolotl corpus contains misalignments, we select only the best samples (~10,000 samples). We use the [bible-corpus](https://github.com/christos-c/bible-corpus) (7,821 samples) to compensate for the lack of Nahuatl data.


## Evaluation results
- The model is evaluated on 505 validation sentences. We report the results using the chrf and sacrebleu Hugging Face metrics:
- - Validation loss: 1.31
- - BLEU: 6.18
- - Chrf: 28.21


## References
- Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer.
---

# t5-small-spanish-nahuatl
+ Nahuatl is the most widely spoken indigenous language in Mexico. However, training a neural network for the task of neural machine translation is hard due to the lack of structured data. The most popular datasets, such as the Axolotl dataset and the bible-corpus, only contain ~16,000 and ~7,000 samples respectively. Moreover, there are multiple variants of Nahuatl, which makes this task even more difficult. For example, a single word from the Axolotl dataset can be found written in more than three different ways. Therefore, in this work we leverage the T5 text-to-text prefix training strategy to compensate for the lack of data. We first teach the multilingual model Spanish using English, and then we make the transition to Spanish-Nahuatl. The resulting model successfully translates short sentences from Spanish to Nahuatl. We report Chrf and BLEU results.


## Model description

```

## Approach
+ ### Dataset
+ Since the Axolotl corpus contains misalignments, we select only the best samples (~10,000 samples). We also use the [bible-corpus](https://github.com/christos-c/bible-corpus) (7,821 samples).
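As a rough illustration of how such a combined parallel corpus could be assembled; the file names, TSV layout, and helper below are hypothetical, not part of the original pipeline:

```python
import csv

def load_pairs(path):
    """Read tab-separated (Spanish, Nahuatl) pairs from a hypothetical TSV file."""
    with open(path, newline="", encoding="utf-8") as f:
        return [(row[0], row[1]) for row in csv.reader(f, delimiter="\t") if len(row) >= 2]

# Hypothetical file names for the two sources described above.
axolotl_pairs = load_pairs("axolotl_selected.tsv")   # ~10,000 manually selected pairs
bible_pairs = load_pairs("bible_corpus_es_nah.tsv")  # 7,821 pairs
train_pairs = axolotl_pairs + bible_pairs
```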
+
+ ### Model and training
+ We employ two training stages using a multilingual T5-small. This model was chosen because it can handle different vocabularies and prefixes, and it is pretrained on several tasks and languages (French, Romanian, English, German).
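As a sketch of the starting point, this is one way to load such a checkpoint with `transformers`; the README does not name the exact base checkpoint, so `t5-small` here is an assumption:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# "t5-small" matches the description above (pretrained on English, French,
# Romanian and German tasks), but the actual starting checkpoint is an assumption.
base_checkpoint = "t5-small"
tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(base_checkpoint)
```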
+
+ ### Training stage 1 (learning Spanish)
+ In training stage 1 we first introduce Spanish to the model. The objective is to learn a new, data-rich language (Spanish) without losing the knowledge previously acquired. We use the English-Spanish [Anki](https://www.manythings.org/anki/) dataset, which consists of 118,964 text pairs, and train the model until convergence, adding the prefix "Translate Spanish to English: ".
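A minimal preprocessing sketch for this stage, assuming the Anki data is available as a list of (english, spanish) tuples; only the prefix string comes from the README, everything else (names, `max_length`) is illustrative:

```python
def preprocess_stage1(pairs, tokenizer, max_length=128):
    """Build stage-1 model inputs: prefixed Spanish sources, English targets."""
    prefix = "Translate Spanish to English: "        # prefix stated in the README
    sources = [prefix + spanish for english, spanish in pairs]
    targets = [english for english, spanish in pairs]
    return tokenizer(sources, text_target=targets,
                     max_length=max_length, truncation=True, padding=True,
                     return_tensors="pt")            # input_ids, attention_mask, labels
```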
+
+ ### Training stage 2 (learning Nahuatl)
+ We use the pretrained Spanish-English model to learn Spanish-Nahuatl. Since the number of Nahuatl pairs is limited, we also add 20,000 samples from the English-Spanish Anki dataset to our training data. This two-task training avoids overfitting and makes the model more robust.
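A rough sketch of this mixing step, assuming both corpora are lists of sentence pairs; the Spanish-Nahuatl prefix is an assumption, since the README only states the stage-1 prefix:

```python
import random

def build_stage2_examples(spanish_nahuatl_pairs, anki_pairs, n_anki=20_000, seed=42):
    """Combine all Spanish-Nahuatl pairs with a sample of English-Spanish Anki pairs."""
    rng = random.Random(seed)
    examples = [("Translate Spanish to Nahuatl: " + spanish, nahuatl)   # assumed prefix
                for spanish, nahuatl in spanish_nahuatl_pairs]
    examples += [("Translate Spanish to English: " + spanish, english)  # stage-1 prefix
                 for english, spanish in rng.sample(anki_pairs, n_anki)]
    rng.shuffle(examples)
    return examples
```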
+
+ ### Training setup
+ We train the models on the same datasets for 660k steps with a batch size of 16 and a learning rate of 2e-5.
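For reference, these hyperparameters map onto `Seq2SeqTrainingArguments` roughly as follows; this is a sketch, and `output_dir` plus anything not listed above are illustrative defaults:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-spanish-nahuatl",  # illustrative output path
    max_steps=660_000,                      # 660k training steps
    per_device_train_batch_size=16,         # batch size = 16
    learning_rate=2e-5,                     # learning rate
)
```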


## Evaluation results
+ For a fair comparison, the models are evaluated on the same 505 validation Nahuatl sentences. We report the results using the chrf and sacrebleu Hugging Face metrics:
+
+ | English-Spanish pretraining | Validation loss | BLEU | Chrf  |
+ |:----------------------------:|:---------------:|:----:|:-----:|
+ | False                        | 1.34            | 6.17 | 26.96 |
+ | True                         | 1.31            | 6.18 | 28.21 |
+
+ The English-Spanish pretrained model improves BLEU and Chrf, and leads to faster convergence.
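As an illustration, scores like these can be computed with the Hugging Face `evaluate` library along the following lines; `predictions` and `references` stand for the decoded model outputs and the 505 gold Nahuatl sentences, and whether the original evaluation used exactly this code is an assumption:

```python
import evaluate

chrf = evaluate.load("chrf")
sacrebleu = evaluate.load("sacrebleu")

predictions = ["..."]    # decoded model translations (one string per sentence)
references = [["..."]]   # list of reference lists, one per prediction

print("Chrf:", chrf.compute(predictions=predictions, references=references)["score"])
print("BLEU:", sacrebleu.compute(predictions=predictions, references=references)["score"])
```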

## References
- Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer.