DeepESP committed on
Commit dba57b5
1 parent: 1af848a

Update README.md

Files changed (1):
  1. README.md +5 -6
README.md (as updated):
# GPT2-Spanish

GPT2-Spanish is a language generation model trained from scratch on 9 GB of Spanish text, with a Byte Pair Encoding (BPE) tokenizer trained for the same purpose. The parameters used are the same as those of the small version of the original OpenAI GPT-2 model.
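
As a minimal usage sketch (assuming the model is published on the Hugging Face Hub under the id `DeepESP/gpt2-spanish`; adjust the id if it differs), text can be generated with the `transformers` pipeline:

```python
from transformers import pipeline

# Assumed Hub id for this model; replace it if the repository uses a different name.
generator = pipeline("text-generation", model="DeepESP/gpt2-spanish")

# Generate a short Spanish continuation from a prompt.
outputs = generator("Érase una vez", max_length=50, num_return_sequences=1)
print(outputs[0]["generated_text"])
```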
 
## Corpus

This model was trained on a 9 GB corpus of texts: 3 GB of Wikipedia articles and 6 GB of books (narrative, short stories, theater, poetry, essays, and popular science).
 
## Tokenizer

The texts are tokenized with a byte-level version of Byte Pair Encoding (BPE), which handles any Unicode character, using a vocabulary size of 50257 in both the small and medium models. The inputs are sequences of 1024 consecutive tokens.
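
For illustration only (again assuming the hypothetical Hub id `DeepESP/gpt2-spanish`), the tokenizer can be loaded and inspected as follows:

```python
from transformers import AutoTokenizer

# Assumed Hub id; the tokenizer is a byte-level BPE with a 50257-token vocabulary.
tokenizer = AutoTokenizer.from_pretrained("DeepESP/gpt2-spanish")

print(tokenizer.vocab_size)  # expected: 50257

# Byte-level BPE handles accented characters and "ñ" without unknown tokens.
ids = tokenizer.encode("El señor pidió más café.")
print(ids)
print(tokenizer.convert_ids_to_tokens(ids))
```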
 
This tokenizer was trained from scratch on the Spanish corpus, since the tokenizer of the English models proved limited in capturing the semantic relations of Spanish, owing to the morphosyntactic differences between the two languages.
 
In addition to the special token "<|endoftext|>", used in the OpenAI GPT-2 models to mark the end of a text, the tokens "<|talk|>", "<|ax1|>", "<|ax2|>", ..., "<|ax9|>" were included so that they can serve as prompts in future training.
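
The tokenizer training script itself is not part of this card; the sketch below shows one way such a tokenizer could be built with the `tokenizers` library, where the corpus file names and the `min_frequency` value are placeholder assumptions:

```python
from tokenizers import ByteLevelBPETokenizer

# Placeholder paths to the plain-text Spanish corpus (Wikipedia and books).
corpus_files = ["wikipedia_es.txt", "libros_es.txt"]

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=corpus_files,
    vocab_size=50257,
    min_frequency=2,  # placeholder hyperparameter
    special_tokens=[
        "<|endoftext|>", "<|talk|>",
        "<|ax1|>", "<|ax2|>", "<|ax3|>", "<|ax4|>", "<|ax5|>",
        "<|ax6|>", "<|ax7|>", "<|ax8|>", "<|ax9|>",
    ],
)

# Writes vocab.json and merges.txt, which GPT2TokenizerFast can load.
tokenizer.save_model("gpt2-spanish-tokenizer")
```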
 
## Training

The model and tokenizer were trained with the Hugging Face libraries on an NVIDIA Tesla V100 GPU with 16 GB of memory on Google Colab servers.
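
The training script is not included in this card; the condensed sketch below shows how a comparable setup could look with the Hugging Face libraries, where the dataset file, batch size, and other hyperparameters are placeholder assumptions sized for a single 16 GB V100:

```python
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling, GPT2Config, GPT2LMHeadModel,
    GPT2TokenizerFast, Trainer, TrainingArguments,
)

# Tokenizer trained from scratch on the Spanish corpus (see the Tokenizer section).
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-spanish-tokenizer")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 tokenizers define no pad token

# GPT-2 small architecture with the Spanish vocabulary and a 1024-token context.
config = GPT2Config(vocab_size=len(tokenizer), n_positions=1024)
model = GPT2LMHeadModel(config)

# Placeholder corpus file; each line is tokenized and truncated to 1024 tokens.
dataset = load_dataset("text", data_files={"train": "corpus_es.txt"})["train"]
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
    batched=True,
    remove_columns=["text"],
)

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="gpt2-spanish",
    per_device_train_batch_size=4,   # placeholder, limited by 16 GB of GPU memory
    gradient_accumulation_steps=8,   # placeholder
    num_train_epochs=1,              # placeholder
    fp16=True,                       # mixed precision on the V100
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
).train()
```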
 
## Authors

The model was trained by Jorge Ortiz Fuentes (Chile) and Alejandro Oñate Latorre (Spain), members of DeepESP, an open-source community for Natural Language Processing in Spanish (https://t.me/joinchat/VoEp1bPrDYEexc6h).
 
Thanks to the members of the community who contributed funding for the initial tests.