
NOTE: This repository is now superseded by https://huggingface.co/bertin-project/bertin-roberta-base-spanish. This model corresponds to the beta version, trained with stepwise oversampling for 200k steps at a sequence length of 128. Version 1 is now available and should be used instead.

BERTIN

BERTIN is a series of BERT-based models for Spanish. This one is a RoBERTa-base model trained from scratch on the Spanish portion of mC4 using Flax, with the training scripts included in the repository.
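Since this beta checkpoint is deprecated, here is a minimal sketch of loading the recommended v1 model (the superseding repository noted above) for masked-token prediction with the transformers fill-mask pipeline; the example sentence is illustrative only:

from transformers import pipeline

# Load the recommended v1 checkpoint; fill-mask is the natural task
# for a RoBERTa model pretrained with masked language modelling.
fill_mask = pipeline("fill-mask", model="bertin-project/bertin-roberta-base-spanish")

# RoBERTa tokenizers use <mask> as the mask token.
predictions = fill_mask("El clima de Madrid es muy <mask> en verano.")
for p in predictions:
    print(p["token_str"], p["score"])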

This is part of the Flax/JAX Community Week, organised by Hugging Face, with TPU usage sponsored by Google.

Spanish mC4

The Spanish portion of mC4 contains about 416 million records and 235 billion words.

$ # Number of records (one JSON object per line)
$ zcat c4/multilingual/c4-es*.tfrecord*.json.gz | wc -l
416057992
$ # Total word count, summed across all records
$ zcat c4/multilingual/c4-es*.tfrecord-*.json.gz | jq -r '.text | split(" ") | length' | paste -s -d+ - | bc
235303687795
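
For working with the corpus itself, a hedged sketch using the datasets library to stream the Spanish configuration of mC4 without downloading it all up front (the "mc4"/"es" dataset and config names and the record fields are assumptions based on the Hub listing; the word count mirrors the jq pipeline above on a small sample):

from itertools import islice
from datasets import load_dataset

# Stream the Spanish configuration of mC4 so nothing is fetched up front.
mc4_es = load_dataset("mc4", "es", split="train", streaming=True)

# Inspect a few records; each is a dict with "text", "timestamp" and "url".
for record in islice(mc4_es, 3):
    print(len(record["text"].split()), "words:", record["text"][:80], "...")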

Team members

Useful links
