---
dataset_info:
  features:
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 12954169685
      num_examples: 3435343
  download_size: 6525186931
  dataset_size: 12954169685
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

A dataset of English Wikipedia translated into Arabic (en→ar). Roughly 70 GB of English Wikipedia text was translated for pretraining; native Arabic Wikipedia is only about 20 GB. The dataset size shown on Hugging Face is the compressed size.
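
The data is a single `train` split with one `text` field per record. A minimal way to use it, assuming the repo id is `SultanR/wikipedia_en2ar_mt` (streaming here is optional, but avoids downloading all ~6.5 GB of shards up front):

```python
from datasets import load_dataset

# Stream the train split; each record has a single "text" field
# containing the translated Arabic text.
ds = load_dataset("SultanR/wikipedia_en2ar_mt", split="train", streaming=True)

for example in ds:
    print(example["text"][:200])
    break
```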

Translated using NLLB-200 distilled 600M (`facebook/nllb-200-distilled-600M`).
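
The exact translation pipeline isn't documented here; the sketch below shows one plausible way to run en→ar translation with that model via `transformers`. Chunking of long articles, batching, and the length limits are assumptions, not the settings actually used to build this dataset.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "facebook/nllb-200-distilled-600M"
tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def translate_en_to_ar(text: str) -> str:
    # NLLB uses FLORES-200 language codes; Modern Standard Arabic is "arb_Arab".
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    output_ids = model.generate(
        **inputs,
        forced_bos_token_id=tokenizer.convert_tokens_to_ids("arb_Arab"),
        max_length=512,
    )
    return tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]

print(translate_en_to_ar("Wikipedia is a free online encyclopedia."))
```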