---
language:
  - en
  - ja
license: cc-by-4.0
task_categories:
  - translation
dataset_info:
  features:
    - name: translation
      struct:
        - name: en
          dtype: string
        - name: ja
          dtype: string
  splits:
    - name: train
      num_bytes: 249255464
      num_examples: 2801388
  download_size: 175157050
  dataset_size: 249255464
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Dataset Card for JESC

## Dataset Summary

This corpus is extracted from JESC (Japanese-English Subtitle Corpus) and consists of Japanese-English sentence pairs. For more information, see the project website: https://nlp.stanford.edu/projects/jesc/index_ja.html

JESC is the product of a collaboration between Stanford University, Google Brain, and Rakuten Institute of Technology. It was created by crawling the internet for movie and TV subtitles and aligning their captions. It is one of the largest freely available English-Japanese corpora, and it covers the poorly represented domain of colloquial language.

You can download the scripts, tools, and crawlers used to create this dataset on GitHub. You can read the paper here.

## How to use

```python
from datasets import load_dataset

dataset = load_dataset("Hoshikuzu/JESC")
```

If data loading takes too long, consider using streaming:

```python
from datasets import load_dataset

dataset = load_dataset("Hoshikuzu/JESC", streaming=True)
```

## Data Instances

For example:

```python
{
  'en': "you are back, aren't you, harold?",
  'ja': 'あなたは戻ったのね、ハロルド?'
}
```
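
When loaded with the `datasets` library, each row wraps this pair in a `translation` struct (see the features declared in the YAML metadata), so access looks roughly like this:

```python
from datasets import load_dataset

dataset = load_dataset("Hoshikuzu/JESC", split="train")

# Each row stores the pair under a `translation` struct.
first = dataset[0]["translation"]
print(first["en"])
print(first["ja"])
```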

## Contents

  1. A large corpus consisting of 2.8 million sentence pairs.
  2. Translations of casual language, colloquialisms, expository writing, and narrative discourse. These are domains that are hard to find in JA-EN MT.
  3. Pre-processed data, including tokenized train/dev/test splits.
  4. Code for making your own crawled datasets and tools for manipulating MT data.

## Data Splits

Only a train split is provided.
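
If you need a validation set, one option is to carve it out of the train split yourself. A minimal sketch using `Dataset.train_test_split` follows; the 1% holdout ratio is just an example:

```python
from datasets import load_dataset

dataset = load_dataset("Hoshikuzu/JESC", split="train")

# Hold out 1% of the pairs as a validation set (ratio chosen arbitrarily here).
splits = dataset.train_test_split(test_size=0.01, seed=42)
train_ds, valid_ds = splits["train"], splits["test"]
print(len(train_ds), len(valid_ds))
```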

## Licensing Information

These data are released under a Creative Commons Attribution 4.0 (CC BY 4.0) license.

## Citation Information

```bibtex
@ARTICLE{pryzant_jesc_2018,
   author = {{Pryzant}, R. and {Chung}, Y. and {Jurafsky}, D. and {Britz}, D.},
    title = "{JESC: Japanese-English Subtitle Corpus}",
  journal = {Language Resources and Evaluation Conference (LREC)},
 keywords = {Computer Science - Computation and Language},
     year = 2018
}
```