Modalities: Text · Formats: parquet · Libraries: Datasets, pandas
Hoshikuzu committed
Commit e3bdaf0
1 Parent(s): ea4dd6d

Update README.md

Files changed (1): README.md +5 -9
README.md CHANGED
````diff
@@ -26,7 +26,7 @@ configs:
   path: data/train-*
 ---
 
-# Dataset Card for TEDtalk-en-ja
+# Dataset Card for JESC
 ### Dataset Summary
 
 This corpus is extracted from the JESC, with Japanese-English pairs.
@@ -35,7 +35,7 @@ For more information, see website below!
 
 JESC is the product of a collaboration between Stanford University, Google Brain, and Rakuten Institute of Technology. It was created by crawling the internet for movie and tv subtitles and aligining their captions. It is one of the largest freely available EN-JA corpus, and covers the poorly represented domain of colloquial language.
 
-You can download the scripts, tools, and crawlers used to create this dataset on *[Github](https://github.com/rpryzant/JESC)**.
+You can download the scripts, tools, and crawlers used to create this dataset on **[Github](https://github.com/rpryzant/JESC)**.
 **[You can read the paper here](https://arxiv.org/abs/1710.10639)**.
 
 ### How to use
@@ -56,13 +56,9 @@ For example:
 
 ```json
 {
-  'id': 0,
-  'score': 1.2499920129776,
-  'translation': {
-    'en': 'Such is God’s forgiveness.',
-    'ja': 'それは神の赦しの故だ。'
-  }
-}
+  'en': "you are back, aren't you, harold?",
+  'ja': 'あなたは戻ったのね、ハロルド?'
+}
 ```
 ### Contents ###
 1. A large corpus consisting of 2.8 million sentences.
````
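After this commit, each record is a flat en/ja pair rather than the older nested `translation` object. A minimal sketch of consuming records in the new shape (the `load_dataset` call is commented out because the Hub repo id `Hoshikuzu/JESC` is an assumption; the inline sample mirrors the record shown in the diff):

```python
# Sketch: consuming JESC-style translation records in the post-commit format.
# The Hub repo id below is an assumption, not confirmed by this page:
# from datasets import load_dataset
# records = load_dataset("Hoshikuzu/JESC", split="train")

# Inline sample mirroring the example record in the updated README.
records = [
    {"en": "you are back, aren't you, harold?", "ja": "あなたは戻ったのね、ハロルド?"},
]

def to_pairs(records):
    """Yield (en, ja) tuples from flat JESC-style records."""
    for r in records:
        yield r["en"], r["ja"]

pairs = list(to_pairs(records))
print(pairs[0][0])
```

Because the nesting was removed, older consumers that read `record["translation"]["en"]` would need the one-line change to `record["en"]` shown here.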