---
license: cc-by-4.0
task_categories:
- translation
language:
- en
- ja
---

# Dataset Card for TEDtalk-en-ja

### Dataset Summary

This corpus is extracted from JESC (Japanese-English Subtitle Corpus) and contains Japanese-English sentence pairs.
For more information, see the website below:
**[https://nlp.stanford.edu/projects/jesc/index_ja.html](https://nlp.stanford.edu/projects/jesc/index_ja.html)**

JESC is the product of a collaboration between Stanford University, Google Brain, and Rakuten Institute of Technology. It was created by crawling the internet for movie and TV subtitles and aligning their captions. It is one of the largest freely available EN-JA corpora, and it covers the poorly represented domain of colloquial language.

You can download the scripts, tools, and crawlers used to create this dataset on **[GitHub](https://github.com/rpryzant/JESC)**.
**[You can read the paper here](https://arxiv.org/abs/1710.10639)**.

### How to use

```python
from datasets import load_dataset
dataset = load_dataset("Hoshikuzu/JESC")
```

If data loading times are too long, use streaming:

```python
from datasets import load_dataset
dataset = load_dataset("Hoshikuzu/JESC", streaming=True)
```

### Data Instances
For example:

```json
{
  "id": 0,
  "score": 1.2499920129776,
  "translation": {
    "en": "Such is God’s forgiveness.",
    "ja": "それは神の赦しの故だ。"
  }
}
```

### Contents
1. A large corpus consisting of 2.8 million sentences.
2. Translations of casual language, colloquialisms, expository writing, and narrative discourse. These are domains that are hard to find in JA-EN MT.
3. Pre-processed data, including tokenized train/dev/test splits.
4. Code for making your own crawled datasets and tools for manipulating MT data.

### Data Splits
Only a `train` split is provided.

### Licensing Information
These data are released under a Creative Commons (CC) license.

### Citation Information

```
@ARTICLE{pryzant_jesc_2018,
 author = {{Pryzant}, R. and {Chung}, Y. and {Jurafsky}, D. and {Britz}, D.},
 title = "{JESC: Japanese-English Subtitle Corpus}",
 journal = {Language Resources and Evaluation Conference (LREC)},
 keywords = {Computer Science - Computation and Language},
 year = 2018
}
```