Nikita Martynov committed
Commit: fb410de
Parent: e9d9ed3

README.md CHANGED
---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ru
license: mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- text-generation
pretty_name: Russian Spellcheck Benchmark
language_bcp47:
- ru-RU
tags:
- spellcheck
- russian
---

# Dataset Card for Russian Spellcheck Benchmark

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Repository:** [SAGE](https://github.com/ai-forever/sage)
- **Paper:** [arXiv:2308.09435](https://arxiv.org/abs/2308.09435)
- **Point of Contact:** nikita.martynov.98@list.ru

### Dataset Summary

The Spellcheck Benchmark includes four datasets, each of which consists of pairs of sentences in the Russian language.
Each pair consists of a sentence that may contain spelling errors and its corresponding correction.
The datasets were gathered from various sources and domains, including social networks, internet blogs, GitHub commits, medical anamnesis, literature, news, reviews and more.

All datasets were passed through a two-stage manual labeling pipeline.
The correction of a sentence is defined by the agreement of at least two human annotators.
The manual labeling scheme accounts for jargon, collocations and common language, hence in some cases it encourages annotators not to amend a word in order to preserve the original style of the text.

### Supported Tasks and Leaderboards

- **Task:** automatic spelling correction.
- **Metrics:** as described in [Sorokin et al.](https://www.dialog-21.ru/media/3427/sorokinaaetal.pdf).

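The linked paper scores corrections at the word level with precision, recall and F1. A minimal illustrative sketch of such scoring (an assumption-laden simplification, not the official evaluation script: it treats the source, gold and hypothesis token sequences as position-aligned, whereas real evaluators first align tokens, since a correction may split or merge words, e.g. "ктобы" → "кто бы"):

```python
def correction_prf(src_tokens, gold_tokens, hyp_tokens):
    """Word-level precision/recall/F1 over proposed corrections.

    Simplifying assumption: all three token sequences have equal
    length and are position-aligned.
    """
    gold_fixes = {i for i, (s, g) in enumerate(zip(src_tokens, gold_tokens)) if s != g}
    hyp_fixes = {i for i, (s, h) in enumerate(zip(src_tokens, hyp_tokens)) if s != h}
    # A proposed fix counts as correct only if it matches the gold correction.
    tp = sum(1 for i in hyp_fixes & gold_fixes if hyp_tokens[i] == gold_tokens[i])
    precision = tp / len(hyp_fixes) if hyp_fixes else 1.0
    recall = tp / len(gold_fixes) if gold_fixes else 1.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

A system that fixes exactly the gold errors scores 1.0 on all three; one that "fixes" an error-free word is penalized in precision, and one that misses an error is penalized in recall.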
### Languages

Russian.

## Dataset Structure

### Data Instances

#### RUSpellRU

- **Size of downloaded dataset files:** 3.64 MB
- **Size of the generated dataset:** 1.29 MB
- **Total amount of disk used:** 4.93 MB

An example of "train"/"test" looks as follows:
```
{
    "source": "очень классная тетка ктобы что не говорил.",
    "correction": "очень классная тетка кто бы что ни говорил",
}
```
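Each split is stored as a JSON-lines file: one object per line with the fields shown above, which the accompanying loading script parses line by line. A small self-contained sketch of reading a file in that format (the file itself is a miniature, illustrative stand-in, not an actual benchmark split):

```python
import json
import os
import tempfile

# Write a miniature, hypothetical split file in the benchmark's format.
records = [
    {"source": "очень классная тетка ктобы что не говорил.",
     "correction": "очень классная тетка кто бы что ни говорил"},
]
with tempfile.NamedTemporaryFile("w", suffix=".json", encoding="utf-8", delete=False) as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")
    path = f.name

# Read it back one JSON object per line, as the loading script does.
with open(path, encoding="utf-8") as f:
    pairs = [json.loads(line) for line in f]
os.unlink(path)
```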

#### MultidomainGold

- **Size of downloaded dataset files:** 15.05 MB
- **Size of the generated dataset:** 5.43 MB
- **Total amount of disk used:** 20.48 MB

An example of "test" looks as follows:
```
{
    "source": "Ну что могу сказать... Я заказала 2 вязанных платья: за 1000 руб (у др продавца) и это ща 1200. Это платье- голимая синтетика (в том платье в составе была шерсть). Это платье как очень плохая резинка. На свои параметры (83-60-85) я заказала С . Пока одевала/снимала - оно в горловине растянулось. Помимо этого в этом платье я выгляжу ну очень тоской. У меня вес 43 кг на 165 см роста. Кстати, продавец отправлял платье очень долго. Я пыталась отказаться от заказа, но он постоянно отклонял мой запрос. В общем не советую.",
    "correction": "Ну что могу сказать... Я заказала 2 вязаных платья: за 1000 руб (у др продавца) и это ща 1200. Это платье- голимая синтетика (в том платье в составе была шерсть). Это платье как очень плохая резинка. На свои параметры (83-60-85) я заказала С . Пока надевала/снимала - оно в горловине растянулось. Помимо этого в этом платье я выгляжу ну очень доской. У меня вес 43 кг на 165 см роста. Кстати, продавец отправлял платье очень долго. Я пыталась отказаться от заказа, но он постоянно отклонял мой запрос. В общем не советую.",
    "domain": "reviews",
}
```

#### MedSpellcheck

- **Size of downloaded dataset files:** 1.49 MB
- **Size of the generated dataset:** 0.54 MB
- **Total amount of disk used:** 2.03 MB

An example of "test" looks as follows:
```
{
    "source": "Кровотечения, поерации в анамнезе отрицает",
    "correction": "Кровотечения, операции в анамнезе отрицает",
}
```

#### GitHubTypoCorpusRu

- **Size of downloaded dataset files:** 1.23 MB
- **Size of the generated dataset:** 0.48 MB
- **Total amount of disk used:** 1.71 MB

An example of "test" looks as follows:
```
{
    "source": "## Запросы и ответа содержат заголовки",
    "correction": "## Запросы и ответы содержат заголовки",
}
```

### Data Fields

#### RUSpellRU

- `source`: a `string` feature
- `correction`: a `string` feature
- `domain`: a `string` feature

#### MultidomainGold

- `source`: a `string` feature
- `correction`: a `string` feature
- `domain`: a `string` feature

#### MedSpellcheck

- `source`: a `string` feature
- `correction`: a `string` feature
- `domain`: a `string` feature

#### GitHubTypoCorpusRu

- `source`: a `string` feature
- `correction`: a `string` feature
- `domain`: a `string` feature

### Data Splits

#### RUSpellRU

| |train|test|
|---|---:|---:|
|RUSpellRU|2000|2008|

#### MultidomainGold

| |train|test|
|---|---:|---:|
|web|386|756|
|news|361|245|
|social_media|430|200|
|reviews|584|586|
|subtitles|1810|1810|
|strategic_documents|-|250|
|literature|-|260|

#### MedSpellcheck

| |test|
|---|---:|
|MedSpellcheck|1054|

#### GitHubTypoCorpusRu

| |test|
|---|---:|
|GitHubTypoCorpusRu|868|

## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization

The datasets were chosen according to two criteria.
First, domain variation: half of the datasets are drawn from a mix of domains to ensure diversity, while the remaining half each come from a single domain.
The second criterion is the nature of the errors: the datasets contain only spelling mistakes and typos, excluding grammatical errors and the more complex errors of non-native speakers.

- **RUSpellRU**: texts collected from [LiveJournal](https://www.livejournal.com/media), with manually corrected typos and errors;
- **MultidomainGold**: examples collected from several text sources, including the open web, news, social media, reviews, subtitles, policy documents and literary works:

  *Aranea web-corpus* is a family of multilingual gigaword web-corpora collected from Internet resources. The texts in the corpora are evenly distributed across the periods, writing styles and topics they cover. We randomly picked sentences from Araneum Russicum, which is harvested from the Russian part of the web.

  *Literature* is a collection of Russian poems and prose from different classical literary works. We randomly picked sentences from the source dataset, which was gathered from Ilibrary, LitLib, and Wikisource.

  *News*, as the name suggests, covers news articles on various topics such as sports, politics, environment, economy, etc. The passages are randomly picked from the summarization dataset Gazeta.ru.

  *Social media* contains texts from social media platforms marked with specific hashtags. These texts are typically short, written in an informal style, and may contain slang, emojis and obscene lexis.

  *Strategic Documents* is part of a dataset collected by the Ministry of Economic Development of the Russian Federation. The texts are written in a bureaucratic manner, rich in embedded entities, and have complex syntactic and discourse structures. The full version of the dataset was previously used in the RuREBus shared task.

- **MedSpellChecker**: texts with errors from medical anamnesis;
- **GitHubTypoCorpusRu**: spelling errors and typos in commits from [GitHub](https://github.com).

### Annotations

#### Annotation process

We set up a two-stage annotation project via the crowd-sourcing platform Toloka:
1. Data gathering stage: we present annotators with texts containing possible mistakes and ask them to rewrite each sentence correctly;
2. Validation stage: we present annotators with a pair of sentences (the source and its correction from the previous stage) and ask them to check whether the correction is right.
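Since a correction is accepted only when at least two annotators agree on it, the aggregation across stages can be thought of as a simple voting rule. A minimal sketch of that rule (the function name, vote format and threshold handling are illustrative, not the project's actual pipeline):

```python
from collections import Counter

def aggregate_corrections(candidate_corrections, min_agreement=2):
    """Return the correction at least `min_agreement` annotators agree on,
    or None if no candidate reaches the threshold (illustrative rule)."""
    if not candidate_corrections:
        return None
    best, votes = Counter(candidate_corrections).most_common(1)[0]
    return best if votes >= min_agreement else None
```

With candidates `["кто бы", "кто бы", "ктобы"]` the agreed correction is `"кто бы"`; with three distinct rewrites no candidate reaches the threshold, and such an item would need further review.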

We prepared instructions for annotators for each task. The instructions ask annotators to correct misspellings where doing so does not alter the original style of the text.
The instructions do not provide rigorous criteria for distinguishing the origin of an error - whether it comes from an urge to endow a sentence with particular stylistic features or from an unintentional spelling violation - since it is time-consuming and laborious to describe every possible case of employing slang, dialect, colloquialisms, etc. instead of proper language. The instructions also do not distinguish errors that stem from the geographical or social background of the source. Instead, we rely on the annotators' knowledge and understanding of the language, since in this work the important factor is to preserve the original style of the text.
To ensure qualified expertise, we set up a test iteration on a small subset of the data for both stages. We manually validated the test results and selected the annotators who processed at least six samples (2% of the total test iteration) without making a single error. After the test iteration, we filtered out 85% and 86% of the labellers for the gathering and validation stages, respectively.
We specifically urge annotators to correct mistakes that substitute the letters "ё", "й" and "щ" with "е", "и" and "ш", and not to expand abbreviations or correct punctuation errors. Each annotator is also warned about potentially sensitive topics in the data (e.g., politics, societal minorities, and religion).

#### Who are the annotators?

Native Russian speakers who passed a language exam.
245
+
246
+
247
+ ## Considerations for Using the Data
248
+
249
+ ### Discussion of Biases
250
+
251
+ We clearly state our work’s aims and
252
+ implications, making it open source and transparent. The data will be available under a public license. As our research involved anonymized textual data, informed consent from human participants was not required. However, we obtained permission to access publicly available datasets and
253
+ ensured compliance with any applicable terms of
254
+ service or usage policies.
255
+
256
+ ### Other Known Limitations
257
+
258
+ The data used in our research may be limited to specific
259
+ domains, preventing comprehensive coverage of
260
+ all possible text variations. Despite these limitations, we tried to address the issue of data diversity
261
+ by incorporating single-domain and multi-domain
262
+ datasets in the proposed research. This approach
263
+ allowed us to shed light on the diversity and variances within the data, providing valuable insights
264
+ despite the inherent constraints.
265
+
266
+ We primarily focus on the Russian language. Further
267
+ research is needed to expand the datasets for a wider
268
+ range of languages.
269
+

## Additional Information

### Future plans

We are planning to expand our benchmark with both new Russian datasets and datasets in other languages, including (but not limited to) European and CIS languages.
If you would like to contribute, please contact us.

### Dataset Curators

Nikita Martynov nikita.martynov.98@list.ru

### Licensing Information

All our datasets are published under the MIT License.

### Citation Information
```
@inproceedings{martynov2023augmentation,
  title={Augmentation methods for spelling corruptions},
  author={Martynov, Nikita and Baushenko, Mark and Abramov, Alexander and Fenogenova, Alena},
  booktitle={Proceedings of the International Conference "Dialogue"},
  volume={2023},
  year={2023}
}

@misc{martynov2023methodology,
  title={A Methodology for Generative Spelling Correction via Natural Spelling Errors Emulation across Multiple Domains and Languages},
  author={Nikita Martynov and Mark Baushenko and Anastasia Kozlova and Katerina Kolomeytseva and Aleksandr Abramov and Alena Fenogenova},
  year={2023},
  eprint={2308.09435},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
data/.DS_Store ADDED (binary, 8.2 kB)
data/GitHubTypoCorpusRu/test.json ADDED
data/MedSpellchecker/test.json ADDED
data/MultidomainGold/.DS_Store ADDED (binary, 8.2 kB)
data/MultidomainGold/literature/test.json ADDED
data/MultidomainGold/news/test.json ADDED
data/MultidomainGold/news/train.json ADDED
data/MultidomainGold/reviews/test.json ADDED
data/MultidomainGold/reviews/train.json ADDED
data/MultidomainGold/social_media/test.json ADDED
data/MultidomainGold/social_media/train.json ADDED
data/MultidomainGold/strategic_documents/test.json ADDED
data/MultidomainGold/subtitles/test.json ADDED
data/MultidomainGold/subtitles/train.json ADDED
data/MultidomainGold/test.json ADDED
data/MultidomainGold/train.json ADDED
data/MultidomainGold/web/test.json ADDED
data/MultidomainGold/web/train.json ADDED
data/RUSpellRU/.DS_Store ADDED (binary, 6.15 kB)
data/RUSpellRU/test.json ADDED
data/RUSpellRU/train.json ADDED

spellcheck_benchmark.py ADDED
# coding=utf-8
# Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Lint as: python3
"""The Russian Spellcheck Benchmark."""

import json
from typing import Dict, List

import datasets


_RUSSIAN_SPELLCHECK_BENCHMARK_DESCRIPTION = """
Russian Spellcheck Benchmark is a new benchmark for spelling correction in the Russian language.
It includes four datasets, each of which consists of pairs of sentences in Russian.
Each pair consists of a sentence that may contain spelling errors and its corresponding correction.
The datasets were gathered from various sources and domains including social networks, internet blogs, GitHub commits,
medical anamnesis, literature, news, reviews and more.
"""

_MULTIDOMAIN_GOLD_DESCRIPTION = """
MultidomainGold is a dataset of 3500 sentence pairs
dedicated to the problem of automatic spelling correction in the Russian language.
The dataset is gathered from seven different domains: news, Russian classic literature,
social media texts, open web, strategic documents, subtitles and reviews.
It has been passed through a two-stage manual labeling process with native speakers as annotators
to correct spelling violations while preserving the original style of the text.
"""

_GITHUB_TYPO_CORPUS_RU_DESCRIPTION = """
GitHubTypoCorpusRu is a manually labeled part of the GitHub Typo Corpus (https://arxiv.org/abs/1911.12893).
The sentences tagged "ru" have been extracted from the GitHub Typo Corpus
and passed through manual labeling to ensure the corresponding corrections are right.
"""

_RUSPELLRU_DESCRIPTION = """
RUSpellRU is the first benchmark on the task of automatic spelling correction for the Russian language,
introduced in https://www.dialog-21.ru/media/3427/sorokinaaetal.pdf.
The original sentences are drawn from the social media domain and labeled by human annotators.
"""

_MEDSPELLCHECK_DESCRIPTION = """
The dataset is taken from the GitHub repository of the eponymous project, https://github.com/DmitryPogrebnoy/MedSpellChecker.
The original sentences are taken from anonymized medical anamnesis and passed through a
two-stage manual labeling pipeline.
"""

_RUSSIAN_SPELLCHECK_BENCHMARK_CITATION = """ # TODO: add citation"""

_MULTIDOMAIN_GOLD_CITATION = """ # TODO: add citation from Dialog"""

_GITHUB_TYPO_CORPUS_RU_CITATION = """
@article{DBLP:journals/corr/abs-1911-12893,
  author     = {Masato Hagiwara and Masato Mita},
  title      = {GitHub Typo Corpus: {A} Large-Scale Multilingual Dataset of Misspellings and Grammatical Errors},
  journal    = {CoRR},
  volume     = {abs/1911.12893},
  year       = {2019},
  url        = {http://arxiv.org/abs/1911.12893},
  eprinttype = {arXiv},
  eprint     = {1911.12893},
  timestamp  = {Wed, 08 Jan 2020 15:28:22 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-1911-12893.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
"""

_RUSPELLRU_CITATION = """
@inproceedings{Shavrina2016SpellRuevalT,
  title={SpellRuEval: the First Competition on Automatic Spelling Correction for Russian},
  author={Tatiana Shavrina},
  year={2016}
}
"""

_LICENSE = "apache-2.0"


class RussianSpellcheckBenchmarkConfig(datasets.BuilderConfig):
    """BuilderConfig for RussianSpellcheckBenchmark."""

    def __init__(
        self,
        data_urls: Dict[str, str],
        features: List[str],
        citation: str,
        **kwargs,
    ):
        """BuilderConfig for RussianSpellcheckBenchmark.

        Args:
            data_urls: *dict[string]*, urls to download the data files from.
            features: *list[string]*, list of the features that will appear in the
                feature dict. Should not include "label".
            citation: *string*, citation for the config's source dataset.
            **kwargs: keyword arguments forwarded to super.
        """
        super().__init__(version=datasets.Version("0.0.1"), **kwargs)
        self.data_urls = data_urls
        self.features = features
        self.citation = citation


class RussianSpellcheckBenchmark(datasets.GeneratorBasedBuilder):
    """Russian Spellcheck Benchmark."""

    BUILDER_CONFIGS = [
        RussianSpellcheckBenchmarkConfig(
            name="GitHubTypoCorpusRu",
            description=_GITHUB_TYPO_CORPUS_RU_DESCRIPTION,
            data_urls={
                "test": "data/GitHubTypoCorpusRu/test.json",
            },
            features=["source", "correction", "domain"],
            citation=_GITHUB_TYPO_CORPUS_RU_CITATION,
        ),
        RussianSpellcheckBenchmarkConfig(
            name="MedSpellchecker",
            description=_MEDSPELLCHECK_DESCRIPTION,
            data_urls={
                "test": "data/MedSpellchecker/test.json",
            },
            features=["source", "correction", "domain"],
            citation="",
        ),
        RussianSpellcheckBenchmarkConfig(
            name="MultidomainGold",
            description=_MULTIDOMAIN_GOLD_DESCRIPTION,
            data_urls={
                "train": "data/MultidomainGold/train.json",
                "test": "data/MultidomainGold/test.json",
            },
            features=["source", "correction", "domain"],
            citation=_MULTIDOMAIN_GOLD_CITATION,
        ),
        RussianSpellcheckBenchmarkConfig(
            name="RUSpellRU",
            description=_RUSPELLRU_DESCRIPTION,
            data_urls={
                "train": "data/RUSpellRU/train.json",
                "test": "data/RUSpellRU/test.json",
            },
            features=["source", "correction", "domain"],
            citation=_RUSPELLRU_CITATION,
        ),
    ]

    def _info(self) -> datasets.DatasetInfo:
        features = {
            "source": datasets.Value("string"),
            "correction": datasets.Value("string"),
            "domain": datasets.Value("string"),
        }
        return datasets.DatasetInfo(
            features=datasets.Features(features),
            description=_RUSSIAN_SPELLCHECK_BENCHMARK_DESCRIPTION + self.config.description,
            license=_LICENSE,
            citation=self.config.citation + "\n" + _RUSSIAN_SPELLCHECK_BENCHMARK_CITATION,
        )

    def _split_generators(
        self, dl_manager: datasets.DownloadManager
    ) -> List[datasets.SplitGenerator]:
        urls_to_download = self.config.data_urls
        downloaded_files = dl_manager.download_and_extract(urls_to_download)
        # GitHubTypoCorpusRu and MedSpellchecker ship only a test split.
        if self.config.name in ("GitHubTypoCorpusRu", "MedSpellchecker"):
            return [
                datasets.SplitGenerator(
                    name=datasets.Split.TEST,
                    gen_kwargs={
                        "data_file": downloaded_files["test"],
                        "split": datasets.Split.TEST,
                    },
                )
            ]
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "data_file": downloaded_files["train"],
                    "split": datasets.Split.TRAIN,
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "data_file": downloaded_files["test"],
                    "split": datasets.Split.TEST,
                },
            ),
        ]

    def _generate_examples(self, data_file, split):
        # Each data file stores one JSON object per line.
        with open(data_file, encoding="utf-8") as f:
            for key, line in enumerate(f):
                row = json.loads(line)
                yield key, {feature: row[feature] for feature in self.config.features}