---
task_categories:
  - question-answering
language:
  - zh
pretty_name: LogiQA-zh
size_categories:
  - 1K<n<10K
paperswithcode_id: logiqa
dataset_info:
  features:
    - name: context
      dtype: string
    - name: query
      dtype: string
    - name: options
      sequence:
        dtype: string
    - name: correct_option
      dtype: string
  splits:
    - name: train
      num_examples: 7376
    - name: validation
      num_examples: 651
    - name: test
      num_examples: 651
---

# Dataset Card for LogiQA

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:** https://arxiv.org/abs/2007.08124
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

LogiQA is constructed from logical comprehension problems taken from publicly available questions of the National Civil Servants Examination of China, which are designed to test civil servant candidates' critical thinking and problem-solving skills. This dataset includes the Chinese versions only.
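
As a quick orientation, here is a minimal loading sketch. It assumes the dataset is hosted on the Hugging Face Hub under the repo id `jiacheng-ye/logiqa-zh` (inferred from this card's location); adjust the id if your copy lives elsewhere.

```python
from datasets import load_dataset

# Assumed repo id, taken from this card's location on the Hub.
ds = load_dataset("jiacheng-ye/logiqa-zh")

# Expect a DatasetDict with train/validation/test splits of
# 7376 / 651 / 651 examples respectively (see Data Splits below).
for split in ("train", "validation", "test"):
    print(split, ds[split].num_rows)
```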

## Dataset Structure

### Data Instances

An example from `train` looks as follows (English glosses added as comments):

```python
{'context': '有些广东人不爱吃辣椒.因此,有些南方人不爱吃辣椒.',
 # "Some Cantonese people do not like chili peppers. Therefore,
 #  some southerners do not like chili peppers."
 'query': '以下哪项能保证上述论证的成立?',
 # "Which of the following guarantees that the above argument holds?"
 'options': ['有些广东人爱吃辣椒',    # "Some Cantonese people like chili peppers"
  '爱吃辣椒的有些是南方人',           # "Some who like chili peppers are southerners"
  '所有的广东人都是南方人',           # "All Cantonese people are southerners"
  '有些广东人不爱吃辣椒也不爱吃甜食'],  # "Some Cantonese people like neither chili peppers nor sweets"
 'correct_option': 2}
```

### Data Fields

- `context`: a string feature.
- `query`: a string feature.
- `options`: a list feature containing string features (the answer candidates).
- `correct_option`: the 0-based index of the correct answer in `options` (the integer `2` in the example above; see the sketch below).
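
To make the relationship between `options` and `correct_option` concrete, here is a minimal sketch (not part of the original card) that renders an instance as an A/B/C/D multiple-choice prompt and looks up the gold answer text. `format_example` is a hypothetical helper, and the `int(...)` cast is a defensive assumption covering `correct_option` being stored as either an int or a string.

```python
def format_example(ex):
    """Render one LogiQA-zh instance as an A/B/C/D multiple-choice prompt.

    `correct_option` is treated as a 0-based index into `options`; the
    int(...) cast handles both integer and string encodings (assumption).
    """
    letters = "ABCD"
    lines = [ex["context"], ex["query"]]
    lines += [f"{letters[i]}. {opt}" for i, opt in enumerate(ex["options"])]
    gold = ex["options"][int(ex["correct_option"])]
    return "\n".join(lines), gold


prompt, gold = format_example({
    "context": "有些广东人不爱吃辣椒.因此,有些南方人不爱吃辣椒.",
    "query": "以下哪项能保证上述论证的成立?",
    "options": ["有些广东人爱吃辣椒",
                "爱吃辣椒的有些是南方人",
                "所有的广东人都是南方人",
                "有些广东人不爱吃辣椒也不爱吃甜食"],
    "correct_option": 2,
})
print(prompt)
print("Gold answer:", gold)  # 所有的广东人都是南方人
```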

### Data Splits

| train | validation | test |
|------:|-----------:|-----:|
|  7376 |        651 |  651 |

## Additional Information

### Dataset Curators

The original LogiQA was produced by Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang.

### Licensing Information

[More Information Needed]

### Citation Information

```bibtex
@article{liu2020logiqa,
  title={LogiQA: A challenge dataset for machine reading comprehension with logical reasoning},
  author={Liu, Jian and Cui, Leyang and Liu, Hanmeng and Huang, Dandan and Wang, Yile and Zhang, Yue},
  journal={arXiv preprint arXiv:2007.08124},
  year={2020}
}
```

### Contributions

@jiacheng-ye added this Chinese dataset; @lucasmccabe added the English dataset.