no_robots_enfr / README.md
---
language:
  - fr
  - en
size_categories:
  - 1K<n<10K
task_categories:
  - text-generation
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: eval
        path: data/eval-*
      - split: test
        path: data/test-*
dataset_info:
  features:
    - name: category
      dtype: string
    - name: output
      dtype: string
    - name: query
      dtype: string
    - name: qid
      dtype: int64
    - name: fr_query
      dtype: string
    - name: fr_output
      dtype: string
  splits:
    - name: train
      num_bytes: 22352415.03147541
      num_examples: 7866
    - name: eval
      num_bytes: 1810130.7367213115
      num_examples: 637
    - name: test
      num_bytes: 1838547.2318032787
      num_examples: 647
  download_size: 16266132
  dataset_size: 26001093
---

# Dataset Card for "no_robots_enfr"

This is a filtered version of HuggingFaceH4/no_robots, translated to French with the DeepL Pro API.

Our goal is to gather French data for a single-turn chatbot on general subjects. We filtered the original dataset as follows:

- We kept only single-turn questions.
- We removed any example that sets a system role at the beginning, since our LLM will have a single fixed role that does not need to be defined before each query.
- We kept the category information from the original dataset.
| Category | Number of Examples | Mean Words (Query) | Mean Words (Output) |
|-------------------|-------------------:|-------------------:|--------------------:|
| Brainstorm        | 1120               | 35                 | 217                 |
| Generation        | 4560               | 35                 | 177                 |
| Rewrite           | 660                | 258                | 206                 |
| Open QA           | 1240               | 12                 | 73                  |
| Classify          | 350                | 121                | 29                  |
| Summarize         | 420                | 238                | 64                  |
| Coding            | 350                | 55                 | 124                 |
| Extract           | 190                | 270                | 36                  |
| Closed QA         | 260                | 217                | 22                  |
| **Whole dataset** | **9150**           | **71**             | **150**             |

Depending on our needs, we will filter these data by category to avoid injecting hallucinations into our fine-tuning.
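As a minimal sketch, filtering by category before fine-tuning could look like the following. The field names match the `dataset_info` above; the sample records and the chosen set of kept categories are illustrative, not the exact choices used for this dataset.

```python
# Minimal sketch: keep only selected categories before fine-tuning.
# KEEP is an illustrative choice, not the authors' actual selection.
KEEP = {"Open QA", "Closed QA", "Summarize", "Extract"}

def filter_by_category(records, keep=KEEP):
    """Return only the records whose category is in the kept set."""
    return [r for r in records if r["category"] in keep]

# Illustrative records following the dataset's schema.
sample = [
    {"category": "Open QA", "fr_query": "Quelle est la capitale de la France ?", "fr_output": "Paris."},
    {"category": "Generation", "fr_query": "Écris un poème.", "fr_output": "..."},
]
kept = filter_by_category(sample)
print([r["category"] for r in kept])  # -> ['Open QA']
```

With the Hugging Face `datasets` library, the same idea would typically be expressed with `Dataset.filter`; the pure-Python version above just keeps the sketch self-contained.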

The splits are stratified so that each split contains the same proportion of each category: train 7866 examples, eval 637, test 647.
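A per-category stratified split like the one above can be sketched as follows. The ratios and record shapes are illustrative assumptions (roughly matching the 7866/637/647 proportions), not the authors' exact pipeline.

```python
# Sketch of a stratified train/eval/test split: shuffle within each
# category, then slice each category with the same ratios, so every
# split keeps the same category proportions as the whole dataset.
import random
from collections import defaultdict

def stratified_split(records, ratios=(0.86, 0.07, 0.07), seed=0):
    """Split records into train/eval/test, preserving per-category proportions."""
    rng = random.Random(seed)
    by_cat = defaultdict(list)
    for rec in records:
        by_cat[rec["category"]].append(rec)
    splits = {"train": [], "eval": [], "test": []}
    for cat, recs in by_cat.items():
        rng.shuffle(recs)
        n = len(recs)
        n_train = int(n * ratios[0])
        n_eval = int(n * ratios[1])
        splits["train"].extend(recs[:n_train])
        splits["eval"].extend(recs[n_train:n_train + n_eval])
        splits["test"].extend(recs[n_train + n_eval:])
    return splits

# Illustrative toy data: two categories of 50 records each.
records = [{"category": c, "qid": i}
           for i, c in enumerate(["Open QA"] * 50 + ["Coding"] * 50)]
splits = stratified_split(records)
print({k: len(v) for k, v in splits.items()})
```

Each category contributes the same fraction to every split, so the category mix in train, eval, and test matches the overall table above.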