AIDXteam committed on
Commit
9f3e0d8
1 Parent(s): 7ff5bac

Update README.md

Files changed (1): README.md (+5 -5)
README.md CHANGED
@@ -25,9 +25,9 @@ pipeline_tag: text-generation
 이 μ•„ν‚€ν…μ²˜λŠ” ν…μŠ€νŠΈ 생성, μ§ˆμ˜μ‘λ‹΅, λ¬Έμ„œ μš”μ•½, 감정 뢄석과 같은 λ‹€μ–‘ν•œ μž‘μ—…μ—μ„œ νƒμ›”ν•œ μ„±λŠ₯을 λ³΄μ—¬μ€λ‹ˆλ‹€.
 
 # ❷ ν•™μŠ΅ 데이터
- - ktdsbaseLM v0.11은 총 3.6GB 크기의 데이터λ₯Ό λ°”νƒ•μœΌλ‘œ ν•™μŠ΅λ˜μ—ˆμŠ΅λ‹ˆλ‹€. 총 233만 건의 QnA 데이터λ₯Ό ν¬ν•¨ν•˜λ©°,
- κ·Έ 쀑 133만 건은 135개 μ˜μ—­μ˜ 객관식 문제둜 κ΅¬μ„±λ˜μ—ˆμŠ΅λ‹ˆλ‹€. 이 μ˜μ—­μ—λŠ” ν•œκ΅­μ‚¬, μ‚¬νšŒ, 재무, 법λ₯ , 세무, μˆ˜ν•™, 생물, 물리, ν™”ν•™ 등이 ν¬ν•¨λ˜λ©°,
- Chain of Thought λ°©μ‹μœΌλ‘œ ν•™μŠ΅λ˜μ—ˆμŠ΅λ‹ˆλ‹€. λ˜ν•œ 130만 건의 주관식 λ¬Έμ œλŠ” ν•œκ΅­μ‚¬, 재무, 법λ₯ , 세무, μˆ˜ν•™ λ“± 100개 μ˜μ—­μ— 걸쳐 ν•™μŠ΅λ˜μ—ˆμŠ΅λ‹ˆλ‹€.
+ - ktdsbaseLM v0.11은 자체 κ°œλ°œν•œ 총 3.6GB 크기의 데이터λ₯Ό λ°”νƒ•μœΌλ‘œ ν•™μŠ΅λ˜μ—ˆμŠ΅λ‹ˆλ‹€. λͺ¨λ‘ 233만 건의 QnA, μš”μ•½, λΆ„λ₯˜ λ“± 데이터λ₯Ό ν¬ν•¨ν•˜λ©°,
+ κ·Έ 쀑 133만 건은 53개 μ˜μ—­μ˜ 객관식 문제둜 κ΅¬μ„±λ˜μ—ˆμŠ΅λ‹ˆλ‹€. 이 μ˜μ—­μ—λŠ” ν•œκ΅­μ‚¬, μ‚¬νšŒ, 재무, 법λ₯ , 세무, μˆ˜ν•™, 생물, 물리, ν™”ν•™ 등이 ν¬ν•¨λ˜λ©°,
+ Chain of Thought λ°©μ‹μœΌλ‘œ ν•™μŠ΅λ˜μ—ˆμŠ΅λ‹ˆλ‹€. λ˜ν•œ 130만 건의 주관식 λ¬Έμ œλŠ” ν•œκ΅­μ‚¬, 재무, 법λ₯ , 세무, μˆ˜ν•™ λ“± 38개 μ˜μ—­μ— 걸쳐 ν•™μŠ΅λ˜μ—ˆμŠ΅λ‹ˆλ‹€.
 ν•™μŠ΅ 데이터 쀑 ν•œκ΅­μ˜ μ‚¬νšŒ κ°€μΉ˜μ™€ μΈκ°„μ˜ 감정을 μ΄ν•΄ν•˜κ³  μ§€μ‹œν•œ 사항에 따라 좜λ ₯ν•  수 μžˆλŠ” 데이터λ₯Ό ν•™μŠ΅ν•˜μ˜€μŠ΅λ‹ˆλ‹€.
 - ν•™μŠ΅ Instruction Datasets Format:
 <pre><code>{"prompt": "prompt text", "completion": "ideal generated text"}</code></pre>
@@ -83,9 +83,9 @@ optimized for various NLP tasks like text generation, question answering, docume
 # ❷ Training Data
 
 KTDSbaseLM v0.11 was trained on 3.6GB of data, comprising 2.33 million Q&A instances.
- This includes 1.33 million multiple-choice questions across 135 domains such as history,
+ This includes 1.33 million multiple-choice questions across 53 domains such as history,
 finance, law, tax, and science, trained with the Chain of Thought method. Additionally,
- 1.3 million short-answer questions cover 100 domains including history, finance, and law.
+ 1.3 million short-answer questions cover 38 domains including history, finance, and law.
 
 **Training Instruction Dataset Format**:
 `{"prompt": "prompt text", "completion": "ideal generated text"}`
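The instruction dataset format shown in both hunks is one JSON object per line with `prompt` and `completion` fields. As a minimal sketch of working with that format (the helper functions below are hypothetical, not part of the model card), records could be serialized and sanity-checked like this:

```python
import json

def make_record(prompt: str, completion: str) -> str:
    # Serialize one training example in the {"prompt", "completion"} format
    # from the README diff. ensure_ascii=False keeps Korean text readable.
    return json.dumps({"prompt": prompt, "completion": completion}, ensure_ascii=False)

def validate_record(line: str) -> bool:
    # A line is a valid record if it parses as JSON and contains exactly
    # the two expected fields, both non-empty strings.
    try:
        obj = json.loads(line)
    except json.JSONDecodeError:
        return False
    return (set(obj) == {"prompt", "completion"}
            and all(isinstance(obj[k], str) and obj[k] for k in obj))

line = make_record("prompt text", "ideal generated text")
print(validate_record(line))  # True
```

Validating each line before training catches truncated or mis-typed records early, which matters for a corpus of 2.33 million examples.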