Commit 701e942 by andreaskoepf (parent: 575e1a1): Update README.md
---
license: apache-2.0
language:
- en
- de
- es
- fr
tags:
- sft
inference: false
datasets:
- OpenAssistant/oasst1
- databricks/databricks-dolly-15k
---

# Open-Assistant Falcon 40B SFT MIX Model

This model is a fine-tuning of TII's [Falcon 40B](https://huggingface.co/tiiuae/falcon-40b) LLM.
It was trained on a mixture of OASST top-2 threads (exported on July 2, 2023), Dolly-15k, and synthetic instruction datasets (see the dataset configuration below).

## Model Details

- **Finetuned from:** [tiiuae/falcon-40b](https://huggingface.co/tiiuae/falcon-40b)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish)
- **Demo:** [Continuations for 250 random prompts](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Fchat-gpt%2F2023-04-11_gpt-3.5-turbo_lottery.json%0Ahttps%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-06-05_OpenAssistant_falcon-40b-sft-mix-1226_sampling_noprefix2.json), [multilingual-60](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-06-05_OpenAssistant_falcon-40b-sft-mix-1226_multilingual_noprefix2.json)
- **Eval results:** [ilm-eval](https://tju01.github.io/ilm-eval/)
- **Weights & Biases:** [Training log](https://wandb.ai/open-assistant/public-sft/runs/feplc450) (checkpoint: 1226 steps)
- **License:** Apache 2.0
- **Contact:** [Open-Assistant Discord](https://ykilcher.com/open-assistant-discord)

## Prompting

Two special tokens are used to mark the beginning of user and assistant turns:
`<|prompter|>` and `<|assistant|>`. Each turn ends with an `<|endoftext|>` token.

Input prompt example:
```
<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
```
The input ends with the `<|assistant|>` token to signal that the model should start generating the assistant reply.
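
This template can be applied directly with the Hugging Face `transformers` library. The snippet below is a minimal, illustrative sketch: the repository id and the generation parameters are assumptions for demonstration, not settings from this card.

```python
# Illustrative sketch of querying the model with the OASST prompt format.
# Assumes `transformers`, `torch` and `accelerate` are installed and that
# enough GPU memory is available for a 40B-parameter model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenAssistant/falcon-40b-sft-mix-1226"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # matches the bf16 training dtype below
    device_map="auto",
    trust_remote_code=True,       # Falcon shipped custom modeling code
)

# User turn, end-of-turn token, then <|assistant|> so the model
# continues with the assistant reply.
prompt = (
    "<|prompter|>What is a meme, and what's the history behind this word?"
    "<|endoftext|><|assistant|>"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    top_p=0.9,                    # assumed sampling settings
    temperature=0.8,
    eos_token_id=tokenizer.eos_token_id,  # stop at <|endoftext|>
)
reply = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(reply)
```

For a multi-turn conversation, earlier turns are concatenated in the same format, each ending with `<|endoftext|>`, before the final `<|assistant|>` token.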

## Configuration Details

Model:
```
falcon-40b:
  dtype: bf16
  learning_rate: 1e-5
  model_name: "tiiuae/falcon-40b"
  deepspeed_config: configs/zero3_config_falcon.json
  weight_decay: 0.0
  max_length: 2048
  warmup_steps: 20
  # ...
  per_device_train_batch_size: 18
  per_device_eval_batch_size: 10
  eval_steps: 120
  save_strategy: steps
  save_steps: 613
  num_train_epochs: 8
  save_total_limit: 4
  use_flash_attention: false
  residual_dropout: 0.3
  residual_dropout_lima: true
```
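
The block parses as ordinary YAML, so it can be inspected programmatically. The following hypothetical sketch (the file name is assumed) reads it back and derives one figure the card does not state explicitly:

```python
# Hypothetical: inspect the YAML config above programmatically.
# Assumes it was saved as falcon-40b.yaml and PyYAML is installed;
# the "# ..." elision markers are YAML comments and parse fine.
import yaml

with open("falcon-40b.yaml") as f:
    cfg = yaml.safe_load(f)["falcon-40b"]

# Upper bound on tokens processed per device per optimizer step:
tokens = cfg["per_device_train_batch_size"] * cfg["max_length"]
print(tokens)  # 18 * 2048 = 36864
```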

Dataset:
```
sft9-stage2:
  # ...
  # grade_school_math_instructions: 100.00% (8351)
  # dolly15k: 100.00% (14250)
  use_custom_sampler: true
  datasets:
    - oasst_export:
      # ...
```
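
The commented percentages record the fraction sampled from each dataset. The toy sketch below illustrates fraction-weighted mixing in that spirit; it is not the Open-Assistant trainer's actual custom sampler.

```python
# Toy illustration of fraction-weighted dataset mixing -- NOT the
# Open-Assistant trainer's actual `use_custom_sampler` implementation.
import random

def mix(datasets, fractions, seed=42):
    """Take the given fraction of each dataset, then shuffle the union."""
    rng = random.Random(seed)
    mixed = []
    for data, frac in zip(datasets, fractions):
        k = int(len(data) * frac)
        if k <= len(data):
            mixed.extend(rng.sample(data, k))     # without replacement
        else:
            mixed.extend(rng.choices(data, k=k))  # upsample with replacement
    rng.shuffle(mixed)
    return mixed

# e.g. 100% of 8351 GSM examples and 100% of 14250 Dolly examples:
gsm = [("gsm", i) for i in range(8351)]
dolly = [("dolly", i) for i in range(14250)]
print(len(mix([gsm, dolly], [1.0, 1.0])))  # 22601
```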