aoxo committed
Commit eedd7b8
1 Parent(s): 8bf1880

Update README.md

Files changed (1)
  1. README.md +60 -51
README.md CHANGED
@@ -1,71 +1,61 @@
 ---
 base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
 library_name: peft
 ---
 
 # Model Card for Model ID
 
- <!-- Provide a quick summary of what the model is/does. -->
-
 
 ## Model Details
 
 ### Model Description
 
- <!-- Provide a longer summary of what this model is. -->
-
-
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
 
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
 
 ## Uses
 
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
 ### Direct Use
 
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
 
 ### Downstream Use [optional]
 
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
 
 ### Out-of-Scope Use
 
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
 
 ## Bias, Risks, and Limitations
 
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
 
 ### Recommendations
 
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
 
 ## How to Get Started with the Model
 
@@ -77,28 +67,47 @@ Use the code below to get started with the model.
 
 ### Training Data
 
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
 
 ### Training Procedure
 
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
 
 #### Training Hyperparameters
 
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
 
 ## Evaluation
 
 
 ---
 base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
 library_name: peft
+ license: apache-2.0
+ datasets:
+ - Respair/sharegpt_chatml_compressed
+ - diwank/llmlingua-compressed-text
+ - AlexMaclean/wikipedia-deletion-compressions
+ - AlexMaclean/all-deletion-compressions
+ - sentence-transformers/sentence-compression
+ language:
+ - en
+ tags:
+ - compression
 ---
 
 # Model Card for Model ID
 
+ Memories - Token Compressor for Long-Range Dependency Conversations
 
 ## Model Details
 
 ### Model Description
 
+ This model is a fine-tuned version of the Llama 3.1 8B 4-bit model, specifically trained for token compression tasks. It uses LoRA (Low-Rank Adaptation) for efficient fine-tuning while maintaining the base model's performance.
 
+ - **Developed by:** Alosh Denny
+ - **Funded by [optional]:** nil
+ - **Shared by [optional]:** nil
+ - **Model type:** Token Compressor for Memories
+ - **Language(s) (NLP):** English
+ - **License:** apache-2.0
 
  ## Uses
 
 ### Direct Use
 
+ This model is designed for token compression tasks. It can be used to generate more concise versions of input text while preserving the essential meaning.
 
 
 ### Downstream Use [optional]
 
+ The compressed outputs from this model can be used in various NLP applications where text length is a constraint, such as summarization, efficient text storage, or as input for other language models with token limits.
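
For the token-limit use case, the savings are easy to check with the base tokenizer. A minimal sketch (the example strings are illustrative, not actual model output):

```python
# Sketch: measure token savings before handing compressed text to a
# context-limited downstream model. Strings here are illustrative only.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("unsloth/Meta-Llama-3.1-8B-bnb-4bit")

original = "The meeting that was originally scheduled for Tuesday has been moved to Thursday afternoon."
compressed = "Meeting moved from Tuesday to Thursday afternoon."  # example output

n_orig = len(tokenizer(original)["input_ids"])
n_comp = len(tokenizer(compressed)["input_ids"])
print(f"{n_orig} -> {n_comp} tokens ({n_comp / n_orig:.0%} of original)")
```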
 
 
 ### Out-of-Scope Use
 
+ This model should not be used for tasks that require full preservation of the original text or where nuanced details are critical. It's not suitable for legal, medical, or other domains where precise wording is essential.
 
 
 ## Bias, Risks, and Limitations
 
+ - The model may inadvertently remove important context or nuance during compression.
+ - There might be biases inherited from the base Llama 3.1 model or introduced during fine-tuning.
+ - The model's performance may vary depending on the input text's domain or complexity.
 
 ### Recommendations
 
+ - Users should review the compressed outputs for accuracy and appropriateness before use in critical applications.
+ - It's advisable to test the model on a diverse range of inputs to understand its performance across different text types and domains.
 
 ## How to Get Started with the Model
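
The quick-start code block itself is unchanged by this commit, so it does not appear in the diff. As a hedged sketch only (the adapter repo id and the instruction format below are assumptions, not taken from the card), loading the LoRA adapter on top of the 4-bit base might look like:

```python
# Hedged sketch: "aoxo/token-compressor" is a placeholder adapter id and the
# "Compress the following text:" prompt is an assumed format; neither is
# confirmed by this model card. Loading the bnb-4bit base needs bitsandbytes.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "unsloth/Meta-Llama-3.1-8B-bnb-4bit"
ADAPTER = "aoxo/token-compressor"  # placeholder: replace with this repo's id

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto")
model = PeftModel.from_pretrained(model, ADAPTER)

text = "The meeting that was originally scheduled for Tuesday has been moved to Thursday afternoon."
prompt = f"Compress the following text:\n{text}\n"  # assumed instruction format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```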
 
 
 ### Training Data
 
+ The model was trained on a dataset compiled from various sources, including:
+ - Respair/sharegpt_chatml_compressed
+ - diwank/llmlingua-compressed-text
+ - AlexMaclean/wikipedia-deletion-compressions
+ - AlexMaclean/all-deletion-compressions
+ - sentence-transformers/sentence-compression
 
 ### Training Procedure
 
+ #### Preprocessing
 
+ Prompt-response pairs were extracted from these datasets and compiled into a single dataset (available at https://huggingface.co/datasets/aoxo/token_compressor). Unwanted characters, trailing whitespace, and stray quotation marks were removed.
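
A hedged sketch of that cleanup step (the exact filtering rules and split name are not documented; the regexes and `split="train"` below are assumptions):

```python
# Illustrative preprocessing sketch; the actual cleanup rules behind
# aoxo/token_compressor are only summarized in the card, not published.
import re
from datasets import load_dataset

ds = load_dataset("aoxo/token_compressor", split="train")  # assumed split name

def clean(text: str) -> str:
    text = re.sub(r"[\"'“”‘’]", "", text)        # strip quotation marks
    text = re.sub(r"[^\x20-\x7E\n]", "", text)   # drop unwanted characters
    return text.rstrip()                         # trim trailing whitespace

# Clean every string column, whatever the pair columns are named.
ds = ds.map(lambda ex: {k: clean(v) for k, v in ex.items() if isinstance(v, str)})
```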
 
 #### Training Hyperparameters
 
+ - **Training regime:** bf16 mixed precision
+ - **Optimizer:** paged_adamw_8bit
+ - **Learning rate:** 2e-4
+ - **LR scheduler:** cosine
+ - **Batch size:** 4 per device
+ - **Gradient accumulation steps:** 16
+ - **Number of epochs:** 10
+ - **Max steps:** 700,472
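
As a rough sketch, these settings map onto `transformers.TrainingArguments` as follows; `output_dir` is a placeholder the card does not specify:

```python
# Hedged mapping of the listed hyperparameters onto TrainingArguments.
# output_dir is a placeholder; warmup and weight-decay settings are not given.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="token-compressor",      # placeholder
    bf16=True,                          # bf16 mixed precision
    optim="paged_adamw_8bit",
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=16,
    num_train_epochs=10,
    max_steps=700_472,                  # when set, this overrides num_train_epochs
)
```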
+ #### LoRA Configuration
+
+ - **r:** 8
+ - **lora_alpha:** 16
+ - **lora_dropout:** 0.05
+ - **bias:** none
+ - **task_type:** CAUSAL_LM
+
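
In `peft`, this corresponds to roughly the following `LoraConfig`; `target_modules` is not listed in the card, so the attention projections below are an assumption:

```python
# Hedged reconstruction of the adapter config; target_modules is an
# assumption (the card does not say which modules were adapted).
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
)
```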

+ #### Speeds, Sizes, Times
+
+ - **Total Training Compute Throughput:** 8.62 PFLOPS
+ - **Total Logged Training Time:** 1422.31 hours
+ - **Start Time:** 2024-07-21 02:02:32
+ - **End Time:** 2024-09-18 08:21:08
+ - **Checkpoint Size (adapter):** 13,648,432 bytes (about 13.6 MB)
 
 ## Evaluation