Bighost committed on
Commit d4d545e
1 Parent(s): 035ef0b

Upload model

README.md ADDED
@@ -0,0 +1,199 @@
+ ---
+ library_name: transformers
+ tags: []
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
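The card leaves the quick-start placeholder empty. As a stopgap, here is a minimal, hedged loading sketch: the repo id below is hypothetical (the commit does not name one), and `trust_remote_code=True` is assumed because the upload ships its own `configuration_qwen.py` and `modeling_qwen.py`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Bighost/qwen-7b"  # hypothetical id; replace with this repo's actual Hub path

# trust_remote_code=True lets transformers import the custom QWenConfig /
# QWen model classes bundled with this upload.
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo, trust_remote_code=True, device_map="auto"
)

inputs = tokenizer("Hello, Qwen!", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```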
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
config.json ADDED
The diff for this file is too large to render. See raw diff
 
configuration_qwen.py ADDED
@@ -0,0 +1,65 @@
+ # Copyright (c) Alibaba Cloud.
+ #
+ # This source code is licensed under the license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ from transformers import PretrainedConfig
+
+
+ class QWenConfig(PretrainedConfig):
+     model_type = "qwen"
+     keys_to_ignore_at_inference = ["past_key_values"]
+
+     def __init__(
+         self,
+         vocab_size=151936,
+         hidden_size=4096,
+         num_hidden_layers=32,
+         num_attention_heads=32,
+         emb_dropout_prob=0.0,
+         attn_dropout_prob=0.0,
+         layer_norm_epsilon=1e-6,
+         initializer_range=0.02,
+         max_position_embeddings=8192,
+         scale_attn_weights=True,
+         use_cache=True,
+         bf16=False,
+         fp16=False,
+         fp32=False,
+         kv_channels=128,
+         rotary_pct=1.0,
+         rotary_emb_base=10000,
+         use_dynamic_ntk=True,
+         use_logn_attn=True,
+         use_flash_attn="auto",
+         intermediate_size=22016,
+         no_bias=True,
+         tie_word_embeddings=False,
+         **kwargs,
+     ):
+         self.vocab_size = vocab_size
+         self.hidden_size = hidden_size
+         self.intermediate_size = intermediate_size
+         self.num_hidden_layers = num_hidden_layers
+         self.num_attention_heads = num_attention_heads
+         self.emb_dropout_prob = emb_dropout_prob
+         self.attn_dropout_prob = attn_dropout_prob
+         self.layer_norm_epsilon = layer_norm_epsilon
+         self.initializer_range = initializer_range
+         self.scale_attn_weights = scale_attn_weights
+         self.use_cache = use_cache
+         self.max_position_embeddings = max_position_embeddings
+         self.bf16 = bf16
+         self.fp16 = fp16
+         self.fp32 = fp32
+         self.kv_channels = kv_channels
+         self.rotary_pct = rotary_pct
+         self.rotary_emb_base = rotary_emb_base
+         self.use_dynamic_ntk = use_dynamic_ntk
+         self.use_logn_attn = use_logn_attn
+         self.use_flash_attn = use_flash_attn
+         self.no_bias = no_bias
+         super().__init__(
+             tie_word_embeddings=tie_word_embeddings,
+             **kwargs
+         )
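
These are stock Qwen-7B-scale hyperparameters (32 layers, 32 heads, hidden size 4096, vocab 151936). A quick, hedged sanity check — assuming the file sits on your import path — shows the class behaves like any other `PretrainedConfig` subclass:

```python
from configuration_qwen import QWenConfig

# Build a config with the defaults above, overriding one field as an example.
config = QWenConfig(use_flash_attn=False)

print(config.model_type)        # "qwen"
print(config.hidden_size)       # 4096
print(config.to_json_string())  # serializes to the same shape as config.json
```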
generation_config.json ADDED
@@ -0,0 +1,11 @@
+ {
+   "chat_format": "chatml",
+   "do_sample": true,
+   "eos_token_id": 151643,
+   "max_new_tokens": 512,
+   "max_window_size": 6144,
+   "pad_token_id": 151643,
+   "top_k": 0,
+   "top_p": 0.5,
+   "transformers_version": "4.41.2"
+ }
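
A few notes on these defaults: `top_k: 0` disables top-k filtering in transformers, so sampling is governed by `top_p: 0.5`; `eos_token_id` and `pad_token_id` are both 151643 (Qwen's `<|endoftext|>`); and `chat_format`/`max_window_size` are custom Qwen fields read by the repo's own generation helpers rather than by vanilla `generate()`. A hedged sketch of overriding them per call, reusing `model`/`inputs` from the quick-start example above (repo id again hypothetical):

```python
from transformers import GenerationConfig

gen = GenerationConfig.from_pretrained("Bighost/qwen-7b")  # hypothetical id
gen.top_p = 0.8  # loosen nucleus sampling relative to the stored 0.5

out = model.generate(**inputs, generation_config=gen)
```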
model-00001-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b255a2ee10bc92d2549cd98f37ad0614e4e451db8cfd35f6eda8a936ca2e1e57
+ size 4958459820
model-00002-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f9ac904f2fc4585d977b7a52860e26a3cd6c9ce2c7745dbd02ce7b5122888de2
+ size 4013683852
model.safetensors.index.json ADDED
@@ -0,0 +1,586 @@
+ {
+   "metadata": {
+     "total_size": 8972083360
+   },
+   "weight_map": {
+     "lm_head.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.0.attn.c_attn.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.0.attn.c_attn.bias": "model-00001-of-00002.safetensors",
+     "transformer.h.0.attn.c_attn.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.0.attn.c_attn.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.0.attn.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.0.attn.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.0.attn.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.0.ln_1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.0.ln_2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.0.mlp.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.0.mlp.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.0.mlp.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.0.mlp.w1.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.0.mlp.w1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.0.mlp.w1.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.0.mlp.w2.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.0.mlp.w2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.0.mlp.w2.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.1.attn.c_attn.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.1.attn.c_attn.bias": "model-00001-of-00002.safetensors",
+     "transformer.h.1.attn.c_attn.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.1.attn.c_attn.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.1.attn.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.1.attn.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.1.attn.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.1.ln_1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.1.ln_2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.1.mlp.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.1.mlp.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.1.mlp.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.1.mlp.w1.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.1.mlp.w1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.1.mlp.w1.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.1.mlp.w2.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.1.mlp.w2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.1.mlp.w2.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.10.attn.c_attn.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.10.attn.c_attn.bias": "model-00001-of-00002.safetensors",
+     "transformer.h.10.attn.c_attn.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.10.attn.c_attn.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.10.attn.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.10.attn.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.10.attn.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.10.ln_1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.10.ln_2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.10.mlp.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.10.mlp.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.10.mlp.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.10.mlp.w1.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.10.mlp.w1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.10.mlp.w1.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.10.mlp.w2.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.10.mlp.w2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.10.mlp.w2.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.11.attn.c_attn.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.11.attn.c_attn.bias": "model-00001-of-00002.safetensors",
+     "transformer.h.11.attn.c_attn.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.11.attn.c_attn.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.11.attn.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.11.attn.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.11.attn.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.11.ln_1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.11.ln_2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.11.mlp.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.11.mlp.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.11.mlp.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.11.mlp.w1.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.11.mlp.w1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.11.mlp.w1.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.11.mlp.w2.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.11.mlp.w2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.11.mlp.w2.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.12.attn.c_attn.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.12.attn.c_attn.bias": "model-00001-of-00002.safetensors",
+     "transformer.h.12.attn.c_attn.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.12.attn.c_attn.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.12.attn.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.12.attn.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.12.attn.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.12.ln_1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.12.ln_2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.12.mlp.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.12.mlp.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.12.mlp.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.12.mlp.w1.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.12.mlp.w1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.12.mlp.w1.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.12.mlp.w2.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.12.mlp.w2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.12.mlp.w2.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.13.attn.c_attn.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.13.attn.c_attn.bias": "model-00001-of-00002.safetensors",
+     "transformer.h.13.attn.c_attn.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.13.attn.c_attn.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.13.attn.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.13.attn.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.13.attn.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.13.ln_1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.13.ln_2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.13.mlp.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.13.mlp.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.13.mlp.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.13.mlp.w1.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.13.mlp.w1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.13.mlp.w1.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.13.mlp.w2.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.13.mlp.w2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.13.mlp.w2.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.14.attn.c_attn.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.14.attn.c_attn.bias": "model-00001-of-00002.safetensors",
+     "transformer.h.14.attn.c_attn.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.14.attn.c_attn.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.14.attn.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.14.attn.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.14.attn.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.14.ln_1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.14.ln_2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.14.mlp.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.14.mlp.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.14.mlp.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.14.mlp.w1.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.14.mlp.w1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.14.mlp.w1.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.14.mlp.w2.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.14.mlp.w2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.14.mlp.w2.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.15.attn.c_attn.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.15.attn.c_attn.bias": "model-00001-of-00002.safetensors",
+     "transformer.h.15.attn.c_attn.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.15.attn.c_attn.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.15.attn.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.15.attn.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.15.attn.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.15.ln_1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.15.ln_2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.15.mlp.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.15.mlp.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.15.mlp.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.15.mlp.w1.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.15.mlp.w1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.15.mlp.w1.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.15.mlp.w2.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.15.mlp.w2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.15.mlp.w2.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.16.attn.c_attn.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.16.attn.c_attn.bias": "model-00001-of-00002.safetensors",
+     "transformer.h.16.attn.c_attn.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.16.attn.c_attn.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.16.attn.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.16.attn.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.16.attn.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.16.ln_1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.16.ln_2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.16.mlp.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.16.mlp.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.16.mlp.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.16.mlp.w1.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.16.mlp.w1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.16.mlp.w1.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.16.mlp.w2.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.16.mlp.w2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.16.mlp.w2.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.17.attn.c_attn.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.17.attn.c_attn.bias": "model-00001-of-00002.safetensors",
+     "transformer.h.17.attn.c_attn.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.17.attn.c_attn.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.17.attn.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.17.attn.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.17.attn.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.17.ln_1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.17.ln_2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.17.mlp.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.17.mlp.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.17.mlp.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.17.mlp.w1.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.17.mlp.w1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.17.mlp.w1.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.17.mlp.w2.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.17.mlp.w2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.17.mlp.w2.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.18.attn.c_attn.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.18.attn.c_attn.bias": "model-00001-of-00002.safetensors",
+     "transformer.h.18.attn.c_attn.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.18.attn.c_attn.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.18.attn.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.18.attn.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.18.attn.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.18.ln_1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.18.ln_2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.18.mlp.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.18.mlp.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.18.mlp.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.18.mlp.w1.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.18.mlp.w1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.18.mlp.w1.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.18.mlp.w2.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.18.mlp.w2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.18.mlp.w2.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.19.attn.c_attn.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.19.attn.c_attn.bias": "model-00002-of-00002.safetensors",
+     "transformer.h.19.attn.c_attn.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.19.attn.c_attn.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.19.attn.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.19.attn.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.19.attn.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.19.ln_1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.19.ln_2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.19.mlp.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.19.mlp.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.19.mlp.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.19.mlp.w1.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.19.mlp.w1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.19.mlp.w1.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.19.mlp.w2.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.19.mlp.w2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.19.mlp.w2.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.2.attn.c_attn.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.2.attn.c_attn.bias": "model-00001-of-00002.safetensors",
+     "transformer.h.2.attn.c_attn.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.2.attn.c_attn.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.2.attn.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.2.attn.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.2.attn.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.2.ln_1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.2.ln_2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.2.mlp.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.2.mlp.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.2.mlp.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.2.mlp.w1.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.2.mlp.w1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.2.mlp.w1.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.2.mlp.w2.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.2.mlp.w2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.2.mlp.w2.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.20.attn.c_attn.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.20.attn.c_attn.bias": "model-00002-of-00002.safetensors",
+     "transformer.h.20.attn.c_attn.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.20.attn.c_attn.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.20.attn.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.20.attn.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.20.attn.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.20.ln_1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.20.ln_2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.20.mlp.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.20.mlp.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.20.mlp.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.20.mlp.w1.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.20.mlp.w1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.20.mlp.w1.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.20.mlp.w2.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.20.mlp.w2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.20.mlp.w2.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.21.attn.c_attn.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.21.attn.c_attn.bias": "model-00002-of-00002.safetensors",
+     "transformer.h.21.attn.c_attn.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.21.attn.c_attn.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.21.attn.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.21.attn.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.21.attn.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.21.ln_1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.21.ln_2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.21.mlp.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.21.mlp.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.21.mlp.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.21.mlp.w1.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.21.mlp.w1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.21.mlp.w1.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.21.mlp.w2.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.21.mlp.w2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.21.mlp.w2.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.22.attn.c_attn.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.22.attn.c_attn.bias": "model-00002-of-00002.safetensors",
+     "transformer.h.22.attn.c_attn.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.22.attn.c_attn.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.22.attn.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.22.attn.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.22.attn.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.22.ln_1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.22.ln_2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.22.mlp.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.22.mlp.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.22.mlp.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.22.mlp.w1.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.22.mlp.w1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.22.mlp.w1.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.22.mlp.w2.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.22.mlp.w2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.22.mlp.w2.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.23.attn.c_attn.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.23.attn.c_attn.bias": "model-00002-of-00002.safetensors",
+     "transformer.h.23.attn.c_attn.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.23.attn.c_attn.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.23.attn.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.23.attn.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.23.attn.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.23.ln_1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.23.ln_2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.23.mlp.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.23.mlp.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.23.mlp.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.23.mlp.w1.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.23.mlp.w1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.23.mlp.w1.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.23.mlp.w2.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.23.mlp.w2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.23.mlp.w2.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.24.attn.c_attn.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.24.attn.c_attn.bias": "model-00002-of-00002.safetensors",
+     "transformer.h.24.attn.c_attn.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.24.attn.c_attn.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.24.attn.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.24.attn.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.24.attn.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.24.ln_1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.24.ln_2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.24.mlp.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.24.mlp.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.24.mlp.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.24.mlp.w1.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.24.mlp.w1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.24.mlp.w1.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.24.mlp.w2.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.24.mlp.w2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.24.mlp.w2.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.25.attn.c_attn.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.25.attn.c_attn.bias": "model-00002-of-00002.safetensors",
+     "transformer.h.25.attn.c_attn.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.25.attn.c_attn.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.25.attn.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.25.attn.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.25.attn.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.25.ln_1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.25.ln_2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.25.mlp.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.25.mlp.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.25.mlp.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.25.mlp.w1.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.25.mlp.w1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.25.mlp.w1.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.25.mlp.w2.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.25.mlp.w2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.25.mlp.w2.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.26.attn.c_attn.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.26.attn.c_attn.bias": "model-00002-of-00002.safetensors",
+     "transformer.h.26.attn.c_attn.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.26.attn.c_attn.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.26.attn.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.26.attn.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.26.attn.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.26.ln_1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.26.ln_2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.26.mlp.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.26.mlp.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.26.mlp.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.26.mlp.w1.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.26.mlp.w1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.26.mlp.w1.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.26.mlp.w2.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.26.mlp.w2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.26.mlp.w2.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.27.attn.c_attn.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.27.attn.c_attn.bias": "model-00002-of-00002.safetensors",
+     "transformer.h.27.attn.c_attn.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.27.attn.c_attn.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.27.attn.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.27.attn.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.27.attn.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.27.ln_1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.27.ln_2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.27.mlp.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.27.mlp.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.27.mlp.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.27.mlp.w1.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.27.mlp.w1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.27.mlp.w1.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.27.mlp.w2.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.27.mlp.w2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.27.mlp.w2.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.28.attn.c_attn.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.28.attn.c_attn.bias": "model-00002-of-00002.safetensors",
+     "transformer.h.28.attn.c_attn.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.28.attn.c_attn.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.28.attn.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.28.attn.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.28.attn.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.28.ln_1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.28.ln_2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.28.mlp.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.28.mlp.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.28.mlp.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.28.mlp.w1.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.28.mlp.w1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.28.mlp.w1.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.28.mlp.w2.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.28.mlp.w2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.28.mlp.w2.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.29.attn.c_attn.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.29.attn.c_attn.bias": "model-00002-of-00002.safetensors",
+     "transformer.h.29.attn.c_attn.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.29.attn.c_attn.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.29.attn.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.29.attn.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.29.attn.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.29.ln_1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.29.ln_2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.29.mlp.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.29.mlp.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.29.mlp.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.29.mlp.w1.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.29.mlp.w1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.29.mlp.w1.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.29.mlp.w2.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.29.mlp.w2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.29.mlp.w2.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.3.attn.c_attn.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.3.attn.c_attn.bias": "model-00001-of-00002.safetensors",
+     "transformer.h.3.attn.c_attn.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.3.attn.c_attn.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.3.attn.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.3.attn.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.3.attn.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.3.ln_1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.3.ln_2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.3.mlp.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.3.mlp.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.3.mlp.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.3.mlp.w1.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.3.mlp.w1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.3.mlp.w1.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.3.mlp.w2.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.3.mlp.w2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.3.mlp.w2.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.30.attn.c_attn.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.30.attn.c_attn.bias": "model-00002-of-00002.safetensors",
+     "transformer.h.30.attn.c_attn.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.30.attn.c_attn.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.30.attn.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.30.attn.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.30.attn.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.30.ln_1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.30.ln_2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.30.mlp.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.30.mlp.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.30.mlp.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.30.mlp.w1.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.30.mlp.w1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.30.mlp.w1.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.30.mlp.w2.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.30.mlp.w2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.30.mlp.w2.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.31.attn.c_attn.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.31.attn.c_attn.bias": "model-00002-of-00002.safetensors",
+     "transformer.h.31.attn.c_attn.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.31.attn.c_attn.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.31.attn.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.31.attn.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.31.attn.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.31.ln_1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.31.ln_2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.31.mlp.c_proj.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.31.mlp.c_proj.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.31.mlp.c_proj.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.31.mlp.w1.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.31.mlp.w1.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.31.mlp.w1.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.31.mlp.w2.SCB": "model-00002-of-00002.safetensors",
+     "transformer.h.31.mlp.w2.weight": "model-00002-of-00002.safetensors",
+     "transformer.h.31.mlp.w2.weight_format": "model-00002-of-00002.safetensors",
+     "transformer.h.4.attn.c_attn.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.4.attn.c_attn.bias": "model-00001-of-00002.safetensors",
+     "transformer.h.4.attn.c_attn.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.4.attn.c_attn.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.4.attn.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.4.attn.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.4.attn.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.4.ln_1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.4.ln_2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.4.mlp.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.4.mlp.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.4.mlp.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.4.mlp.w1.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.4.mlp.w1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.4.mlp.w1.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.4.mlp.w2.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.4.mlp.w2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.4.mlp.w2.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.5.attn.c_attn.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.5.attn.c_attn.bias": "model-00001-of-00002.safetensors",
+     "transformer.h.5.attn.c_attn.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.5.attn.c_attn.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.5.attn.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.5.attn.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.5.attn.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.5.ln_1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.5.ln_2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.5.mlp.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.5.mlp.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.5.mlp.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.5.mlp.w1.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.5.mlp.w1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.5.mlp.w1.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.5.mlp.w2.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.5.mlp.w2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.5.mlp.w2.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.6.attn.c_attn.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.6.attn.c_attn.bias": "model-00001-of-00002.safetensors",
+     "transformer.h.6.attn.c_attn.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.6.attn.c_attn.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.6.attn.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.6.attn.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.6.attn.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.6.ln_1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.6.ln_2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.6.mlp.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.6.mlp.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.6.mlp.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.6.mlp.w1.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.6.mlp.w1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.6.mlp.w1.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.6.mlp.w2.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.6.mlp.w2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.6.mlp.w2.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.7.attn.c_attn.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.7.attn.c_attn.bias": "model-00001-of-00002.safetensors",
+     "transformer.h.7.attn.c_attn.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.7.attn.c_attn.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.7.attn.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.7.attn.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.7.attn.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.7.ln_1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.7.ln_2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.7.mlp.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.7.mlp.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.7.mlp.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.7.mlp.w1.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.7.mlp.w1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.7.mlp.w1.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.7.mlp.w2.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.7.mlp.w2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.7.mlp.w2.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.8.attn.c_attn.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.8.attn.c_attn.bias": "model-00001-of-00002.safetensors",
+     "transformer.h.8.attn.c_attn.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.8.attn.c_attn.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.8.attn.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.8.attn.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.8.attn.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.8.ln_1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.8.ln_2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.8.mlp.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.8.mlp.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.8.mlp.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.8.mlp.w1.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.8.mlp.w1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.8.mlp.w1.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.8.mlp.w2.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.8.mlp.w2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.8.mlp.w2.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.9.attn.c_attn.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.9.attn.c_attn.bias": "model-00001-of-00002.safetensors",
+     "transformer.h.9.attn.c_attn.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.9.attn.c_attn.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.9.attn.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.9.attn.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.9.attn.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.9.ln_1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.9.ln_2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.9.mlp.c_proj.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.9.mlp.c_proj.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.9.mlp.c_proj.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.9.mlp.w1.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.9.mlp.w1.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.9.mlp.w1.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.h.9.mlp.w2.SCB": "model-00001-of-00002.safetensors",
+     "transformer.h.9.mlp.w2.weight": "model-00001-of-00002.safetensors",
+     "transformer.h.9.mlp.w2.weight_format": "model-00001-of-00002.safetensors",
+     "transformer.ln_f.weight": "model-00002-of-00002.safetensors",
+     "transformer.wte.weight": "model-00001-of-00002.safetensors"
+   }
+ }
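
The `*.SCB` and `*.weight_format` entries are the giveaway that this checkpoint was serialized from a bitsandbytes LLM.int8()-quantized model: each quantized linear layer stores per-row scale constants (`SCB`) alongside its int8 weights, which is also why the shards total ~9 GB rather than the roughly 15 GB an fp16 checkpoint of this architecture would occupy. A small sketch for inspecting the index, assuming the file has been downloaded locally:

```python
import json

# Map each tensor to its shard using the index file above.
with open("model.safetensors.index.json") as f:
    index = json.load(f)

print(index["metadata"]["total_size"])                # 8972083360 bytes (~8.4 GiB)
print(index["weight_map"]["transformer.wte.weight"])  # model-00001-of-00002.safetensors

# Count the bitsandbytes int8 artifacts: one SCB scale tensor per quantized Linear.
scb_keys = [k for k in index["weight_map"] if k.endswith(".SCB")]
print(len(scb_keys))  # 160 = 32 layers x 5 quantized projections
```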
modeling_qwen.py ADDED
@@ -0,0 +1,1188 @@
1
+ # Copyright (c) Alibaba Cloud.
2
+ #
3
+ # This source code is licensed under the license found in the
4
+ # LICENSE file in the root directory of this source tree.
5
+
6
+ import importlib
7
+ import math
8
+ from typing import TYPE_CHECKING, Optional, Tuple, Union, Callable, List, Any, Generator
9
+
10
+ import torch
11
+ import torch.nn.functional as F
12
+ import torch.utils.checkpoint
13
+ from torch.cuda.amp import autocast
14
+
15
+ from torch.nn import CrossEntropyLoss
16
+ from transformers import PreTrainedTokenizer, GenerationConfig, StoppingCriteriaList
17
+ from transformers.generation.logits_process import LogitsProcessorList
18
+
19
+ if TYPE_CHECKING:
20
+ from transformers.generation.streamers import BaseStreamer
21
+ from transformers.generation.utils import GenerateOutput
22
+ from transformers.modeling_outputs import (
23
+ BaseModelOutputWithPast,
24
+ CausalLMOutputWithPast,
25
+ )
26
+ from transformers.modeling_utils import PreTrainedModel
27
+ from transformers.utils import logging
28
+
29
+ try:
30
+ from einops import rearrange
31
+ except ImportError:
32
+ rearrange = None
33
+ from torch import nn
34
+
35
+ SUPPORT_CUDA = torch.cuda.is_available()
36
+ SUPPORT_BF16 = SUPPORT_CUDA and torch.cuda.is_bf16_supported()
37
+ SUPPORT_FP16 = SUPPORT_CUDA and torch.cuda.get_device_capability(0)[0] >= 7
38
+
39
+ from .configuration_qwen import QWenConfig
40
+ from .qwen_generation_utils import (
41
+ HistoryType,
42
+ make_context,
43
+ decode_tokens,
44
+ get_stop_words_ids,
45
+ StopWordsLogitsProcessor,
46
+ )
47
+
48
+ from transformers.models.llama.modeling_llama import LlamaRotaryEmbedding, apply_rotary_pos_emb
49
+
50
+
51
+ logger = logging.get_logger(__name__)
52
+
53
+ _CHECKPOINT_FOR_DOC = "qwen"
54
+ _CONFIG_FOR_DOC = "QWenConfig"
55
+
56
+ QWen_PRETRAINED_MODEL_ARCHIVE_LIST = ["qwen-7b"]
57
+
58
+ _ERROR_BAD_CHAT_FORMAT = """\
59
+ It looks like you are using the pretrained model (rather than the chat model) for chatting, since the chat_format in generation_config is not "chatml".
60
+ If you are directly using the model downloaded from Huggingface, please make sure you are using our "Qwen/Qwen-7B-Chat" Huggingface model (rather than "Qwen/Qwen-7B") when you call model.chat().
61
+ 我们检测到您可能在使用预训练模型(而非chat模型)进行多轮chat,因为您当前在generation_config指定的chat_format,并未设置为我们在对话中所支持的"chatml"格式。
62
+ 如果您在直接使用我们从Huggingface提供的模型,请确保您在调用model.chat()时,使用的是"Qwen/Qwen-7B-Chat"模型(而非"Qwen/Qwen-7B"预训练模型)。
63
+ """
64
+
65
+ _SENTINEL = object()
66
+ _ERROR_STREAM_IN_CHAT = """\
67
+ Passing the argument `stream` to model.chat() is buggy, deprecated, and marked for removal. Please use model.chat_stream(...) instead of model.chat(..., stream=True).
68
+ 向model.chat()传入参数stream的用法可能存在Bug,该用法已被废弃,将在未来被移除。请使用model.chat_stream(...)代替model.chat(..., stream=True)。
69
+ """
70
+
71
+ _ERROR_INPUT_CPU_QUERY_WITH_FLASH_ATTN_ACTIVATED = """\
72
+ We detect that you have activated flash attention support but are running model computation on the CPU. Please make sure that your input data has been placed on the GPU. If you actually want to run CPU computation, please follow the readme and set device_map="cpu" to disable flash attention when loading the model (calling AutoModelForCausalLM.from_pretrained).
73
+ 检测到您的模型已激活了flash attention支持,但正在执行CPU运算任务。如使用flash attention,请您确认模型输入已经传到GPU上。如果您确认要执行CPU运算,请您在载入模型(调用AutoModelForCausalLM.from_pretrained)时,按照readme说法,指定device_map="cpu"以禁用flash attention。
74
+ """
75
+
76
+ apply_rotary_emb_func = None
77
+ rms_norm = None
78
+ flash_attn_unpadded_func = None
79
+
80
+
81
+ def _import_flash_attn():
82
+ global apply_rotary_emb_func, rms_norm, flash_attn_unpadded_func
83
+ try:
84
+ from flash_attn.layers.rotary import apply_rotary_emb_func as __apply_rotary_emb_func
85
+ apply_rotary_emb_func = __apply_rotary_emb_func
86
+ except ImportError:
87
+ logger.warn(
88
+ "Warning: failed to import flash_attn rotary; please install the FlashAttention rotary kernel for higher efficiency: "
89
+ "https://github.com/Dao-AILab/flash-attention/tree/main/csrc/rotary"
90
+ )
91
+
92
+ try:
93
+ from flash_attn.ops.rms_norm import rms_norm as __rms_norm
94
+ rms_norm = __rms_norm
95
+ except ImportError:
96
+ logger.warn(
97
+ "Warning: failed to import flash_attn rms_norm; please install the FlashAttention layer_norm kernel for higher efficiency: "
98
+ "https://github.com/Dao-AILab/flash-attention/tree/main/csrc/layer_norm"
99
+ )
100
+
101
+ try:
102
+ import flash_attn
103
+ if not hasattr(flash_attn, '__version__'):
104
+ from flash_attn.flash_attn_interface import flash_attn_unpadded_func as __flash_attn_unpadded_func
105
+ else:
106
+ if int(flash_attn.__version__.split(".")[0]) >= 2:
107
+ from flash_attn.flash_attn_interface import flash_attn_varlen_func as __flash_attn_unpadded_func
108
+ else:
109
+ from flash_attn.flash_attn_interface import flash_attn_unpadded_func as __flash_attn_unpadded_func
110
+ flash_attn_unpadded_func = __flash_attn_unpadded_func
111
+ except ImportError:
112
+ logger.warn(
113
+ "Warning: failed to import flash_attn; please install FlashAttention for higher efficiency: "
114
+ "https://github.com/Dao-AILab/flash-attention"
115
+ )
116
+
117
+
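+ # A sketch of the dispatch above (illustrative, mirrors the logic in
+ # _import_flash_attn): flash-attn >= 2 renamed flash_attn_unpadded_func to
+ # flash_attn_varlen_func, so the import is selected by the major version:
+ #   major = int(flash_attn.__version__.split(".")[0])   # e.g. "2.3.0" -> 2
+ #   name = "flash_attn_varlen_func" if major >= 2 else "flash_attn_unpadded_func"
+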
118
+ class FlashSelfAttention(torch.nn.Module):
119
+ def __init__(
120
+ self,
121
+ causal=False,
122
+ softmax_scale=None,
123
+ attention_dropout=0.0,
124
+ ):
125
+ super().__init__()
126
+ assert flash_attn_unpadded_func is not None, (
127
+ "Please install FlashAttention first, " "e.g., with pip install flash-attn"
128
+ )
129
+ assert (
130
+ rearrange is not None
131
+ ), "Please install einops first, e.g., with pip install einops"
132
+ self.causal = causal
133
+ self.softmax_scale = softmax_scale
134
+ self.dropout_p = attention_dropout
135
+
136
+ def forward(self, q, k, v):
137
+ assert all((i.dtype in [torch.float16, torch.bfloat16] for i in (q, k, v)))
138
+ assert all((i.is_cuda for i in (q, k, v)))
139
+ batch_size, seqlen_q = q.shape[0], q.shape[1]
140
+ seqlen_k = k.shape[1]
141
+
142
+ q, k, v = [rearrange(x, "b s ... -> (b s) ...") for x in [q, k, v]]
143
+ cu_seqlens_q = torch.arange(
144
+ 0,
145
+ (batch_size + 1) * seqlen_q,
146
+ step=seqlen_q,
147
+ dtype=torch.int32,
148
+ device=q.device,
149
+ )
150
+
151
+ if self.training:
152
+ assert seqlen_k == seqlen_q
153
+
154
+ is_causal = self.causal
155
+ cu_seqlens_k = cu_seqlens_q
156
+ else:
157
+ is_causal = seqlen_q == seqlen_k
158
+ cu_seqlens_k = torch.arange(
159
+ 0,
160
+ (batch_size + 1) * seqlen_k,
161
+ step=seqlen_k,
162
+ dtype=torch.int32,
163
+ device=q.device,
164
+ )
165
+ self.dropout_p = 0
166
+
167
+ output = flash_attn_unpadded_func(
168
+ q,
169
+ k,
170
+ v,
171
+ cu_seqlens_q,
172
+ cu_seqlens_k,
173
+ seqlen_q,
174
+ seqlen_k,
175
+ self.dropout_p,
176
+ softmax_scale=self.softmax_scale,
177
+ causal=is_causal,
178
+ )
179
+
180
+ new_shape = (batch_size, output.shape[0] // batch_size) + output.shape[1:]
181
+ output = output.view(new_shape)
182
+ return output
183
+
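+ # Note on the varlen kernel above (sketch): cu_seqlens_q/cu_seqlens_k are the
+ # cumulative sequence boundaries of the flattened (b s) layout expected by
+ # flash_attn_unpadded_func, e.g. batch_size=2, seqlen=5 gives
+ # tensor([0, 5, 10], dtype=torch.int32).
+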
184
+
185
+ class QWenAttention(nn.Module):
186
+ def __init__(self, config):
187
+ super().__init__()
188
+
189
+ self.register_buffer("masked_bias", torch.tensor(-1e4), persistent=False)
190
+ self.seq_length = config.seq_length
191
+
192
+ self.hidden_size = config.hidden_size
193
+ self.split_size = config.hidden_size
194
+ self.num_heads = config.num_attention_heads
195
+ self.head_dim = self.hidden_size // self.num_heads
196
+
197
+ self.use_flash_attn = config.use_flash_attn
198
+ self.scale_attn_weights = True
199
+
200
+ self.projection_size = config.kv_channels * config.num_attention_heads
201
+
202
+ assert self.projection_size % config.num_attention_heads == 0
203
+ self.hidden_size_per_attention_head = (
204
+ self.projection_size // config.num_attention_heads
205
+ )
206
+
207
+ self.c_attn = nn.Linear(config.hidden_size, 3 * self.projection_size)
208
+
209
+ self.c_proj = nn.Linear(
210
+ config.hidden_size, self.projection_size, bias=not config.no_bias
211
+ )
212
+
213
+ self.is_fp32 = not (config.bf16 or config.fp16)
214
+ if (
215
+ self.use_flash_attn
216
+ and flash_attn_unpadded_func is not None
217
+ and not self.is_fp32
218
+ ):
219
+ self.core_attention_flash = FlashSelfAttention(
220
+ causal=True, attention_dropout=config.attn_dropout_prob
221
+ )
222
+ self.bf16 = config.bf16
223
+
224
+ self.use_dynamic_ntk = config.use_dynamic_ntk
225
+ self.use_logn_attn = config.use_logn_attn
226
+
227
+ logn_list = [
228
+ math.log(i, self.seq_length) if i > self.seq_length else 1
229
+ for i in range(1, 32768)
230
+ ]
231
+ self.logn_tensor = torch.tensor(logn_list)[None, :, None, None]
232
+
233
+ self.attn_dropout = nn.Dropout(config.attn_dropout_prob)
234
+
235
+ def _attn(self, query, key, value, registered_causal_mask, attention_mask=None, head_mask=None):
236
+ attn_weights = torch.matmul(query, key.transpose(-1, -2))
237
+
238
+ if self.scale_attn_weights:
239
+ attn_weights = attn_weights / torch.full(
240
+ [],
241
+ value.size(-1) ** 0.5,
242
+ dtype=attn_weights.dtype,
243
+ device=attn_weights.device,
244
+ )
245
+
246
+ query_length, key_length = query.size(-2), key.size(-2)
247
+ causal_mask = registered_causal_mask[
248
+ :, :, key_length - query_length : key_length, :key_length
249
+ ]
250
+ mask_value = torch.finfo(attn_weights.dtype).min
251
+ mask_value = torch.full([], mask_value, dtype=attn_weights.dtype).to(
252
+ attn_weights.device
253
+ )
254
+ attn_weights = torch.where(
255
+ causal_mask, attn_weights.to(attn_weights.dtype), mask_value
256
+ )
257
+
258
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1)
259
+
260
+ attn_weights = attn_weights.type(value.dtype)
261
+ attn_weights = self.attn_dropout(attn_weights)
262
+
263
+ if head_mask is not None:
264
+ attn_weights = attn_weights * head_mask
265
+
266
+ attn_output = torch.matmul(attn_weights, value)
267
+ attn_output = attn_output.transpose(1, 2)
268
+
269
+ return attn_output, attn_weights
270
+
271
+ def _upcast_and_reordered_attn(
272
+ self, query, key, value, registered_causal_mask, attention_mask=None, head_mask=None
273
+ ):
274
+ bsz, num_heads, q_seq_len, dk = query.size()
275
+ _, _, k_seq_len, _ = key.size()
276
+
277
+ attn_weights = torch.empty(
278
+ bsz * num_heads,
279
+ q_seq_len,
280
+ k_seq_len,
281
+ dtype=torch.float32,
282
+ device=query.device,
283
+ )
284
+
285
+ scale_factor = 1.0
286
+ if self.scale_attn_weights:
287
+ scale_factor /= float(value.size(-1)) ** 0.5
288
+
289
+ with autocast(enabled=False):
290
+ q, k = query.reshape(-1, q_seq_len, dk), key.transpose(-1, -2).reshape(
291
+ -1, dk, k_seq_len
292
+ )
293
+ attn_weights = torch.baddbmm(
294
+ attn_weights, q.float(), k.float(), beta=0, alpha=scale_factor
295
+ )
296
+ attn_weights = attn_weights.reshape(bsz, num_heads, q_seq_len, k_seq_len)
297
+
298
+ query_length, key_length = query.size(-2), key.size(-2)
299
+ causal_mask = registered_causal_mask[
300
+ :, :, key_length - query_length : key_length, :key_length
301
+ ]
302
+ mask_value = torch.finfo(attn_weights.dtype).min
303
+ mask_value = torch.tensor(mask_value, dtype=attn_weights.dtype).to(
304
+ attn_weights.device
305
+ )
306
+ attn_weights = torch.where(causal_mask, attn_weights, mask_value)
307
+
308
+ if attention_mask is not None:
309
+ attn_weights = attn_weights + attention_mask
310
+
311
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1)
312
+
313
+ if attn_weights.dtype != torch.float32:
314
+ raise RuntimeError(
315
+ "Error with upcasting, attn_weights does not have dtype torch.float32"
316
+ )
317
+ attn_weights = attn_weights.type(value.dtype)
318
+ attn_weights = self.attn_dropout(attn_weights)
319
+
320
+ if head_mask is not None:
321
+ attn_weights = attn_weights * head_mask
322
+
323
+ attn_output = torch.matmul(attn_weights, value)
324
+
325
+ return attn_output, attn_weights
326
+
327
+ def _split_heads(self, tensor, num_heads, attn_head_size):
328
+ new_shape = tensor.size()[:-1] + (num_heads, attn_head_size)
329
+ tensor = tensor.view(new_shape)
330
+ return tensor
331
+
332
+ def _merge_heads(self, tensor, num_heads, attn_head_size):
333
+ tensor = tensor.contiguous()
334
+ new_shape = tensor.size()[:-2] + (num_heads * attn_head_size,)
335
+ return tensor.view(new_shape)
336
+
337
+ def forward(
338
+ self,
339
+ hidden_states: Optional[Tuple[torch.FloatTensor]],
340
+ rotary_pos_emb: Optional[List[torch.Tensor]] = None,
341
+ registered_causal_mask: Optional[torch.Tensor] = None,
342
+ layer_past: Optional[Tuple[torch.Tensor]] = None,
343
+ attention_mask: Optional[torch.FloatTensor] = None,
344
+ head_mask: Optional[torch.FloatTensor] = None,
345
+ position_ids: Optional[torch.LongTensor] = None,
346
+ encoder_hidden_states: Optional[torch.Tensor] = None,
347
+ encoder_attention_mask: Optional[torch.FloatTensor] = None,
348
+ output_attentions: Optional[bool] = False,
349
+ use_cache: Optional[bool] = False,
350
+ ):
351
+
352
+ mixed_x_layer = self.c_attn(hidden_states)
353
+
354
+ query, key, value = mixed_x_layer.split(self.split_size, dim=2)
355
+
356
+ query = self._split_heads(query, self.num_heads, self.head_dim)
357
+ key = self._split_heads(key, self.num_heads, self.head_dim)
358
+ value = self._split_heads(value, self.num_heads, self.head_dim)
359
+
360
+ kv_seq_len = key.shape[-2]
361
+
362
+ if rotary_pos_emb is not None:
363
+ # Apply the rotary position embedding to query and key (cos/sin from the LLaMA helper)
365
+ cos, sin = rotary_pos_emb
366
+ query, key = apply_rotary_pos_emb(query, key, cos, sin, position_ids, unsqueeze_dim=2)
367
+
368
+ if layer_past is not None:
369
+ past_key, past_value = layer_past[0], layer_past[1]
370
+ key = torch.cat((past_key, key), dim=1)
371
+ value = torch.cat((past_value, value), dim=1)
372
+
373
+ if use_cache:
374
+ present = (key, value)
375
+ else:
376
+ present = None
377
+
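+ # LogN attention scaling (implemented below): at inference time, queries at
+ # positions beyond the training context length seq_length are scaled by
+ # log_{seq_length}(position); positions within seq_length use a factor of 1.
+ # This helps keep attention stable when extrapolating to longer sequences.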
378
+ if self.use_logn_attn and not self.training:
379
+ if self.logn_tensor.device != query.device or self.logn_tensor.dtype != query.dtype:
380
+ self.logn_tensor = self.logn_tensor.to(query.device).type_as(query)
381
+ seq_start = key.size(1) - query.size(1)
382
+ seq_end = key.size(1)
383
+ logn_tensor = self.logn_tensor[:, seq_start:seq_end, :, :]
384
+ query = query * logn_tensor.expand_as(query)
385
+
386
+ if (
387
+ self.use_flash_attn
388
+ and flash_attn_unpadded_func is not None
389
+ and not self.is_fp32
390
+ and query.is_cuda
391
+ ):
392
+ q, k, v = query, key, value
393
+ context_layer = self.core_attention_flash(q, k, v)
394
+
395
+ # b s h d -> b s (h d)
396
+ context_layer = context_layer.flatten(2,3).contiguous()
397
+
398
+ else:
399
+ query = query.permute(0, 2, 1, 3)
400
+ key = key.permute(0, 2, 1, 3)
401
+ value = value.permute(0, 2, 1, 3)
402
+ if (
403
+ registered_causal_mask is None
404
+ and self.use_flash_attn
405
+ and flash_attn_unpadded_func is not None
406
+ and not self.is_fp32
407
+ and not query.is_cuda
408
+ ):
409
+ raise Exception(_ERROR_INPUT_CPU_QUERY_WITH_FLASH_ATTN_ACTIVATED)
410
+ attn_output, attn_weight = self._attn(
411
+ query, key, value, registered_causal_mask, attention_mask, head_mask
412
+ )
413
+ context_layer = self._merge_heads(
414
+ attn_output, self.num_heads, self.head_dim
415
+ )
416
+
417
+ attn_output = self.c_proj(context_layer)
418
+
419
+ outputs = (attn_output, present)
420
+ if output_attentions:
421
+ if (
422
+ self.use_flash_attn
423
+ and flash_attn_unpadded_func is not None
424
+ and not self.is_fp32
425
+ ):
426
+ raise ValueError("Cannot output attentions while using flash-attn")
427
+ else:
428
+ outputs += (attn_weight,)
429
+
430
+ return outputs
431
+
432
+
433
+ class QWenMLP(nn.Module):
434
+ def __init__(self, config):
435
+ super().__init__()
436
+ self.w1 = nn.Linear(
437
+ config.hidden_size, config.intermediate_size // 2, bias=not config.no_bias
438
+ )
439
+ self.w2 = nn.Linear(
440
+ config.hidden_size, config.intermediate_size // 2, bias=not config.no_bias
441
+ )
442
+ ff_dim_in = config.intermediate_size // 2
443
+ self.c_proj = nn.Linear(ff_dim_in, config.hidden_size, bias=not config.no_bias)
444
+
445
+ def forward(self, hidden_states):
446
+ a1 = self.w1(hidden_states)
447
+ a2 = self.w2(hidden_states)
448
+ intermediate_parallel = a1 * F.silu(a2)
449
+ output = self.c_proj(intermediate_parallel)
450
+ return output
451
+
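+ # QWenMLP above is a SwiGLU-style feed-forward block: w1 is the value path and
+ # w2 the SiLU gate, i.e. out = c_proj(w1(x) * silu(w2(x))). Shape sketch
+ # (illustrative): x (b, s, hidden) -> w1(x), w2(x) (b, s, intermediate // 2)
+ # -> c_proj (b, s, hidden).
+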
452
+ class QWenBlock(nn.Module):
453
+ def __init__(self, config):
454
+ super().__init__()
455
+ hidden_size = config.hidden_size
456
+ self.bf16 = config.bf16
457
+
458
+ self.ln_1 = RMSNorm(
459
+ hidden_size,
460
+ eps=config.layer_norm_epsilon,
461
+ )
462
+ self.attn = QWenAttention(config)
463
+ self.ln_2 = RMSNorm(
464
+ hidden_size,
465
+ eps=config.layer_norm_epsilon,
466
+ )
467
+
468
+ self.mlp = QWenMLP(config)
469
+
470
+ def forward(
471
+ self,
472
+ hidden_states: Optional[Tuple[torch.FloatTensor]],
473
+ rotary_pos_emb: Optional[List[torch.Tensor]] = None,
474
+ registered_causal_mask: Optional[torch.Tensor] = None,
475
+ layer_past: Optional[Tuple[torch.Tensor]] = None,
476
+ attention_mask: Optional[torch.FloatTensor] = None,
477
+ head_mask: Optional[torch.FloatTensor] = None,
478
+ position_ids: Optional[torch.LongTensor] = None,
479
+ encoder_hidden_states: Optional[torch.Tensor] = None,
480
+ encoder_attention_mask: Optional[torch.FloatTensor] = None,
481
+ use_cache: Optional[bool] = False,
482
+ output_attentions: Optional[bool] = False,
483
+ ):
484
+ layernorm_output = self.ln_1(hidden_states)
485
+
486
+ attn_outputs = self.attn(
487
+ layernorm_output,
488
+ rotary_pos_emb,
489
+ registered_causal_mask=registered_causal_mask,
490
+ layer_past=layer_past,
491
+ attention_mask=attention_mask,
492
+ head_mask=head_mask,
493
+ position_ids=position_ids,
494
+ use_cache=use_cache,
495
+ output_attentions=output_attentions,
496
+ )
497
+ attn_output = attn_outputs[0]
498
+
499
+ outputs = attn_outputs[1:]
500
+
501
+ residual = hidden_states
502
+ layernorm_input = attn_output + residual
503
+
504
+ layernorm_output = self.ln_2(layernorm_input)
505
+
506
+ residual = layernorm_input
507
+ mlp_output = self.mlp(layernorm_output)
508
+ hidden_states = residual + mlp_output
509
+
510
+ if use_cache:
511
+ outputs = (hidden_states,) + outputs
512
+ else:
513
+ outputs = (hidden_states,) + outputs[1:]
514
+
515
+ return outputs
516
+
517
+
518
+ class QWenPreTrainedModel(PreTrainedModel):
519
+ config_class = QWenConfig
520
+ base_model_prefix = "transformer"
521
+ is_parallelizable = False
522
+ supports_gradient_checkpointing = True
523
+ _no_split_modules = ["QWenBlock"]
524
+
525
+ def __init__(self, *inputs, **kwargs):
526
+ super().__init__(*inputs, **kwargs)
527
+
528
+ def _init_weights(self, module):
529
+ """Initialize the weights."""
530
+ if isinstance(module, nn.Linear):
531
+ module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
532
+ if module.bias is not None:
533
+ module.bias.data.zero_()
534
+ elif isinstance(module, nn.Embedding):
535
+ module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
536
+ if module.padding_idx is not None:
537
+ module.weight.data[module.padding_idx].zero_()
538
+ elif isinstance(module, RMSNorm):
539
+ module.weight.data.fill_(1.0)
540
+
541
+ for name, p in module.named_parameters():
542
+ if name == "c_proj.weight":
543
+ p.data.normal_(
544
+ mean=0.0,
545
+ std=(
546
+ self.config.initializer_range
547
+ / math.sqrt(2 * self.config.num_hidden_layers)
548
+ ),
549
+ )
550
+
551
+ # def _set_gradient_checkpointing(self, module, value=False):
552
+ # if isinstance(module, QWenModel):
553
+ # module.gradient_checkpointing = value
554
+
555
+ def _set_gradient_checkpointing(self, enable: bool = False, gradient_checkpointing_func: Optional[Callable] = None):
556
+ is_gradient_checkpointing_set = False
557
+
558
+ if isinstance(self, QWenModel):
559
+ self.gradient_checkpointing = enable
560
+ self._gradient_checkpointing_func = gradient_checkpointing_func
561
+ is_gradient_checkpointing_set = True
562
+
563
+ for module in self.modules():
564
+ if isinstance(module, QWenModel):
565
+ module.gradient_checkpointing = enable
566
+ module._gradient_checkpointing_func = gradient_checkpointing_func
567
+ is_gradient_checkpointing_set = True
568
+
569
+ if not is_gradient_checkpointing_set:
570
+ raise ValueError(f"{self.__class__.__name__} is not compatible with gradient checkpointing. Make sure the architecture supports it by setting a boolean 'gradient_checkpointing' attribute on the modules of the model that use checkpointing.")
571
+
572
+
573
+ class QWenModel(QWenPreTrainedModel):
574
+ _keys_to_ignore_on_load_missing = ["attn.masked_bias"]
575
+
576
+ def __init__(self, config):
577
+ super().__init__(config)
578
+ self.vocab_size = config.vocab_size
579
+ self.num_hidden_layers = config.num_hidden_layers
580
+ self.embed_dim = config.hidden_size
581
+
582
+ self.gradient_checkpointing = False
583
+ self.use_dynamic_ntk = config.use_dynamic_ntk
584
+ self.seq_length = config.seq_length
585
+
586
+ self.wte = nn.Embedding(self.vocab_size, self.embed_dim)
587
+
588
+ self.drop = nn.Dropout(config.emb_dropout_prob)
589
+
590
+ if config.rotary_pct == 1.0:
591
+ self.rotary_ndims = None
592
+ else:
593
+ assert config.rotary_pct < 1
594
+ self.rotary_ndims = int(
595
+ config.kv_channels * config.rotary_pct
596
+ )
597
+ dim = (
598
+ self.rotary_ndims
599
+ if self.rotary_ndims is not None
600
+ else config.kv_channels
601
+ )
602
+ self.rotary_emb = LlamaRotaryEmbedding(dim, base=config.rotary_emb_base)
603
+
604
+ self.use_flash_attn = config.use_flash_attn
605
+ self.is_fp32 = not (config.bf16 or config.fp16)
606
+ if (
607
+ self.use_flash_attn
608
+ and flash_attn_unpadded_func is not None
609
+ and not self.is_fp32
610
+ ):
611
+ self.registered_causal_mask = None
612
+ else:
613
+ max_positions = config.max_position_embeddings
614
+ self.register_buffer(
615
+ "registered_causal_mask",
616
+ torch.tril(
617
+ torch.ones((max_positions, max_positions), dtype=torch.bool)
618
+ ).view(1, 1, max_positions, max_positions),
619
+ persistent=False,
620
+ )
621
+
622
+ self.h = nn.ModuleList(
623
+ [
624
+ QWenBlock(
625
+ config
626
+ )
627
+ for i in range(config.num_hidden_layers)
628
+ ]
629
+ )
630
+ self.ln_f = RMSNorm(
631
+ self.embed_dim,
632
+ eps=config.layer_norm_epsilon,
633
+ )
634
+
635
+ self.post_init()
636
+
637
+ def get_input_embeddings(self):
638
+ return self.wte
639
+
640
+ def set_input_embeddings(self, new_embeddings):
641
+ self.wte = new_embeddings
642
+
643
+ def forward(
644
+ self,
645
+ input_ids: Optional[torch.LongTensor] = None,
646
+ past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
647
+ attention_mask: Optional[torch.FloatTensor] = None,
648
+ token_type_ids: Optional[torch.LongTensor] = None,
649
+ position_ids: Optional[torch.LongTensor] = None,
650
+ head_mask: Optional[torch.FloatTensor] = None,
651
+ inputs_embeds: Optional[torch.FloatTensor] = None,
652
+ encoder_hidden_states: Optional[torch.Tensor] = None,
653
+ encoder_attention_mask: Optional[torch.FloatTensor] = None,
654
+ use_cache: Optional[bool] = None,
655
+ output_attentions: Optional[bool] = None,
656
+ output_hidden_states: Optional[bool] = None,
657
+ return_dict: Optional[bool] = None,
658
+ ):
659
+ output_attentions = (
660
+ output_attentions
661
+ if output_attentions is not None
662
+ else self.config.output_attentions
663
+ )
664
+ output_hidden_states = (
665
+ output_hidden_states
666
+ if output_hidden_states is not None
667
+ else self.config.output_hidden_states
668
+ )
669
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
670
+ return_dict = (
671
+ return_dict if return_dict is not None else self.config.use_return_dict
672
+ )
673
+
674
+ if input_ids is not None and inputs_embeds is not None:
675
+ raise ValueError(
676
+ "You cannot specify both input_ids and inputs_embeds at the same time"
677
+ )
678
+ elif input_ids is not None:
679
+ input_shape = input_ids.size()
680
+ input_ids = input_ids.view(-1, input_shape[-1])
681
+ batch_size = input_ids.shape[0]
682
+ elif inputs_embeds is not None:
683
+ input_shape = inputs_embeds.size()[:-1]
684
+ batch_size = inputs_embeds.shape[0]
685
+ else:
686
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
687
+
688
+
689
+
690
+ if token_type_ids is not None:
691
+ token_type_ids = token_type_ids.view(-1, input_shape[-1])
692
+
693
+ if past_key_values is None:
694
+ past_length = 0
695
+ past_key_values = tuple([None] * len(self.h))
696
+ else:
697
+ past_length = past_key_values[0][0].size(-3)
698
+
699
+ if position_ids is None:
700
+ device = input_ids.device if input_ids is not None else inputs_embeds.device
701
+ position_ids = torch.arange(
702
+ past_length,
703
+ input_shape[-1] + past_length,
704
+ dtype=torch.long,
705
+ device=device,
706
+ )
707
+ position_ids = position_ids.unsqueeze(0)
708
+
709
+ if attention_mask is not None:
710
+ if batch_size <= 0:
711
+ raise ValueError("batch_size has to be defined and > 0")
712
+ attention_mask = attention_mask.view(batch_size, -1)
713
+ attention_mask = attention_mask[:, None, None, :]
714
+ attention_mask = attention_mask.to(dtype=self.dtype)
715
+ attention_mask = (1.0 - attention_mask) * torch.finfo(self.dtype).min
716
+
717
+ encoder_attention_mask = None
718
+ head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
719
+
720
+ if inputs_embeds is None:
721
+ inputs_embeds = self.wte(input_ids)
722
+ hidden_states = inputs_embeds
723
+
724
+ kv_seq_len = hidden_states.size()[1]
725
+ if past_key_values[0] is not None:
726
+ # past key values[0][0] shape: bs * seq_len * head_num * dim
727
+ kv_seq_len += past_key_values[0][0].shape[1]
728
+ # if (
729
+ # self.use_dynamic_ntk
730
+ # and kv_seq_len == hidden_states.size()[1]
731
+ # and not self.training
732
+ # ):
733
+ # context_value = math.log(kv_seq_len / self.seq_length, 2) + 1
734
+ # ntk_alpha = 2 ** math.ceil(context_value) - 1
735
+ # ntk_alpha = max(ntk_alpha, 1)
736
+ # else:
737
+ # ntk_alpha = self.rotary_emb._ntk_alpha_cached
738
+
739
+ rotary_pos_emb = self.rotary_emb(hidden_states, kv_seq_len)
740
+
741
+ hidden_states = self.drop(hidden_states)
742
+ output_shape = input_shape + (hidden_states.size(-1),)
743
+
744
+ if self.gradient_checkpointing and self.training:
745
+ if use_cache:
746
+ logger.warning_once(
747
+ "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
748
+ )
749
+ use_cache = False
750
+
751
+ presents = () if use_cache else None
752
+ all_self_attentions = () if output_attentions else None
753
+ all_hidden_states = () if output_hidden_states else None
754
+ for i, (block, layer_past) in enumerate(zip(self.h, past_key_values)):
755
+
756
+ if output_hidden_states:
757
+ all_hidden_states = all_hidden_states + (hidden_states,)
758
+
759
+ if self.gradient_checkpointing and self.training:
760
+
761
+ def create_custom_forward(module):
762
+ def custom_forward(*inputs):
763
+ # None for past_key_value
764
+ return module(*inputs, use_cache, output_attentions)
765
+
766
+ return custom_forward
767
+
768
+ outputs = torch.utils.checkpoint.checkpoint(
769
+ create_custom_forward(block),
770
+ hidden_states,
771
+ rotary_pos_emb,
772
+ self.registered_causal_mask,
773
+ None,
774
+ attention_mask,
775
+ head_mask[i],
776
+ position_ids,
777
+ encoder_hidden_states,
778
+ encoder_attention_mask,
779
+ )
780
+ else:
781
+ outputs = block(
782
+ hidden_states,
783
+ rotary_pos_emb=rotary_pos_emb,
784
+ registered_causal_mask=self.registered_causal_mask,
785
+ layer_past=layer_past,
786
+ attention_mask=attention_mask,
787
+ head_mask=head_mask[i],
788
+ position_ids=position_ids,
789
+ encoder_hidden_states=encoder_hidden_states,
790
+ encoder_attention_mask=encoder_attention_mask,
791
+ use_cache=use_cache,
792
+ output_attentions=output_attentions,
793
+ )
794
+
795
+ hidden_states = outputs[0]
796
+ if use_cache is True:
797
+ presents = presents + (outputs[1],)
798
+
799
+ if output_attentions:
800
+ all_self_attentions = all_self_attentions + (outputs[2 if use_cache else 1],)
801
+
802
+ hidden_states = self.ln_f(hidden_states)
803
+ hidden_states = hidden_states.view(output_shape)
804
+ # Add last hidden state
805
+ if output_hidden_states:
806
+ all_hidden_states = all_hidden_states + (hidden_states,)
807
+
808
+ if not return_dict:
809
+ return tuple(
810
+ v for v in [hidden_states, presents, all_hidden_states] if v is not None
811
+ )
812
+
813
+ return BaseModelOutputWithPast(
814
+ last_hidden_state=hidden_states,
815
+ past_key_values=presents,
816
+ hidden_states=all_hidden_states,
817
+ attentions=all_self_attentions,
818
+ )
819
+
820
+
821
+ class QWenLMHeadModel(QWenPreTrainedModel):
822
+ _keys_to_ignore_on_load_missing = [r"h\.\d+\.attn\.rotary_emb\.inv_freq"]
823
+ _keys_to_ignore_on_load_unexpected = [r"h\.\d+\.attn\.masked_bias"]
824
+
825
+ def __init__(self, config):
826
+ super().__init__(config)
827
+ assert (
828
+ config.bf16 + config.fp16 + config.fp32 <= 1
829
+ ), "Only one of \"bf16\", \"fp16\", \"fp32\" can be true"
830
+
831
+ autoset_precision = (config.bf16 + config.fp16 + config.fp32 + (config.torch_dtype == torch.bfloat16) + (config.torch_dtype == torch.float16) + (config.torch_dtype == torch.float32)) == 0
832
+
833
+ if autoset_precision:
834
+ if SUPPORT_BF16:
835
+ logger.warn(
836
+ "The model is being automatically converted to bf16 for faster inference. "
837
+ "To disable automatic precision selection, explicitly pass bf16/fp16/fp32=True to \"AutoModelForCausalLM.from_pretrained\"."
838
+ )
839
+ config.bf16 = True
840
+ elif SUPPORT_FP16:
841
+ logger.warn(
842
+ "The model is being automatically converted to fp16 for faster inference. "
843
+ "To disable automatic precision selection, explicitly pass bf16/fp16/fp32=True to \"AutoModelForCausalLM.from_pretrained\"."
844
+ )
845
+ config.fp16 = True
846
+ else:
847
+ config.fp32 = True
848
+
849
+ if config.bf16 and SUPPORT_CUDA and not SUPPORT_BF16:
850
+ logger.warn("Your device does NOT seem to support bf16; you can switch to fp16 or fp32 by passing fp16/fp32=True in \"AutoModelForCausalLM.from_pretrained\".")
851
+ if config.fp16 and SUPPORT_CUDA and not SUPPORT_FP16:
852
+ logger.warn("Your device does NOT support faster inference with fp16; please switch to fp32, which is likely to be faster.")
853
+ if config.fp32:
854
+ if SUPPORT_BF16:
855
+ logger.warn("Your device supports faster inference; pass bf16=True in \"AutoModelForCausalLM.from_pretrained\".")
856
+ elif SUPPORT_FP16:
857
+ logger.warn("Your device supports faster inference; pass fp16=True in \"AutoModelForCausalLM.from_pretrained\".")
858
+
859
+ if config.use_flash_attn == "auto":
860
+ if config.bf16 or config.fp16:
861
+ logger.warn("Try importing flash-attention for faster inference...")
862
+ config.use_flash_attn = True
863
+ else:
864
+ config.use_flash_attn = False
865
+ if config.use_flash_attn and config.fp32:
866
+ logger.warn("Flash attention will be disabled because it does NOT support fp32.")
867
+
868
+ if config.use_flash_attn:
869
+ _import_flash_attn()
870
+
871
+ self.transformer = QWenModel(config)
872
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
873
+ if hasattr(config, "use_token_ids"):
874
+ # Keep only the logits corresponding to use_token_ids; set all others to negative infinity
875
+ self.use_token_ids = config.use_token_ids
876
+ self.logits_mask = torch.sparse_coo_tensor(torch.as_tensor([config.use_token_ids]), torch.ones(len(config.use_token_ids), dtype=torch.int32), (config.vocab_size,)).to_dense().bool()
877
+ # Be sure to use torch.as_tensor here: torch.Tensor(...).long() first creates a float tensor and then casts to long, which can overflow for large token ids
878
+ # logits_mask = torch.zeros(len(config.use_token_ids)).bool()
879
+ # logits_mask.scatter_(0,torch.as_tensor(config.use_token_ids),True)
880
+ # self.logits_mask = logits_mask
881
+
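+ # A worked example of the mask construction above (illustrative values):
+ #   use_token_ids = [1, 3], vocab_size = 5
+ #   torch.sparse_coo_tensor([[1, 3]], torch.ones(2, dtype=torch.int32), (5,))
+ #       .to_dense().bool()  ->  tensor([False, True, False, True, False])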
882
+
883
+ if config.bf16:
884
+ self.transformer.bfloat16()
885
+ self.lm_head.bfloat16()
886
+ if config.fp16:
887
+ self.transformer.half()
888
+ self.lm_head.half()
889
+ self.post_init()
890
+
891
+ def get_output_embeddings(self):
892
+ return self.lm_head
893
+
894
+ def set_output_embeddings(self, new_embeddings):
895
+ self.lm_head = new_embeddings
896
+
897
+ def prepare_inputs_for_generation(
898
+ self, input_ids, past_key_values=None, inputs_embeds=None, **kwargs
899
+ ):
900
+ token_type_ids = kwargs.get("token_type_ids", None)
901
+ if past_key_values:
902
+ input_ids = input_ids[:, -1].unsqueeze(-1)
903
+ if token_type_ids is not None:
904
+ token_type_ids = token_type_ids[:, -1].unsqueeze(-1)
905
+
906
+ attention_mask = kwargs.get("attention_mask", None)
907
+ position_ids = kwargs.get("position_ids", None)
908
+
909
+ if inputs_embeds is not None and past_key_values is None:
910
+ model_inputs = {"inputs_embeds": inputs_embeds}
911
+ else:
912
+ model_inputs = {"input_ids": input_ids}
913
+
914
+ model_inputs.update(
915
+ {
916
+ "past_key_values": past_key_values,
917
+ "use_cache": kwargs.get("use_cache"),
918
+ "position_ids": position_ids,
919
+ "attention_mask": attention_mask,
920
+ "token_type_ids": token_type_ids,
921
+ }
922
+ )
923
+ return model_inputs
924
+
925
+ def forward(
926
+ self,
927
+ input_ids: Optional[torch.LongTensor] = None,
928
+ past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
929
+ attention_mask: Optional[torch.FloatTensor] = None,
930
+ token_type_ids: Optional[torch.LongTensor] = None,
931
+ position_ids: Optional[torch.LongTensor] = None,
932
+ head_mask: Optional[torch.FloatTensor] = None,
933
+ inputs_embeds: Optional[torch.FloatTensor] = None,
934
+ encoder_hidden_states: Optional[torch.Tensor] = None,
935
+ encoder_attention_mask: Optional[torch.FloatTensor] = None,
936
+ labels: Optional[torch.LongTensor] = None,
937
+ use_cache: Optional[bool] = None,
938
+ output_attentions: Optional[bool] = None,
939
+ output_hidden_states: Optional[bool] = None,
940
+ return_dict: Optional[bool] = None,
941
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
942
+
943
+ return_dict = (
944
+ return_dict if return_dict is not None else self.config.use_return_dict
945
+ )
946
+
947
+ transformer_outputs = self.transformer(
948
+ input_ids,
949
+ past_key_values=past_key_values,
950
+ attention_mask=attention_mask,
951
+ token_type_ids=token_type_ids,
952
+ position_ids=position_ids,
953
+ head_mask=head_mask,
954
+ inputs_embeds=inputs_embeds,
955
+ encoder_hidden_states=encoder_hidden_states,
956
+ encoder_attention_mask=encoder_attention_mask,
957
+ use_cache=use_cache,
958
+ output_attentions=output_attentions,
959
+ output_hidden_states=output_hidden_states,
960
+ return_dict=return_dict,
961
+ )
962
+ hidden_states = transformer_outputs[0]
963
+
964
+ lm_logits = self.lm_head(hidden_states)
965
+ if hasattr(self, "logits_mask"):
966
+ # Keep only the logits corresponding to use_token_ids; set all others to negative infinity
967
+ mask_value = torch.finfo(lm_logits.dtype).min
968
+ lm_logits = torch.where(self.logits_mask.to(lm_logits.device), lm_logits, mask_value)
969
+
970
+ loss = None
971
+ if labels is not None:
972
+ labels = labels.to(lm_logits.device)
973
+ shift_logits = lm_logits[..., :-1, :].contiguous()
974
+ shift_labels = labels[..., 1:].contiguous()
975
+ loss_fct = CrossEntropyLoss()
976
+ loss = loss_fct(
977
+ shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)
978
+ )
979
+
980
+ if not return_dict:
981
+ output = (lm_logits,) + transformer_outputs[1:]
982
+ return ((loss,) + output) if loss is not None else output
983
+
984
+ return CausalLMOutputWithPast(
985
+ loss=loss,
986
+ logits=lm_logits,
987
+ past_key_values=transformer_outputs.past_key_values,
988
+ hidden_states=transformer_outputs.hidden_states,
989
+ attentions=transformer_outputs.attentions,
990
+ )
991
+
992
+ @staticmethod
993
+ def _reorder_cache(
994
+ past_key_values: Tuple[Tuple[torch.Tensor]], beam_idx: torch.Tensor
995
+ ) -> Tuple[Tuple[torch.Tensor]]:
996
+
997
+ return tuple(
998
+ tuple(
999
+ past_state.index_select(0, beam_idx.to(past_state.device))
1000
+ for past_state in layer_past
1001
+ )
1002
+ for layer_past in past_key_values
1003
+ )
1004
+
1005
+ def chat(
1006
+ self,
1007
+ tokenizer: PreTrainedTokenizer,
1008
+ query: str,
1009
+ history: Optional[HistoryType],
1010
+ system: str = "You are a helpful assistant.",
1011
+ append_history: bool = True,
1012
+ stream: Optional[bool] = _SENTINEL,
1013
+ stop_words_ids: Optional[List[List[int]]] = None,
1014
+ generation_config: Optional[GenerationConfig] = None,
1015
+ **kwargs,
1016
+ ) -> Tuple[str, HistoryType]:
1017
+ generation_config = generation_config if generation_config is not None else self.generation_config
1018
+
1019
+ assert stream is _SENTINEL, _ERROR_STREAM_IN_CHAT
1020
+ assert generation_config.chat_format == 'chatml', _ERROR_BAD_CHAT_FORMAT
1021
+ if history is None:
1022
+ history = []
1023
+ if stop_words_ids is None:
1024
+ stop_words_ids = []
1025
+
1026
+ max_window_size = kwargs.get('max_window_size', None)
1027
+ if max_window_size is None:
1028
+ max_window_size = generation_config.max_window_size
1029
+ raw_text, context_tokens = make_context(
1030
+ tokenizer,
1031
+ query,
1032
+ history=history,
1033
+ system=system,
1034
+ max_window_size=max_window_size,
1035
+ chat_format=generation_config.chat_format,
1036
+ )
1037
+
1038
+ stop_words_ids.extend(get_stop_words_ids(
1039
+ generation_config.chat_format, tokenizer
1040
+ ))
1041
+ input_ids = torch.tensor([context_tokens]).to(self.device)
1042
+ outputs = self.generate(
1043
+ input_ids,
1044
+ stop_words_ids=stop_words_ids,
1045
+ return_dict_in_generate=False,
1046
+ generation_config=generation_config,
1047
+ **kwargs,
1048
+ )
1049
+
1050
+ response = decode_tokens(
1051
+ outputs[0],
1052
+ tokenizer,
1053
+ raw_text_len=len(raw_text),
1054
+ context_length=len(context_tokens),
1055
+ chat_format=generation_config.chat_format,
1056
+ verbose=False,
1057
+ errors='replace'
1058
+ )
1059
+
1060
+ if append_history:
1061
+ history.append((query, response))
1062
+
1063
+ return response, history
1064
+
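+ # Minimal usage sketch for chat() (assumes a chat checkpoint such as
+ # "Qwen/Qwen-7B-Chat" loaded with trust_remote_code=True; names are illustrative):
+ #   from transformers import AutoTokenizer, AutoModelForCausalLM
+ #   tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
+ #   model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True).eval()
+ #   response, history = model.chat(tokenizer, "Hello!", history=None)
+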
1065
+ def chat_stream(
1066
+ self,
1067
+ tokenizer: PreTrainedTokenizer,
1068
+ query: str,
1069
+ history: Optional[HistoryType],
1070
+ system: str = "You are a helpful assistant.",
1071
+ stop_words_ids: Optional[List[List[int]]] = None,
1072
+ logits_processor: Optional[LogitsProcessorList] = None,
1073
+ generation_config: Optional[GenerationConfig] = None,
1074
+ **kwargs,
1075
+ ) -> Generator[str, Any, None]:
1076
+ generation_config = generation_config if generation_config is not None else self.generation_config
1077
+ assert generation_config.chat_format == 'chatml', _ERROR_BAD_CHAT_FORMAT
1078
+ if history is None:
1079
+ history = []
1080
+ if stop_words_ids is None:
1081
+ stop_words_ids = []
1082
+
1083
+ max_window_size = kwargs.get('max_window_size', None)
1084
+ if max_window_size is None:
1085
+ max_window_size = generation_config.max_window_size
1086
+ raw_text, context_tokens = make_context(
1087
+ tokenizer,
1088
+ query,
1089
+ history=history,
1090
+ system=system,
1091
+ max_window_size=max_window_size,
1092
+ chat_format=generation_config.chat_format,
1093
+ )
1094
+
1095
+ stop_words_ids.extend(get_stop_words_ids(
1096
+ generation_config.chat_format, tokenizer
1097
+ ))
1098
+ if stop_words_ids is not None:
1099
+ stop_words_logits_processor = StopWordsLogitsProcessor(
1100
+ stop_words_ids=stop_words_ids,
1101
+ eos_token_id=generation_config.eos_token_id,
1102
+ )
1103
+ if logits_processor is None:
1104
+ logits_processor = LogitsProcessorList([stop_words_logits_processor])
1105
+ else:
1106
+ logits_processor.append(stop_words_logits_processor)
1107
+ input_ids = torch.tensor([context_tokens]).to(self.device)
1108
+
1109
+ from transformers_stream_generator.main import NewGenerationMixin, StreamGenerationConfig
1110
+ self.__class__.generate_stream = NewGenerationMixin.generate
1111
+ self.__class__.sample_stream = NewGenerationMixin.sample_stream
1112
+ stream_config = StreamGenerationConfig(**generation_config.to_dict(), do_stream=True)
1113
+
1114
+ def stream_generator():
1115
+ outputs = []
1116
+ for token in self.generate_stream(
1117
+ input_ids,
1118
+ return_dict_in_generate=False,
1119
+ generation_config=stream_config,
1120
+ logits_processor=logits_processor,
1121
+ seed=-1,
1122
+ **kwargs):
1123
+ outputs.append(token.item())
1124
+ yield tokenizer.decode(outputs, skip_special_tokens=True, errors='ignore')
1125
+
1126
+ return stream_generator()
1127
+
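+ # Sketch for chat_stream() (requires the transformers_stream_generator package
+ # imported above): each yielded item is the full decoded response so far, so a
+ # caller typically reprints or diffs successive values:
+ #   for partial_response in model.chat_stream(tokenizer, "Hello!", history=[]):
+ #       print(partial_response)
+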
1128
+ def generate(
1129
+ self,
1130
+ inputs: Optional[torch.Tensor] = None,
1131
+ generation_config: Optional[GenerationConfig] = None,
1132
+ logits_processor: Optional[LogitsProcessorList] = None,
1133
+ stopping_criteria: Optional[StoppingCriteriaList] = None,
1134
+ prefix_allowed_tokens_fn: Optional[
1135
+ Callable[[int, torch.Tensor], List[int]]
1136
+ ] = None,
1137
+ synced_gpus: Optional[bool] = None,
1138
+ assistant_model: Optional["PreTrainedModel"] = None,
1139
+ streamer: Optional["BaseStreamer"] = None,
1140
+ **kwargs,
1141
+ ) -> Union[GenerateOutput, torch.LongTensor]:
1142
+ generation_config = generation_config if generation_config is not None else self.generation_config
1143
+
1144
+ # Process stop_words_ids.
1145
+ stop_words_ids = kwargs.pop("stop_words_ids", None)
1146
+ if stop_words_ids is None and generation_config is not None:
1147
+ stop_words_ids = getattr(generation_config, "stop_words_ids", None)
1150
+
1151
+ if stop_words_ids is not None:
1152
+ stop_words_logits_processor = StopWordsLogitsProcessor(
1153
+ stop_words_ids=stop_words_ids,
1154
+ eos_token_id=generation_config.eos_token_id,
1155
+ )
1156
+ if logits_processor is None:
1157
+ logits_processor = LogitsProcessorList([stop_words_logits_processor])
1158
+ else:
1159
+ logits_processor.append(stop_words_logits_processor)
1160
+
1161
+ return super().generate(
1162
+ inputs,
1163
+ generation_config=generation_config,
1164
+ logits_processor=logits_processor,
1165
+ stopping_criteria=stopping_criteria,
1166
+ prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
1167
+ synced_gpus=synced_gpus,
1168
+ assistant_model=assistant_model,
1169
+ streamer=streamer,
1170
+ **kwargs,
1171
+ )
1172
+
1173
+
1174
+ class RMSNorm(torch.nn.Module):
1175
+ def __init__(self, dim: int, eps: float = 1e-6):
1176
+ super().__init__()
1177
+ self.eps = eps
1178
+ self.weight = nn.Parameter(torch.ones(dim))
1179
+
1180
+ def _norm(self, x):
1181
+ return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
1182
+
1183
+ def forward(self, x):
1184
+ if rms_norm is not None and x.is_cuda:
1185
+ return rms_norm(x, self.weight, self.eps)
1186
+ else:
1187
+ output = self._norm(x.float()).type_as(x)
1188
+ return output * self.weight
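+
+ # RMSNorm above normalizes by the root-mean-square only, with no mean
+ # subtraction and no bias: y = x / sqrt(mean(x ** 2, dim=-1) + eps) * weight.
+ # When the fused flash-attn rms_norm kernel is available on CUDA it is used;
+ # both branches compute the same function.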
qwen_generation_utils.py ADDED
@@ -0,0 +1,416 @@
1
+ # Copyright (c) Alibaba Cloud.
2
+ #
3
+ # This source code is licensed under the license found in the
4
+ # LICENSE file in the root directory of this source tree.
5
+
6
+ """Generation support."""
7
+
8
+ from typing import Tuple, List, Union, Iterable
9
+
10
+ import numpy as np
11
+ import torch
12
+ import torch.nn.functional as F
13
+ from transformers import PreTrainedTokenizer
14
+ from transformers import logging
15
+ from transformers.generation import LogitsProcessor
16
+
17
+ logger = logging.get_logger(__name__)
18
+
19
+ # Types.
20
+ HistoryType = List[Tuple[str, str]]
21
+ TokensType = List[int]
22
+ BatchTokensType = List[List[int]]
23
+
24
+
25
+ def pad_batch(batch: BatchTokensType, pad_id: int, seq_length: int) -> BatchTokensType:
26
+ for tokens in batch:
27
+ context_length = len(tokens)
28
+ if context_length < seq_length:
29
+ tokens.extend([pad_id] * (seq_length - context_length))
30
+ return batch
31
+
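+ # Example (illustrative): pad_batch([[1, 2], [3]], pad_id=0, seq_length=4)
+ # pads in place and returns [[1, 2, 0, 0], [3, 0, 0, 0]].
+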
32
+
33
+ def get_ltor_masks_and_position_ids(
34
+ data,
35
+ eod_token,
36
+ reset_position_ids,
37
+ reset_attention_mask,
38
+ eod_mask_loss,
39
+ ):
40
+ """Build masks and position id for left to right model."""
41
+
42
+ # Extract batch size and sequence length.
43
+ micro_batch_size, seq_length = data.size()
44
+
45
+ # Attention mask (lower triangular).
46
+ if reset_attention_mask:
47
+ att_mask_batch = micro_batch_size
48
+ else:
49
+ att_mask_batch = 1
50
+ attention_mask = torch.tril(
51
+ torch.ones((att_mask_batch, seq_length, seq_length), device=data.device)
52
+ ).view(att_mask_batch, 1, seq_length, seq_length)
53
+
54
+ # Loss mask.
55
+ loss_mask = torch.ones(data.size(), dtype=torch.float, device=data.device)
56
+ if eod_mask_loss:
57
+ loss_mask[data == eod_token] = 0.0
58
+
59
+ # Position ids.
60
+ position_ids = torch.arange(seq_length, dtype=torch.long, device=data.device)
61
+ position_ids = position_ids.unsqueeze(0).expand_as(data)
62
+ # We need to clone as the ids will be modified based on the batch index.
63
+ if reset_position_ids:
64
+ position_ids = position_ids.clone()
65
+
66
+ if reset_position_ids or reset_attention_mask:
67
+ # Loop through the batches:
68
+ for b in range(micro_batch_size):
69
+
70
+ # Find indices where the EOD token is.
71
+ eod_index = position_ids[b, data[b] == eod_token]
72
+ # Detach indices from positions if we are going to modify positions.
73
+ if reset_position_ids:
74
+ eod_index = eod_index.clone()
75
+
76
+ # Loop through EOD indices:
77
+ prev_index = 0
78
+ for j in range(eod_index.size()[0]):
79
+ i = eod_index[j]
80
+ # Prevent tokens after the EOD from attending to tokens before it.
81
+ if reset_attention_mask:
82
+ attention_mask[b, 0, (i + 1) :, : (i + 1)] = 0
83
+ # Reset positions.
84
+ if reset_position_ids:
85
+ position_ids[b, (i + 1) :] -= i + 1 - prev_index
86
+ prev_index = i + 1
87
+
88
+ # Convert attention mask to binary:
89
+ attention_mask = attention_mask < 0.5
90
+
91
+ return attention_mask, loss_mask, position_ids
92
+
93
+
94
+ def get_batch(context_tokens: torch.LongTensor, eod_id: int):
95
+ """Generate batch from context tokens."""
96
+ # Keep the tokens contiguous on their current device.
97
+ tokens = context_tokens.contiguous().to(context_tokens.device)
98
+ # Get the attention mask and position ids.
99
+ attention_mask, _, position_ids = get_ltor_masks_and_position_ids(
100
+ tokens,
101
+ eod_id,
102
+ reset_position_ids=False,
103
+ reset_attention_mask=False,
104
+ eod_mask_loss=False,
105
+ )
106
+ return tokens, attention_mask, position_ids
107
+
108
+
109
+ def get_stop_words_ids(chat_format, tokenizer):
110
+ if chat_format == "raw":
111
+ stop_words_ids = [tokenizer.encode("Human:"), [tokenizer.eod_id]]
112
+ elif chat_format == "chatml":
113
+ stop_words_ids = [[tokenizer.im_end_id], [tokenizer.im_start_id]]
114
+ else:
115
+ raise NotImplementedError(f"Unknown chat format {chat_format!r}")
116
+ return stop_words_ids
117
+
118
+
119
+ def make_context(
120
+ tokenizer: PreTrainedTokenizer,
121
+ query: str,
122
+ history: List[Tuple[str, str]] = None,
123
+ system: str = "",
124
+ max_window_size: int = 6144,
125
+ chat_format: str = "chatml",
126
+ ):
127
+ if history is None:
128
+ history = []
129
+
130
+ if chat_format == "chatml":
131
+ im_start, im_end = "<|im_start|>", "<|im_end|>"
132
+ im_start_tokens = [tokenizer.im_start_id]
133
+ im_end_tokens = [tokenizer.im_end_id]
134
+ nl_tokens = tokenizer.encode("\n")
135
+
136
+ def _tokenize_str(role, content):
137
+ return f"{role}\n{content}", tokenizer.encode(
138
+ role, allowed_special=set()
139
+ ) + nl_tokens + tokenizer.encode(content, allowed_special=set())
140
+
141
+ system_text, system_tokens_part = _tokenize_str("system", system)
142
+ system_tokens = im_start_tokens + system_tokens_part + im_end_tokens
143
+
144
+ raw_text = ""
145
+ context_tokens = []
146
+
147
+ for turn_query, turn_response in reversed(history):
148
+ query_text, query_tokens_part = _tokenize_str("user", turn_query)
149
+ query_tokens = im_start_tokens + query_tokens_part + im_end_tokens
150
+ response_text, response_tokens_part = _tokenize_str(
151
+ "assistant", turn_response
152
+ )
153
+ response_tokens = im_start_tokens + response_tokens_part + im_end_tokens
154
+
155
+ next_context_tokens = nl_tokens + query_tokens + nl_tokens + response_tokens
156
+ prev_chat = (
157
+ f"\n{im_start}{query_text}{im_end}\n{im_start}{response_text}{im_end}"
158
+ )
159
+
160
+ current_context_size = (
161
+ len(system_tokens) + len(next_context_tokens) + len(context_tokens)
162
+ )
163
+ if current_context_size < max_window_size:
164
+ context_tokens = next_context_tokens + context_tokens
165
+ raw_text = prev_chat + raw_text
166
+ else:
167
+ break
168
+
169
+ context_tokens = system_tokens + context_tokens
170
+ raw_text = f"{im_start}{system_text}{im_end}" + raw_text
171
+ context_tokens += (
172
+ nl_tokens
173
+ + im_start_tokens
174
+ + _tokenize_str("user", query)[1]
175
+ + im_end_tokens
176
+ + nl_tokens
177
+ + im_start_tokens
178
+ + tokenizer.encode("assistant")
179
+ + nl_tokens
180
+ )
181
+ raw_text += f"\n{im_start}user\n{query}{im_end}\n{im_start}assistant\n"
182
+
183
+ elif chat_format == "raw":
184
+ raw_text = query
185
+ context_tokens = tokenizer.encode(raw_text)
186
+ else:
187
+ raise NotImplementedError(f"Unknown chat format {chat_format!r}")
188
+
189
+ return raw_text, context_tokens
190
+
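+ # For chat_format == "chatml", make_context() produces the standard ChatML
+ # layout; e.g. with an empty history and system="You are a helpful assistant."
+ # (as chat() passes), raw_text is (sketch):
+ #   <|im_start|>system
+ #   You are a helpful assistant.<|im_end|>
+ #   <|im_start|>user
+ #   {query}<|im_end|>
+ #   <|im_start|>assistant
+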
191
+
192
+ def _decode_default(
193
+ tokens: List[int],
194
+ *,
195
+ stop_words: List[str],
196
+ eod_words: List[str],
197
+ tokenizer: PreTrainedTokenizer,
198
+ raw_text_len: int,
199
+ verbose: bool = False,
200
+ return_end_reason: bool = False,
201
+ errors: str='replace',
202
+ ):
203
+ trim_decode_tokens = tokenizer.decode(tokens, errors=errors)[raw_text_len:]
204
+ if verbose:
205
+ print("\nRaw Generate: ", trim_decode_tokens)
206
+
207
+ end_reason = f"Gen length {len(tokens)}"
208
+ for stop_word in stop_words:
209
+ trim_decode_tokens = trim_decode_tokens.replace(stop_word, "").strip()
210
+ for eod_word in eod_words:
211
+ if eod_word in trim_decode_tokens:
212
+ end_reason = f"Gen {eod_word!r}"
213
+ trim_decode_tokens = trim_decode_tokens.split(eod_word)[0]
214
+ trim_decode_tokens = trim_decode_tokens.strip()
215
+ if verbose:
216
+ print("\nEnd Reason:", end_reason)
217
+ print("\nGenerate: ", trim_decode_tokens)
218
+
219
+ if return_end_reason:
220
+ return trim_decode_tokens, end_reason
221
+ else:
222
+ return trim_decode_tokens
223
+
224
+
225
+ def _decode_chatml(
226
+ tokens: List[int],
227
+ *,
228
+ stop_words: List[str],
229
+ eod_token_ids: List[int],
230
+ tokenizer: PreTrainedTokenizer,
231
+ raw_text_len: int,
232
+ context_length: int,
233
+ verbose: bool = False,
234
+ return_end_reason: bool = False,
235
+ errors: str='replace'
236
+ ):
237
+ end_reason = f"Gen length {len(tokens)}"
238
+ eod_token_idx = context_length
239
+ for eod_token_idx in range(context_length, len(tokens)):
240
+ if tokens[eod_token_idx] in eod_token_ids:
241
+ end_reason = f"Gen {tokenizer.decode([tokens[eod_token_idx]])!r}"
242
+ break
243
+
244
+ trim_decode_tokens = tokenizer.decode(tokens[:eod_token_idx], errors=errors)[raw_text_len:]
245
+ if verbose:
246
+ print("\nRaw Generate w/o EOD:", tokenizer.decode(tokens, errors=errors)[raw_text_len:])
247
+ print("\nRaw Generate:", trim_decode_tokens)
248
+ print("\nEnd Reason:", end_reason)
249
+ for stop_word in stop_words:
250
+ trim_decode_tokens = trim_decode_tokens.replace(stop_word, "").strip()
251
+ trim_decode_tokens = trim_decode_tokens.strip()
252
+ if verbose:
253
+ print("\nGenerate:", trim_decode_tokens)
254
+
255
+ if return_end_reason:
256
+ return trim_decode_tokens, end_reason
257
+ else:
258
+ return trim_decode_tokens
259
+
260
+
261
+ def decode_tokens(
262
+ tokens: Union[torch.LongTensor, TokensType],
263
+ tokenizer: PreTrainedTokenizer,
264
+ raw_text_len: int,
265
+ context_length: int,
266
+ chat_format: str,
267
+ verbose: bool = False,
268
+ return_end_reason: bool = False,
269
+ errors: str="replace",
270
+ ) -> str:
271
+ if torch.is_tensor(tokens):
272
+ tokens = tokens.cpu().numpy().tolist()
273
+
274
+ if chat_format == "chatml":
275
+ return _decode_chatml(
276
+ tokens,
277
+ stop_words=[],
278
+ eod_token_ids=[tokenizer.im_start_id, tokenizer.im_end_id],
279
+ tokenizer=tokenizer,
280
+ raw_text_len=raw_text_len,
281
+ context_length=context_length,
282
+ verbose=verbose,
283
+ return_end_reason=return_end_reason,
284
+ errors=errors,
285
+ )
286
+ elif chat_format == "raw":
287
+ return _decode_default(
288
+ tokens,
289
+ stop_words=["<|endoftext|>"],
290
+ eod_words=["<|endoftext|>"],
291
+ tokenizer=tokenizer,
292
+ raw_text_len=raw_text_len,
293
+ verbose=verbose,
294
+ return_end_reason=return_end_reason,
295
+ errors=errors,
296
+ )
297
+ else:
298
+ raise NotImplementedError(f"Unknown chat format {chat_format!r}")
299
+
300
+
301
+ class StopWordsLogitsProcessor(LogitsProcessor):
302
+ """
303
+ :class:`transformers.LogitsProcessor` that forces generation to stop once any of the specified stop-word sequences appears.
304
+
305
+ Args:
306
+ stop_words_ids (:obj:`List[List[int]]`):
307
+ List of token-id lists, one per stop word. To get the token ids of the
308
+ words that should stop generation, use :obj:`tokenizer(stop_word,
309
+ add_prefix_space=True).input_ids`.
310
+ eos_token_id (:obj:`int`):
311
+ The id of the `end-of-sequence` token.
312
+ """
313
+
314
+ def __init__(self, stop_words_ids: Iterable[Iterable[int]], eos_token_id: int):
315
+
316
+ if not isinstance(stop_words_ids, List) or len(stop_words_ids) == 0:
317
+ raise ValueError(
318
+ f"`stop_words_ids` has to be a non-empty list, but is {stop_words_ids}."
319
+ )
320
+ if any(not isinstance(bad_word_ids, list) for bad_word_ids in stop_words_ids):
321
+ raise ValueError(
322
+ f"`stop_words_ids` has to be a list of lists, but is {stop_words_ids}."
323
+ )
324
+ if any(
325
+ any(
326
+ (not isinstance(token_id, (int, np.integer)) or token_id < 0)
327
+ for token_id in stop_word_ids
328
+ )
329
+ for stop_word_ids in stop_words_ids
330
+ ):
331
+ raise ValueError(
332
+ f"Each list in `stop_words_ids` has to be a list of non-negative integers, but is {stop_words_ids}."
333
+ )
334
+
335
+ self.stop_words_ids = list(
336
+ filter(
337
+ lambda bad_token_seq: bad_token_seq != [eos_token_id], stop_words_ids
338
+ )
339
+ )
340
+ self.eos_token_id = eos_token_id
341
+ for stop_token_seq in self.stop_words_ids:
342
+ assert (
343
+ len(stop_token_seq) > 0
344
+ ), "Stop words token sequences {} cannot have an empty list".format(
345
+ stop_words_ids
346
+ )
347
+
348
+ def __call__(
349
+ self, input_ids: torch.LongTensor, scores: torch.FloatTensor
350
+ ) -> torch.FloatTensor:
351
+ stopped_samples = self._calc_stopped_samples(input_ids)
352
+ for i, should_stop in enumerate(stopped_samples):
353
+ if should_stop:
354
+ scores[i, self.eos_token_id] = float(2**15)
355
+ return scores
356
+
357
+ def _tokens_match(self, prev_tokens: torch.LongTensor, tokens: List[int]) -> bool:
358
+ if len(tokens) == 0:
359
+ # an empty stop-word sequence matches trivially
360
+ return True
361
+ elif len(tokens) > len(prev_tokens):
362
+ # if the stop-word tokens are longer than prev input_ids they can't match
363
+ return False
364
+ elif prev_tokens[-len(tokens) :].tolist() == tokens:
365
+ # if tokens match
366
+ return True
367
+ else:
368
+ return False
369
+
370
+ def _calc_stopped_samples(self, prev_input_ids: Iterable[int]) -> Iterable[int]:
371
+ stopped_samples = []
372
+ for prev_input_ids_slice in prev_input_ids:
373
+ match = False
374
+ for stop_token_seq in self.stop_words_ids:
375
+ if self._tokens_match(prev_input_ids_slice, stop_token_seq):
376
+ # a stop-word sequence matched; mark this sample as stopped
377
+ match = True
378
+ break
379
+ stopped_samples.append(match)
380
+
381
+ return stopped_samples
382
+
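+ # Usage sketch (token ids are hypothetical): force EOS once a stop sequence has
+ # been generated by attaching the processor to generate():
+ #   from transformers.generation import LogitsProcessorList
+ #   processor = LogitsProcessorList(
+ #       [StopWordsLogitsProcessor(stop_words_ids=[[151645]], eos_token_id=151643)]
+ #   )
+ #   model.generate(input_ids, logits_processor=processor)
+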
383
+
384
+ def top_k_logits(logits, top_k=0, top_p=0.0, filter_value=-float("Inf")):
385
+ """This function has been mostly taken from huggingface conversational
386
+ ai code at
387
+ https://medium.com/huggingface/how-to-build-a-state-of-the-art-
388
+ conversational-ai-with-transfer-learning-2d818ac26313"""
389
+
390
+ if top_k > 0:
391
+ # Remove all tokens with a probability less than the
392
+ # last token of the top-k
393
+ indices_to_remove = logits < torch.topk(logits, top_k)[0][..., -1, None]
394
+ logits[indices_to_remove] = filter_value
395
+
396
+ if top_p > 0.0:
397
+ # Sort logits in descending order to build the nucleus (top-p) mask
398
+ sorted_logits, sorted_indices = torch.sort(logits, descending=True, dim=-1)
399
+ cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
400
+
401
+ # Remove tokens with cumulative probability above the threshold
402
+ sorted_indices_to_remove = cumulative_probs > top_p
403
+ # Shift the indices to the right to keep also the first token
404
+ # above the threshold
405
+ sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
406
+ sorted_indices_to_remove[..., 0] = 0
407
+ for i in range(sorted_indices.size(0)):
408
+ indices_to_remove = sorted_indices[i][sorted_indices_to_remove[i]]
409
+ logits[i][indices_to_remove] = filter_value
410
+
411
+ return logits
412
+
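+ # Example (illustrative): keep the 50 most likely tokens and the smallest set
+ # whose cumulative probability exceeds 0.9; everything else is masked in place:
+ #   logits = top_k_logits(logits, top_k=50, top_p=0.9)
+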
413
+
414
+ def switch(val1, val2, boolean):
415
+ boolean = boolean.type_as(val1)
416
+ return (1 - boolean) * val1 + boolean * val2