pokaree committed

Commit c3b54a8
1 Parent(s): d349bb4

Upload Moondream
README.md ADDED
@@ -0,0 +1,199 @@
+ ---
+ library_name: transformers
+ tags: []
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
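The "How to Get Started with the Model" section above is still a placeholder. As a minimal sketch (not the author's documented usage): given the `auto_map` entries in `config.json` below, the checkpoint would be loaded through `AutoModelForCausalLM` with `trust_remote_code=True`. The repo id used here is a stand-in, and the sketch assumes the repository also ships the `moondream.py` module and tokenizer files, neither of which is shown in this commit excerpt.

```python
# Minimal loading sketch; "pokaree/moondream" is a hypothetical placeholder repo id,
# and tokenizer files plus moondream.py are assumed to be present in the repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "pokaree/moondream"

# trust_remote_code=True is needed because config.json maps AutoModelForCausalLM
# to the custom moondream.Moondream class instead of a built-in transformers model.
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
```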
config.json ADDED
@@ -0,0 +1,16 @@
+ {
+   "_name_or_path": "checkpoints/moondream-ft",
+   "architectures": [
+     "Moondream"
+   ],
+   "auto_map": {
+     "AutoConfig": "configuration_moondream.MoondreamConfig",
+     "AutoModelForCausalLM": "moondream.Moondream"
+   },
+   "model_type": "moondream1",
+   "text_config": {
+     "model_type": "phi"
+   },
+   "torch_dtype": "float32",
+   "transformers_version": "4.41.2"
+ }
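The `auto_map` block is what makes the custom classes resolvable from the Hub: `AutoConfig` is redirected to `configuration_moondream.MoondreamConfig` (included in this commit) and `AutoModelForCausalLM` to `moondream.Moondream` (not shown in this excerpt). Since `torch_dtype` is `float32` (4 bytes per parameter), the 7,467,679,168-byte total recorded in `model.safetensors.index.json` below works out to roughly 1.87B parameters. A small sanity-check sketch over the uploaded files, assuming a local clone of the repository:

```python
# Sanity-check sketch over the uploaded config files (assumes a local clone of the repo).
import json

with open("config.json") as f:
    config = json.load(f)
with open("model.safetensors.index.json") as f:
    index = json.load(f)

print(config["auto_map"])      # custom classes the Hub will import on load
print(config["torch_dtype"])   # "float32" -> 4 bytes per parameter

# All tensors are stored in float32 per the config, so bytes / 4 estimates the parameter count.
params = index["metadata"]["total_size"] / 4
print(f"~{params / 1e9:.2f}B parameters")  # ~1.87B for this checkpoint
```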
configuration_moondream.py ADDED
@@ -0,0 +1,98 @@
+ from transformers import PretrainedConfig
+
+
+ class PhiConfig(PretrainedConfig):
+     model_type = "phi"
+     keys_to_ignore_at_inference = ["past_key_values"]
+
+     def __init__(
+         self,
+         vocab_size=51200,
+         hidden_size=2048,
+         intermediate_size=8192,
+         num_hidden_layers=24,
+         num_attention_heads=32,
+         num_key_value_heads=None,
+         resid_pdrop=0.0,
+         embd_pdrop=0.0,
+         attention_dropout=0.0,
+         hidden_act="gelu_new",
+         max_position_embeddings=2048,
+         initializer_range=0.02,
+         layer_norm_eps=1e-5,
+         use_cache=True,
+         tie_word_embeddings=False,
+         rope_theta=10000.0,
+         rope_scaling=None,
+         partial_rotary_factor=0.5,
+         qk_layernorm=False,
+         bos_token_id=1,
+         eos_token_id=2,
+         **kwargs,
+     ):
+         self.vocab_size = vocab_size
+         self.hidden_size = hidden_size
+         self.intermediate_size = intermediate_size
+         self.num_hidden_layers = num_hidden_layers
+         self.num_attention_heads = num_attention_heads
+
+         if num_key_value_heads is None:
+             num_key_value_heads = num_attention_heads
+
+         self.num_key_value_heads = num_key_value_heads
+         self.resid_pdrop = resid_pdrop
+         self.embd_pdrop = embd_pdrop
+         self.attention_dropout = attention_dropout
+         self.hidden_act = hidden_act
+         self.max_position_embeddings = max_position_embeddings
+         self.initializer_range = initializer_range
+         self.layer_norm_eps = layer_norm_eps
+         self.use_cache = use_cache
+         self.rope_theta = rope_theta
+         self.rope_scaling = rope_scaling
+         self.partial_rotary_factor = partial_rotary_factor
+         self.qk_layernorm = qk_layernorm
+         self._rope_scaling_validation()
+
+         super().__init__(
+             bos_token_id=bos_token_id,
+             eos_token_id=eos_token_id,
+             tie_word_embeddings=tie_word_embeddings,
+             **kwargs,
+         )
+
+     # Copied from transformers.models.llama.configuration_llama.LlamaConfig._rope_scaling_validation
+     def _rope_scaling_validation(self):
+         """
+         Validate the `rope_scaling` configuration.
+         """
+         if self.rope_scaling is None:
+             return
+
+         if not isinstance(self.rope_scaling, dict) or len(self.rope_scaling) != 2:
+             raise ValueError(
+                 "`rope_scaling` must be a dictionary with two fields, `type` and `factor`, "
+                 f"got {self.rope_scaling}"
+             )
+         rope_scaling_type = self.rope_scaling.get("type", None)
+         rope_scaling_factor = self.rope_scaling.get("factor", None)
+         if rope_scaling_type is None or rope_scaling_type not in ["linear", "dynamic"]:
+             raise ValueError(
+                 f"`rope_scaling`'s type field must be one of ['linear', 'dynamic'], got {rope_scaling_type}"
+             )
+         if (
+             rope_scaling_factor is None
+             or not isinstance(rope_scaling_factor, float)
+             or rope_scaling_factor <= 1.0
+         ):
+             raise ValueError(
+                 f"`rope_scaling`'s factor field must be a float > 1, got {rope_scaling_factor}"
+             )
+
+
+ class MoondreamConfig(PretrainedConfig):
+     model_type = "moondream1"
+
+     def __init__(self, **kwargs):
+         self.text_config = PhiConfig(**kwargs.pop("text_config", {}))
+         super().__init__(**kwargs)
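A short usage sketch for the two config classes defined above (illustration only, run from a checkout of this repository): `MoondreamConfig` pops the nested `text_config` dict out of its kwargs and wraps it in a `PhiConfig`, mirroring the nested block in `config.json`, while `_rope_scaling_validation` rejects malformed `rope_scaling` values at construction time.

```python
# Usage sketch for the classes above (assumes configuration_moondream.py is importable).
from configuration_moondream import MoondreamConfig, PhiConfig

# The nested text_config dict becomes a PhiConfig, as in config.json.
config = MoondreamConfig(text_config={"hidden_size": 2048, "num_hidden_layers": 24})
assert isinstance(config.text_config, PhiConfig)

# rope_scaling is validated eagerly: the factor must be a float greater than 1.
try:
    PhiConfig(rope_scaling={"type": "linear", "factor": 0.5})
except ValueError as err:
    print(err)
```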
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "transformers_version": "4.41.2"
+ }
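generation_config.json only records the BOS/EOS token ids (1 and 2, matching the defaults in PhiConfig above) and notes that it was derived from the model config. A brief sketch of how these defaults can be inspected, assuming a local clone of the repository:

```python
# Sketch: generate() picks these defaults up automatically; they can also be read directly.
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained(".")  # reads ./generation_config.json
print(gen_config.bos_token_id, gen_config.eos_token_id)  # 1 2
```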
model-00001-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:583d627b0b464d0ea5a4fd8c661fe46402ace664600eae7b8aa05807d0afac32
+ size 4966787248
model-00002-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3804a1cb42cf1afe00b8584301cf7875e9e01639d556ae104bd7fe71be95551e
+ size 2500964592
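The two `.safetensors` entries above are Git LFS pointer files rather than the weights themselves: each records the sha256 and byte size of the shard stored in LFS. A small verification sketch, assuming the actual shard has been downloaded locally:

```python
# Verify a downloaded shard against its LFS pointer (values copied from the pointer above).
import hashlib
import os

EXPECTED_SHA256 = "583d627b0b464d0ea5a4fd8c661fe46402ace664600eae7b8aa05807d0afac32"
EXPECTED_SIZE = 4966787248  # bytes, model-00001-of-00002.safetensors

path = "model-00001-of-00002.safetensors"
assert os.path.getsize(path) == EXPECTED_SIZE

sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)
assert sha.hexdigest() == EXPECTED_SHA256
print("shard matches its LFS pointer")
```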
model.safetensors.index.json ADDED
@@ -0,0 +1,585 @@
1
+ {
2
+ "metadata": {
3
+ "total_size": 7467679168
4
+ },
5
+ "weight_map": {
6
+ "text_model.lm_head.linear.bias": "model-00002-of-00002.safetensors",
7
+ "text_model.lm_head.linear.weight": "model-00002-of-00002.safetensors",
8
+ "text_model.lm_head.ln.bias": "model-00002-of-00002.safetensors",
9
+ "text_model.lm_head.ln.weight": "model-00002-of-00002.safetensors",
10
+ "text_model.transformer.embd.wte.weight": "model-00001-of-00002.safetensors",
11
+ "text_model.transformer.h.0.ln.bias": "model-00001-of-00002.safetensors",
12
+ "text_model.transformer.h.0.ln.weight": "model-00001-of-00002.safetensors",
13
+ "text_model.transformer.h.0.mixer.Wqkv.bias": "model-00001-of-00002.safetensors",
14
+ "text_model.transformer.h.0.mixer.Wqkv.weight": "model-00001-of-00002.safetensors",
15
+ "text_model.transformer.h.0.mixer.out_proj.bias": "model-00001-of-00002.safetensors",
16
+ "text_model.transformer.h.0.mixer.out_proj.weight": "model-00001-of-00002.safetensors",
17
+ "text_model.transformer.h.0.mlp.fc1.bias": "model-00001-of-00002.safetensors",
18
+ "text_model.transformer.h.0.mlp.fc1.weight": "model-00001-of-00002.safetensors",
19
+ "text_model.transformer.h.0.mlp.fc2.bias": "model-00001-of-00002.safetensors",
20
+ "text_model.transformer.h.0.mlp.fc2.weight": "model-00001-of-00002.safetensors",
21
+ "text_model.transformer.h.1.ln.bias": "model-00001-of-00002.safetensors",
22
+ "text_model.transformer.h.1.ln.weight": "model-00001-of-00002.safetensors",
23
+ "text_model.transformer.h.1.mixer.Wqkv.bias": "model-00001-of-00002.safetensors",
24
+ "text_model.transformer.h.1.mixer.Wqkv.weight": "model-00001-of-00002.safetensors",
25
+ "text_model.transformer.h.1.mixer.out_proj.bias": "model-00001-of-00002.safetensors",
26
+ "text_model.transformer.h.1.mixer.out_proj.weight": "model-00001-of-00002.safetensors",
27
+ "text_model.transformer.h.1.mlp.fc1.bias": "model-00001-of-00002.safetensors",
28
+ "text_model.transformer.h.1.mlp.fc1.weight": "model-00001-of-00002.safetensors",
29
+ "text_model.transformer.h.1.mlp.fc2.bias": "model-00001-of-00002.safetensors",
30
+ "text_model.transformer.h.1.mlp.fc2.weight": "model-00001-of-00002.safetensors",
31
+ "text_model.transformer.h.10.ln.bias": "model-00001-of-00002.safetensors",
32
+ "text_model.transformer.h.10.ln.weight": "model-00001-of-00002.safetensors",
33
+ "text_model.transformer.h.10.mixer.Wqkv.bias": "model-00001-of-00002.safetensors",
34
+ "text_model.transformer.h.10.mixer.Wqkv.weight": "model-00001-of-00002.safetensors",
35
+ "text_model.transformer.h.10.mixer.out_proj.bias": "model-00001-of-00002.safetensors",
36
+ "text_model.transformer.h.10.mixer.out_proj.weight": "model-00001-of-00002.safetensors",
37
+ "text_model.transformer.h.10.mlp.fc1.bias": "model-00001-of-00002.safetensors",
38
+ "text_model.transformer.h.10.mlp.fc1.weight": "model-00001-of-00002.safetensors",
39
+ "text_model.transformer.h.10.mlp.fc2.bias": "model-00001-of-00002.safetensors",
40
+ "text_model.transformer.h.10.mlp.fc2.weight": "model-00001-of-00002.safetensors",
41
+ "text_model.transformer.h.11.ln.bias": "model-00001-of-00002.safetensors",
42
+ "text_model.transformer.h.11.ln.weight": "model-00001-of-00002.safetensors",
43
+ "text_model.transformer.h.11.mixer.Wqkv.bias": "model-00001-of-00002.safetensors",
44
+ "text_model.transformer.h.11.mixer.Wqkv.weight": "model-00001-of-00002.safetensors",
45
+ "text_model.transformer.h.11.mixer.out_proj.bias": "model-00001-of-00002.safetensors",
46
+ "text_model.transformer.h.11.mixer.out_proj.weight": "model-00001-of-00002.safetensors",
47
+ "text_model.transformer.h.11.mlp.fc1.bias": "model-00001-of-00002.safetensors",
48
+ "text_model.transformer.h.11.mlp.fc1.weight": "model-00001-of-00002.safetensors",
49
+ "text_model.transformer.h.11.mlp.fc2.bias": "model-00001-of-00002.safetensors",
50
+ "text_model.transformer.h.11.mlp.fc2.weight": "model-00001-of-00002.safetensors",
51
+ "text_model.transformer.h.12.ln.bias": "model-00001-of-00002.safetensors",
52
+ "text_model.transformer.h.12.ln.weight": "model-00001-of-00002.safetensors",
53
+ "text_model.transformer.h.12.mixer.Wqkv.bias": "model-00001-of-00002.safetensors",
54
+ "text_model.transformer.h.12.mixer.Wqkv.weight": "model-00001-of-00002.safetensors",
55
+ "text_model.transformer.h.12.mixer.out_proj.bias": "model-00001-of-00002.safetensors",
56
+ "text_model.transformer.h.12.mixer.out_proj.weight": "model-00001-of-00002.safetensors",
57
+ "text_model.transformer.h.12.mlp.fc1.bias": "model-00001-of-00002.safetensors",
58
+ "text_model.transformer.h.12.mlp.fc1.weight": "model-00001-of-00002.safetensors",
59
+ "text_model.transformer.h.12.mlp.fc2.bias": "model-00001-of-00002.safetensors",
60
+ "text_model.transformer.h.12.mlp.fc2.weight": "model-00001-of-00002.safetensors",
61
+ "text_model.transformer.h.13.ln.bias": "model-00002-of-00002.safetensors",
62
+ "text_model.transformer.h.13.ln.weight": "model-00002-of-00002.safetensors",
63
+ "text_model.transformer.h.13.mixer.Wqkv.bias": "model-00001-of-00002.safetensors",
64
+ "text_model.transformer.h.13.mixer.Wqkv.weight": "model-00001-of-00002.safetensors",
65
+ "text_model.transformer.h.13.mixer.out_proj.bias": "model-00001-of-00002.safetensors",
66
+ "text_model.transformer.h.13.mixer.out_proj.weight": "model-00001-of-00002.safetensors",
67
+ "text_model.transformer.h.13.mlp.fc1.bias": "model-00001-of-00002.safetensors",
68
+ "text_model.transformer.h.13.mlp.fc1.weight": "model-00001-of-00002.safetensors",
69
+ "text_model.transformer.h.13.mlp.fc2.bias": "model-00002-of-00002.safetensors",
70
+ "text_model.transformer.h.13.mlp.fc2.weight": "model-00002-of-00002.safetensors",
71
+ "text_model.transformer.h.14.ln.bias": "model-00002-of-00002.safetensors",
72
+ "text_model.transformer.h.14.ln.weight": "model-00002-of-00002.safetensors",
73
+ "text_model.transformer.h.14.mixer.Wqkv.bias": "model-00002-of-00002.safetensors",
74
+ "text_model.transformer.h.14.mixer.Wqkv.weight": "model-00002-of-00002.safetensors",
75
+ "text_model.transformer.h.14.mixer.out_proj.bias": "model-00002-of-00002.safetensors",
76
+ "text_model.transformer.h.14.mixer.out_proj.weight": "model-00002-of-00002.safetensors",
77
+ "text_model.transformer.h.14.mlp.fc1.bias": "model-00002-of-00002.safetensors",
78
+ "text_model.transformer.h.14.mlp.fc1.weight": "model-00002-of-00002.safetensors",
79
+ "text_model.transformer.h.14.mlp.fc2.bias": "model-00002-of-00002.safetensors",
80
+ "text_model.transformer.h.14.mlp.fc2.weight": "model-00002-of-00002.safetensors",
81
+ "text_model.transformer.h.15.ln.bias": "model-00002-of-00002.safetensors",
82
+ "text_model.transformer.h.15.ln.weight": "model-00002-of-00002.safetensors",
83
+ "text_model.transformer.h.15.mixer.Wqkv.bias": "model-00002-of-00002.safetensors",
84
+ "text_model.transformer.h.15.mixer.Wqkv.weight": "model-00002-of-00002.safetensors",
85
+ "text_model.transformer.h.15.mixer.out_proj.bias": "model-00002-of-00002.safetensors",
86
+ "text_model.transformer.h.15.mixer.out_proj.weight": "model-00002-of-00002.safetensors",
87
+ "text_model.transformer.h.15.mlp.fc1.bias": "model-00002-of-00002.safetensors",
88
+ "text_model.transformer.h.15.mlp.fc1.weight": "model-00002-of-00002.safetensors",
89
+ "text_model.transformer.h.15.mlp.fc2.bias": "model-00002-of-00002.safetensors",
90
+ "text_model.transformer.h.15.mlp.fc2.weight": "model-00002-of-00002.safetensors",
91
+ "text_model.transformer.h.16.ln.bias": "model-00002-of-00002.safetensors",
92
+ "text_model.transformer.h.16.ln.weight": "model-00002-of-00002.safetensors",
93
+ "text_model.transformer.h.16.mixer.Wqkv.bias": "model-00002-of-00002.safetensors",
94
+ "text_model.transformer.h.16.mixer.Wqkv.weight": "model-00002-of-00002.safetensors",
95
+ "text_model.transformer.h.16.mixer.out_proj.bias": "model-00002-of-00002.safetensors",
96
+ "text_model.transformer.h.16.mixer.out_proj.weight": "model-00002-of-00002.safetensors",
97
+ "text_model.transformer.h.16.mlp.fc1.bias": "model-00002-of-00002.safetensors",
98
+ "text_model.transformer.h.16.mlp.fc1.weight": "model-00002-of-00002.safetensors",
99
+ "text_model.transformer.h.16.mlp.fc2.bias": "model-00002-of-00002.safetensors",
100
+ "text_model.transformer.h.16.mlp.fc2.weight": "model-00002-of-00002.safetensors",
101
+ "text_model.transformer.h.17.ln.bias": "model-00002-of-00002.safetensors",
102
+ "text_model.transformer.h.17.ln.weight": "model-00002-of-00002.safetensors",
103
+ "text_model.transformer.h.17.mixer.Wqkv.bias": "model-00002-of-00002.safetensors",
104
+ "text_model.transformer.h.17.mixer.Wqkv.weight": "model-00002-of-00002.safetensors",
105
+ "text_model.transformer.h.17.mixer.out_proj.bias": "model-00002-of-00002.safetensors",
106
+ "text_model.transformer.h.17.mixer.out_proj.weight": "model-00002-of-00002.safetensors",
107
+ "text_model.transformer.h.17.mlp.fc1.bias": "model-00002-of-00002.safetensors",
108
+ "text_model.transformer.h.17.mlp.fc1.weight": "model-00002-of-00002.safetensors",
109
+ "text_model.transformer.h.17.mlp.fc2.bias": "model-00002-of-00002.safetensors",
110
+ "text_model.transformer.h.17.mlp.fc2.weight": "model-00002-of-00002.safetensors",
111
+ "text_model.transformer.h.18.ln.bias": "model-00002-of-00002.safetensors",
112
+ "text_model.transformer.h.18.ln.weight": "model-00002-of-00002.safetensors",
113
+ "text_model.transformer.h.18.mixer.Wqkv.bias": "model-00002-of-00002.safetensors",
114
+ "text_model.transformer.h.18.mixer.Wqkv.weight": "model-00002-of-00002.safetensors",
115
+ "text_model.transformer.h.18.mixer.out_proj.bias": "model-00002-of-00002.safetensors",
116
+ "text_model.transformer.h.18.mixer.out_proj.weight": "model-00002-of-00002.safetensors",
117
+ "text_model.transformer.h.18.mlp.fc1.bias": "model-00002-of-00002.safetensors",
118
+ "text_model.transformer.h.18.mlp.fc1.weight": "model-00002-of-00002.safetensors",
119
+ "text_model.transformer.h.18.mlp.fc2.bias": "model-00002-of-00002.safetensors",
120
+ "text_model.transformer.h.18.mlp.fc2.weight": "model-00002-of-00002.safetensors",
121
+ "text_model.transformer.h.19.ln.bias": "model-00002-of-00002.safetensors",
122
+ "text_model.transformer.h.19.ln.weight": "model-00002-of-00002.safetensors",
123
+ "text_model.transformer.h.19.mixer.Wqkv.bias": "model-00002-of-00002.safetensors",
124
+ "text_model.transformer.h.19.mixer.Wqkv.weight": "model-00002-of-00002.safetensors",
125
+ "text_model.transformer.h.19.mixer.out_proj.bias": "model-00002-of-00002.safetensors",
126
+ "text_model.transformer.h.19.mixer.out_proj.weight": "model-00002-of-00002.safetensors",
127
+ "text_model.transformer.h.19.mlp.fc1.bias": "model-00002-of-00002.safetensors",
128
+ "text_model.transformer.h.19.mlp.fc1.weight": "model-00002-of-00002.safetensors",
129
+ "text_model.transformer.h.19.mlp.fc2.bias": "model-00002-of-00002.safetensors",
130
+ "text_model.transformer.h.19.mlp.fc2.weight": "model-00002-of-00002.safetensors",
131
+ "text_model.transformer.h.2.ln.bias": "model-00001-of-00002.safetensors",
132
+ "text_model.transformer.h.2.ln.weight": "model-00001-of-00002.safetensors",
133
+ "text_model.transformer.h.2.mixer.Wqkv.bias": "model-00001-of-00002.safetensors",
134
+ "text_model.transformer.h.2.mixer.Wqkv.weight": "model-00001-of-00002.safetensors",
135
+ "text_model.transformer.h.2.mixer.out_proj.bias": "model-00001-of-00002.safetensors",
136
+ "text_model.transformer.h.2.mixer.out_proj.weight": "model-00001-of-00002.safetensors",
137
+ "text_model.transformer.h.2.mlp.fc1.bias": "model-00001-of-00002.safetensors",
138
+ "text_model.transformer.h.2.mlp.fc1.weight": "model-00001-of-00002.safetensors",
139
+ "text_model.transformer.h.2.mlp.fc2.bias": "model-00001-of-00002.safetensors",
140
+ "text_model.transformer.h.2.mlp.fc2.weight": "model-00001-of-00002.safetensors",
141
+ "text_model.transformer.h.20.ln.bias": "model-00002-of-00002.safetensors",
142
+ "text_model.transformer.h.20.ln.weight": "model-00002-of-00002.safetensors",
143
+ "text_model.transformer.h.20.mixer.Wqkv.bias": "model-00002-of-00002.safetensors",
144
+ "text_model.transformer.h.20.mixer.Wqkv.weight": "model-00002-of-00002.safetensors",
145
+ "text_model.transformer.h.20.mixer.out_proj.bias": "model-00002-of-00002.safetensors",
146
+ "text_model.transformer.h.20.mixer.out_proj.weight": "model-00002-of-00002.safetensors",
147
+ "text_model.transformer.h.20.mlp.fc1.bias": "model-00002-of-00002.safetensors",
148
+ "text_model.transformer.h.20.mlp.fc1.weight": "model-00002-of-00002.safetensors",
149
+ "text_model.transformer.h.20.mlp.fc2.bias": "model-00002-of-00002.safetensors",
150
+ "text_model.transformer.h.20.mlp.fc2.weight": "model-00002-of-00002.safetensors",
151
+ "text_model.transformer.h.21.ln.bias": "model-00002-of-00002.safetensors",
152
+ "text_model.transformer.h.21.ln.weight": "model-00002-of-00002.safetensors",
153
+ "text_model.transformer.h.21.mixer.Wqkv.bias": "model-00002-of-00002.safetensors",
154
+ "text_model.transformer.h.21.mixer.Wqkv.weight": "model-00002-of-00002.safetensors",
155
+ "text_model.transformer.h.21.mixer.out_proj.bias": "model-00002-of-00002.safetensors",
156
+ "text_model.transformer.h.21.mixer.out_proj.weight": "model-00002-of-00002.safetensors",
157
+ "text_model.transformer.h.21.mlp.fc1.bias": "model-00002-of-00002.safetensors",
158
+ "text_model.transformer.h.21.mlp.fc1.weight": "model-00002-of-00002.safetensors",
159
+ "text_model.transformer.h.21.mlp.fc2.bias": "model-00002-of-00002.safetensors",
160
+ "text_model.transformer.h.21.mlp.fc2.weight": "model-00002-of-00002.safetensors",
161
+ "text_model.transformer.h.22.ln.bias": "model-00002-of-00002.safetensors",
162
+ "text_model.transformer.h.22.ln.weight": "model-00002-of-00002.safetensors",
163
+ "text_model.transformer.h.22.mixer.Wqkv.bias": "model-00002-of-00002.safetensors",
164
+ "text_model.transformer.h.22.mixer.Wqkv.weight": "model-00002-of-00002.safetensors",
165
+ "text_model.transformer.h.22.mixer.out_proj.bias": "model-00002-of-00002.safetensors",
166
+ "text_model.transformer.h.22.mixer.out_proj.weight": "model-00002-of-00002.safetensors",
167
+ "text_model.transformer.h.22.mlp.fc1.bias": "model-00002-of-00002.safetensors",
168
+ "text_model.transformer.h.22.mlp.fc1.weight": "model-00002-of-00002.safetensors",
169
+ "text_model.transformer.h.22.mlp.fc2.bias": "model-00002-of-00002.safetensors",
170
+ "text_model.transformer.h.22.mlp.fc2.weight": "model-00002-of-00002.safetensors",
171
+ "text_model.transformer.h.23.ln.bias": "model-00002-of-00002.safetensors",
172
+ "text_model.transformer.h.23.ln.weight": "model-00002-of-00002.safetensors",
173
+ "text_model.transformer.h.23.mixer.Wqkv.bias": "model-00002-of-00002.safetensors",
174
+ "text_model.transformer.h.23.mixer.Wqkv.weight": "model-00002-of-00002.safetensors",
175
+ "text_model.transformer.h.23.mixer.out_proj.bias": "model-00002-of-00002.safetensors",
176
+ "text_model.transformer.h.23.mixer.out_proj.weight": "model-00002-of-00002.safetensors",
177
+ "text_model.transformer.h.23.mlp.fc1.bias": "model-00002-of-00002.safetensors",
178
+ "text_model.transformer.h.23.mlp.fc1.weight": "model-00002-of-00002.safetensors",
179
+ "text_model.transformer.h.23.mlp.fc2.bias": "model-00002-of-00002.safetensors",
180
+ "text_model.transformer.h.23.mlp.fc2.weight": "model-00002-of-00002.safetensors",
181
+ "text_model.transformer.h.3.ln.bias": "model-00001-of-00002.safetensors",
182
+ "text_model.transformer.h.3.ln.weight": "model-00001-of-00002.safetensors",
183
+ "text_model.transformer.h.3.mixer.Wqkv.bias": "model-00001-of-00002.safetensors",
184
+ "text_model.transformer.h.3.mixer.Wqkv.weight": "model-00001-of-00002.safetensors",
185
+ "text_model.transformer.h.3.mixer.out_proj.bias": "model-00001-of-00002.safetensors",
186
+ "text_model.transformer.h.3.mixer.out_proj.weight": "model-00001-of-00002.safetensors",
187
+ "text_model.transformer.h.3.mlp.fc1.bias": "model-00001-of-00002.safetensors",
188
+ "text_model.transformer.h.3.mlp.fc1.weight": "model-00001-of-00002.safetensors",
189
+ "text_model.transformer.h.3.mlp.fc2.bias": "model-00001-of-00002.safetensors",
190
+ "text_model.transformer.h.3.mlp.fc2.weight": "model-00001-of-00002.safetensors",
191
+ "text_model.transformer.h.4.ln.bias": "model-00001-of-00002.safetensors",
192
+ "text_model.transformer.h.4.ln.weight": "model-00001-of-00002.safetensors",
193
+ "text_model.transformer.h.4.mixer.Wqkv.bias": "model-00001-of-00002.safetensors",
194
+ "text_model.transformer.h.4.mixer.Wqkv.weight": "model-00001-of-00002.safetensors",
195
+ "text_model.transformer.h.4.mixer.out_proj.bias": "model-00001-of-00002.safetensors",
196
+ "text_model.transformer.h.4.mixer.out_proj.weight": "model-00001-of-00002.safetensors",
197
+ "text_model.transformer.h.4.mlp.fc1.bias": "model-00001-of-00002.safetensors",
198
+ "text_model.transformer.h.4.mlp.fc1.weight": "model-00001-of-00002.safetensors",
199
+ "text_model.transformer.h.4.mlp.fc2.bias": "model-00001-of-00002.safetensors",
200
+ "text_model.transformer.h.4.mlp.fc2.weight": "model-00001-of-00002.safetensors",
201
+ "text_model.transformer.h.5.ln.bias": "model-00001-of-00002.safetensors",
202
+ "text_model.transformer.h.5.ln.weight": "model-00001-of-00002.safetensors",
203
+ "text_model.transformer.h.5.mixer.Wqkv.bias": "model-00001-of-00002.safetensors",
204
+ "text_model.transformer.h.5.mixer.Wqkv.weight": "model-00001-of-00002.safetensors",
205
+ "text_model.transformer.h.5.mixer.out_proj.bias": "model-00001-of-00002.safetensors",
206
+ "text_model.transformer.h.5.mixer.out_proj.weight": "model-00001-of-00002.safetensors",
207
+ "text_model.transformer.h.5.mlp.fc1.bias": "model-00001-of-00002.safetensors",
208
+ "text_model.transformer.h.5.mlp.fc1.weight": "model-00001-of-00002.safetensors",
209
+ "text_model.transformer.h.5.mlp.fc2.bias": "model-00001-of-00002.safetensors",
210
+ "text_model.transformer.h.5.mlp.fc2.weight": "model-00001-of-00002.safetensors",
211
+ "text_model.transformer.h.6.ln.bias": "model-00001-of-00002.safetensors",
212
+ "text_model.transformer.h.6.ln.weight": "model-00001-of-00002.safetensors",
213
+ "text_model.transformer.h.6.mixer.Wqkv.bias": "model-00001-of-00002.safetensors",
214
+ "text_model.transformer.h.6.mixer.Wqkv.weight": "model-00001-of-00002.safetensors",
215
+ "text_model.transformer.h.6.mixer.out_proj.bias": "model-00001-of-00002.safetensors",
216
+ "text_model.transformer.h.6.mixer.out_proj.weight": "model-00001-of-00002.safetensors",
217
+ "text_model.transformer.h.6.mlp.fc1.bias": "model-00001-of-00002.safetensors",
218
+ "text_model.transformer.h.6.mlp.fc1.weight": "model-00001-of-00002.safetensors",
219
+ "text_model.transformer.h.6.mlp.fc2.bias": "model-00001-of-00002.safetensors",
220
+ "text_model.transformer.h.6.mlp.fc2.weight": "model-00001-of-00002.safetensors",
221
+ "text_model.transformer.h.7.ln.bias": "model-00001-of-00002.safetensors",
222
+ "text_model.transformer.h.7.ln.weight": "model-00001-of-00002.safetensors",
223
+ "text_model.transformer.h.7.mixer.Wqkv.bias": "model-00001-of-00002.safetensors",
224
+ "text_model.transformer.h.7.mixer.Wqkv.weight": "model-00001-of-00002.safetensors",
225
+ "text_model.transformer.h.7.mixer.out_proj.bias": "model-00001-of-00002.safetensors",
226
+ "text_model.transformer.h.7.mixer.out_proj.weight": "model-00001-of-00002.safetensors",
227
+ "text_model.transformer.h.7.mlp.fc1.bias": "model-00001-of-00002.safetensors",
228
+ "text_model.transformer.h.7.mlp.fc1.weight": "model-00001-of-00002.safetensors",
229
+ "text_model.transformer.h.7.mlp.fc2.bias": "model-00001-of-00002.safetensors",
230
+ "text_model.transformer.h.7.mlp.fc2.weight": "model-00001-of-00002.safetensors",
231
+ "text_model.transformer.h.8.ln.bias": "model-00001-of-00002.safetensors",
232
+ "text_model.transformer.h.8.ln.weight": "model-00001-of-00002.safetensors",
233
+ "text_model.transformer.h.8.mixer.Wqkv.bias": "model-00001-of-00002.safetensors",
234
+ "text_model.transformer.h.8.mixer.Wqkv.weight": "model-00001-of-00002.safetensors",
235
+ "text_model.transformer.h.8.mixer.out_proj.bias": "model-00001-of-00002.safetensors",
236
+ "text_model.transformer.h.8.mixer.out_proj.weight": "model-00001-of-00002.safetensors",
237
+ "text_model.transformer.h.8.mlp.fc1.bias": "model-00001-of-00002.safetensors",
238
+ "text_model.transformer.h.8.mlp.fc1.weight": "model-00001-of-00002.safetensors",
239
+ "text_model.transformer.h.8.mlp.fc2.bias": "model-00001-of-00002.safetensors",
240
+ "text_model.transformer.h.8.mlp.fc2.weight": "model-00001-of-00002.safetensors",
241
+ "text_model.transformer.h.9.ln.bias": "model-00001-of-00002.safetensors",
242
+ "text_model.transformer.h.9.ln.weight": "model-00001-of-00002.safetensors",
243
+ "text_model.transformer.h.9.mixer.Wqkv.bias": "model-00001-of-00002.safetensors",
244
+ "text_model.transformer.h.9.mixer.Wqkv.weight": "model-00001-of-00002.safetensors",
245
+ "text_model.transformer.h.9.mixer.out_proj.bias": "model-00001-of-00002.safetensors",
246
+ "text_model.transformer.h.9.mixer.out_proj.weight": "model-00001-of-00002.safetensors",
247
+ "text_model.transformer.h.9.mlp.fc1.bias": "model-00001-of-00002.safetensors",
248
+ "text_model.transformer.h.9.mlp.fc1.weight": "model-00001-of-00002.safetensors",
249
+ "text_model.transformer.h.9.mlp.fc2.bias": "model-00001-of-00002.safetensors",
250
+ "text_model.transformer.h.9.mlp.fc2.weight": "model-00001-of-00002.safetensors",
251
+ "vision_encoder.encoder.model.visual.blocks.0.attn.proj.bias": "model-00001-of-00002.safetensors",
252
+ "vision_encoder.encoder.model.visual.blocks.0.attn.proj.weight": "model-00001-of-00002.safetensors",
253
+ "vision_encoder.encoder.model.visual.blocks.0.attn.qkv.bias": "model-00001-of-00002.safetensors",
254
+ "vision_encoder.encoder.model.visual.blocks.0.attn.qkv.weight": "model-00001-of-00002.safetensors",
255
+ "vision_encoder.encoder.model.visual.blocks.0.mlp.fc1.bias": "model-00001-of-00002.safetensors",
256
+ "vision_encoder.encoder.model.visual.blocks.0.mlp.fc1.weight": "model-00001-of-00002.safetensors",
257
+ "vision_encoder.encoder.model.visual.blocks.0.mlp.fc2.bias": "model-00001-of-00002.safetensors",
258
+ "vision_encoder.encoder.model.visual.blocks.0.mlp.fc2.weight": "model-00001-of-00002.safetensors",
259
+ "vision_encoder.encoder.model.visual.blocks.0.norm1.bias": "model-00001-of-00002.safetensors",
260
+ "vision_encoder.encoder.model.visual.blocks.0.norm1.weight": "model-00001-of-00002.safetensors",
261
+ "vision_encoder.encoder.model.visual.blocks.0.norm2.bias": "model-00001-of-00002.safetensors",
262
+ "vision_encoder.encoder.model.visual.blocks.0.norm2.weight": "model-00001-of-00002.safetensors",
263
+ "vision_encoder.encoder.model.visual.blocks.1.attn.proj.bias": "model-00001-of-00002.safetensors",
264
+ "vision_encoder.encoder.model.visual.blocks.1.attn.proj.weight": "model-00001-of-00002.safetensors",
265
+ "vision_encoder.encoder.model.visual.blocks.1.attn.qkv.bias": "model-00001-of-00002.safetensors",
266
+ "vision_encoder.encoder.model.visual.blocks.1.attn.qkv.weight": "model-00001-of-00002.safetensors",
267
+ "vision_encoder.encoder.model.visual.blocks.1.mlp.fc1.bias": "model-00001-of-00002.safetensors",
268
+ "vision_encoder.encoder.model.visual.blocks.1.mlp.fc1.weight": "model-00001-of-00002.safetensors",
269
+ "vision_encoder.encoder.model.visual.blocks.1.mlp.fc2.bias": "model-00001-of-00002.safetensors",
270
+ "vision_encoder.encoder.model.visual.blocks.1.mlp.fc2.weight": "model-00001-of-00002.safetensors",
271
+ "vision_encoder.encoder.model.visual.blocks.1.norm1.bias": "model-00001-of-00002.safetensors",
272
+ "vision_encoder.encoder.model.visual.blocks.1.norm1.weight": "model-00001-of-00002.safetensors",
273
+ "vision_encoder.encoder.model.visual.blocks.1.norm2.bias": "model-00001-of-00002.safetensors",
274
+ "vision_encoder.encoder.model.visual.blocks.1.norm2.weight": "model-00001-of-00002.safetensors",
275
+ "vision_encoder.encoder.model.visual.blocks.10.attn.proj.bias": "model-00001-of-00002.safetensors",
276
+ "vision_encoder.encoder.model.visual.blocks.10.attn.proj.weight": "model-00001-of-00002.safetensors",
277
+ "vision_encoder.encoder.model.visual.blocks.10.attn.qkv.bias": "model-00001-of-00002.safetensors",
278
+ "vision_encoder.encoder.model.visual.blocks.10.attn.qkv.weight": "model-00001-of-00002.safetensors",
279
+ "vision_encoder.encoder.model.visual.blocks.10.mlp.fc1.bias": "model-00001-of-00002.safetensors",
280
+ "vision_encoder.encoder.model.visual.blocks.10.mlp.fc1.weight": "model-00001-of-00002.safetensors",
281
+ "vision_encoder.encoder.model.visual.blocks.10.mlp.fc2.bias": "model-00001-of-00002.safetensors",
282
+ "vision_encoder.encoder.model.visual.blocks.10.mlp.fc2.weight": "model-00001-of-00002.safetensors",
283
+ "vision_encoder.encoder.model.visual.blocks.10.norm1.bias": "model-00001-of-00002.safetensors",
284
+ "vision_encoder.encoder.model.visual.blocks.10.norm1.weight": "model-00001-of-00002.safetensors",
285
+ "vision_encoder.encoder.model.visual.blocks.10.norm2.bias": "model-00001-of-00002.safetensors",
286
+ "vision_encoder.encoder.model.visual.blocks.10.norm2.weight": "model-00001-of-00002.safetensors",
287
+ "vision_encoder.encoder.model.visual.blocks.11.attn.proj.bias": "model-00001-of-00002.safetensors",
288
+ "vision_encoder.encoder.model.visual.blocks.11.attn.proj.weight": "model-00001-of-00002.safetensors",
289
+ "vision_encoder.encoder.model.visual.blocks.11.attn.qkv.bias": "model-00001-of-00002.safetensors",
290
+ "vision_encoder.encoder.model.visual.blocks.11.attn.qkv.weight": "model-00001-of-00002.safetensors",
291
+ "vision_encoder.encoder.model.visual.blocks.11.mlp.fc1.bias": "model-00001-of-00002.safetensors",
292
+ "vision_encoder.encoder.model.visual.blocks.11.mlp.fc1.weight": "model-00001-of-00002.safetensors",
293
+ "vision_encoder.encoder.model.visual.blocks.11.mlp.fc2.bias": "model-00001-of-00002.safetensors",
294
+ "vision_encoder.encoder.model.visual.blocks.11.mlp.fc2.weight": "model-00001-of-00002.safetensors",
295
+ "vision_encoder.encoder.model.visual.blocks.11.norm1.bias": "model-00001-of-00002.safetensors",
296
+ "vision_encoder.encoder.model.visual.blocks.11.norm1.weight": "model-00001-of-00002.safetensors",
297
+ "vision_encoder.encoder.model.visual.blocks.11.norm2.bias": "model-00001-of-00002.safetensors",
298
+ "vision_encoder.encoder.model.visual.blocks.11.norm2.weight": "model-00001-of-00002.safetensors",
299
+ "vision_encoder.encoder.model.visual.blocks.12.attn.proj.bias": "model-00001-of-00002.safetensors",
300
+ "vision_encoder.encoder.model.visual.blocks.12.attn.proj.weight": "model-00001-of-00002.safetensors",
301
+ "vision_encoder.encoder.model.visual.blocks.12.attn.qkv.bias": "model-00001-of-00002.safetensors",
302
+ "vision_encoder.encoder.model.visual.blocks.12.attn.qkv.weight": "model-00001-of-00002.safetensors",
303
+ "vision_encoder.encoder.model.visual.blocks.12.mlp.fc1.bias": "model-00001-of-00002.safetensors",
304
+ "vision_encoder.encoder.model.visual.blocks.12.mlp.fc1.weight": "model-00001-of-00002.safetensors",
305
+ "vision_encoder.encoder.model.visual.blocks.12.mlp.fc2.bias": "model-00001-of-00002.safetensors",
306
+ "vision_encoder.encoder.model.visual.blocks.12.mlp.fc2.weight": "model-00001-of-00002.safetensors",
307
+ "vision_encoder.encoder.model.visual.blocks.12.norm1.bias": "model-00001-of-00002.safetensors",
308
+ "vision_encoder.encoder.model.visual.blocks.12.norm1.weight": "model-00001-of-00002.safetensors",
309
+ "vision_encoder.encoder.model.visual.blocks.12.norm2.bias": "model-00001-of-00002.safetensors",
310
+ "vision_encoder.encoder.model.visual.blocks.12.norm2.weight": "model-00001-of-00002.safetensors",
311
+ "vision_encoder.encoder.model.visual.blocks.13.attn.proj.bias": "model-00001-of-00002.safetensors",
312
+ "vision_encoder.encoder.model.visual.blocks.13.attn.proj.weight": "model-00001-of-00002.safetensors",
313
+ "vision_encoder.encoder.model.visual.blocks.13.attn.qkv.bias": "model-00001-of-00002.safetensors",
314
+ "vision_encoder.encoder.model.visual.blocks.13.attn.qkv.weight": "model-00001-of-00002.safetensors",
315
+ "vision_encoder.encoder.model.visual.blocks.13.mlp.fc1.bias": "model-00001-of-00002.safetensors",
316
+ "vision_encoder.encoder.model.visual.blocks.13.mlp.fc1.weight": "model-00001-of-00002.safetensors",
317
+ "vision_encoder.encoder.model.visual.blocks.13.mlp.fc2.bias": "model-00001-of-00002.safetensors",
318
+ "vision_encoder.encoder.model.visual.blocks.13.mlp.fc2.weight": "model-00001-of-00002.safetensors",
319
+ "vision_encoder.encoder.model.visual.blocks.13.norm1.bias": "model-00001-of-00002.safetensors",
320
+ "vision_encoder.encoder.model.visual.blocks.13.norm1.weight": "model-00001-of-00002.safetensors",
321
+ "vision_encoder.encoder.model.visual.blocks.13.norm2.bias": "model-00001-of-00002.safetensors",
322
+ "vision_encoder.encoder.model.visual.blocks.13.norm2.weight": "model-00001-of-00002.safetensors",
323
+ "vision_encoder.encoder.model.visual.blocks.14.attn.proj.bias": "model-00001-of-00002.safetensors",
324
+ "vision_encoder.encoder.model.visual.blocks.14.attn.proj.weight": "model-00001-of-00002.safetensors",
325
+ "vision_encoder.encoder.model.visual.blocks.14.attn.qkv.bias": "model-00001-of-00002.safetensors",
326
+ "vision_encoder.encoder.model.visual.blocks.14.attn.qkv.weight": "model-00001-of-00002.safetensors",
327
+ "vision_encoder.encoder.model.visual.blocks.14.mlp.fc1.bias": "model-00001-of-00002.safetensors",
328
+ "vision_encoder.encoder.model.visual.blocks.14.mlp.fc1.weight": "model-00001-of-00002.safetensors",
329
+ "vision_encoder.encoder.model.visual.blocks.14.mlp.fc2.bias": "model-00001-of-00002.safetensors",
330
+ "vision_encoder.encoder.model.visual.blocks.14.mlp.fc2.weight": "model-00001-of-00002.safetensors",
331
+ "vision_encoder.encoder.model.visual.blocks.14.norm1.bias": "model-00001-of-00002.safetensors",
332
+ "vision_encoder.encoder.model.visual.blocks.14.norm1.weight": "model-00001-of-00002.safetensors",
333
+ "vision_encoder.encoder.model.visual.blocks.14.norm2.bias": "model-00001-of-00002.safetensors",
334
+ "vision_encoder.encoder.model.visual.blocks.14.norm2.weight": "model-00001-of-00002.safetensors",
335
+ "vision_encoder.encoder.model.visual.blocks.15.attn.proj.bias": "model-00001-of-00002.safetensors",
336
+ "vision_encoder.encoder.model.visual.blocks.15.attn.proj.weight": "model-00001-of-00002.safetensors",
337
+ "vision_encoder.encoder.model.visual.blocks.15.attn.qkv.bias": "model-00001-of-00002.safetensors",
338
+ "vision_encoder.encoder.model.visual.blocks.15.attn.qkv.weight": "model-00001-of-00002.safetensors",
339
+ "vision_encoder.encoder.model.visual.blocks.15.mlp.fc1.bias": "model-00001-of-00002.safetensors",
340
+ "vision_encoder.encoder.model.visual.blocks.15.mlp.fc1.weight": "model-00001-of-00002.safetensors",
341
+ "vision_encoder.encoder.model.visual.blocks.15.mlp.fc2.bias": "model-00001-of-00002.safetensors",
342
+ "vision_encoder.encoder.model.visual.blocks.15.mlp.fc2.weight": "model-00001-of-00002.safetensors",
343
+ "vision_encoder.encoder.model.visual.blocks.15.norm1.bias": "model-00001-of-00002.safetensors",
344
+ "vision_encoder.encoder.model.visual.blocks.15.norm1.weight": "model-00001-of-00002.safetensors",
345
+ "vision_encoder.encoder.model.visual.blocks.15.norm2.bias": "model-00001-of-00002.safetensors",
346
+ "vision_encoder.encoder.model.visual.blocks.15.norm2.weight": "model-00001-of-00002.safetensors",
347
+ "vision_encoder.encoder.model.visual.blocks.16.attn.proj.bias": "model-00001-of-00002.safetensors",
348
+ "vision_encoder.encoder.model.visual.blocks.16.attn.proj.weight": "model-00001-of-00002.safetensors",
349
+ "vision_encoder.encoder.model.visual.blocks.16.attn.qkv.bias": "model-00001-of-00002.safetensors",
350
+ "vision_encoder.encoder.model.visual.blocks.16.attn.qkv.weight": "model-00001-of-00002.safetensors",
351
+ "vision_encoder.encoder.model.visual.blocks.16.mlp.fc1.bias": "model-00001-of-00002.safetensors",
352
+ "vision_encoder.encoder.model.visual.blocks.16.mlp.fc1.weight": "model-00001-of-00002.safetensors",
353
+ "vision_encoder.encoder.model.visual.blocks.16.mlp.fc2.bias": "model-00001-of-00002.safetensors",
354
+ "vision_encoder.encoder.model.visual.blocks.16.mlp.fc2.weight": "model-00001-of-00002.safetensors",
355
+ "vision_encoder.encoder.model.visual.blocks.16.norm1.bias": "model-00001-of-00002.safetensors",
356
+ "vision_encoder.encoder.model.visual.blocks.16.norm1.weight": "model-00001-of-00002.safetensors",
357
+ "vision_encoder.encoder.model.visual.blocks.16.norm2.bias": "model-00001-of-00002.safetensors",
358
+ "vision_encoder.encoder.model.visual.blocks.16.norm2.weight": "model-00001-of-00002.safetensors",
359
+ "vision_encoder.encoder.model.visual.blocks.17.attn.proj.bias": "model-00001-of-00002.safetensors",
360
+ "vision_encoder.encoder.model.visual.blocks.17.attn.proj.weight": "model-00001-of-00002.safetensors",
361
+ "vision_encoder.encoder.model.visual.blocks.17.attn.qkv.bias": "model-00001-of-00002.safetensors",
362
+ "vision_encoder.encoder.model.visual.blocks.17.attn.qkv.weight": "model-00001-of-00002.safetensors",
363
+ "vision_encoder.encoder.model.visual.blocks.17.mlp.fc1.bias": "model-00001-of-00002.safetensors",
364
+ "vision_encoder.encoder.model.visual.blocks.17.mlp.fc1.weight": "model-00001-of-00002.safetensors",
365
+ "vision_encoder.encoder.model.visual.blocks.17.mlp.fc2.bias": "model-00001-of-00002.safetensors",
366
+ "vision_encoder.encoder.model.visual.blocks.17.mlp.fc2.weight": "model-00001-of-00002.safetensors",
367
+ "vision_encoder.encoder.model.visual.blocks.17.norm1.bias": "model-00001-of-00002.safetensors",
368
+ "vision_encoder.encoder.model.visual.blocks.17.norm1.weight": "model-00001-of-00002.safetensors",
369
+ "vision_encoder.encoder.model.visual.blocks.17.norm2.bias": "model-00001-of-00002.safetensors",
370
+ "vision_encoder.encoder.model.visual.blocks.17.norm2.weight": "model-00001-of-00002.safetensors",
371
+ "vision_encoder.encoder.model.visual.blocks.18.attn.proj.bias": "model-00001-of-00002.safetensors",
372
+ "vision_encoder.encoder.model.visual.blocks.18.attn.proj.weight": "model-00001-of-00002.safetensors",
373
+ "vision_encoder.encoder.model.visual.blocks.18.attn.qkv.bias": "model-00001-of-00002.safetensors",
374
+ "vision_encoder.encoder.model.visual.blocks.18.attn.qkv.weight": "model-00001-of-00002.safetensors",
375
+ "vision_encoder.encoder.model.visual.blocks.18.mlp.fc1.bias": "model-00001-of-00002.safetensors",
376
+ "vision_encoder.encoder.model.visual.blocks.18.mlp.fc1.weight": "model-00001-of-00002.safetensors",
377
+ "vision_encoder.encoder.model.visual.blocks.18.mlp.fc2.bias": "model-00001-of-00002.safetensors",
378
+ "vision_encoder.encoder.model.visual.blocks.18.mlp.fc2.weight": "model-00001-of-00002.safetensors",
379
+ "vision_encoder.encoder.model.visual.blocks.18.norm1.bias": "model-00001-of-00002.safetensors",
380
+ "vision_encoder.encoder.model.visual.blocks.18.norm1.weight": "model-00001-of-00002.safetensors",
381
+ "vision_encoder.encoder.model.visual.blocks.18.norm2.bias": "model-00001-of-00002.safetensors",
382
+ "vision_encoder.encoder.model.visual.blocks.18.norm2.weight": "model-00001-of-00002.safetensors",
383
+ "vision_encoder.encoder.model.visual.blocks.19.attn.proj.bias": "model-00001-of-00002.safetensors",
384
+ "vision_encoder.encoder.model.visual.blocks.19.attn.proj.weight": "model-00001-of-00002.safetensors",
385
+ "vision_encoder.encoder.model.visual.blocks.19.attn.qkv.bias": "model-00001-of-00002.safetensors",
386
+ "vision_encoder.encoder.model.visual.blocks.19.attn.qkv.weight": "model-00001-of-00002.safetensors",
387
+ "vision_encoder.encoder.model.visual.blocks.19.mlp.fc1.bias": "model-00001-of-00002.safetensors",
388
+ "vision_encoder.encoder.model.visual.blocks.19.mlp.fc1.weight": "model-00001-of-00002.safetensors",
389
+ "vision_encoder.encoder.model.visual.blocks.19.mlp.fc2.bias": "model-00001-of-00002.safetensors",
390
+ "vision_encoder.encoder.model.visual.blocks.19.mlp.fc2.weight": "model-00001-of-00002.safetensors",
391
+ "vision_encoder.encoder.model.visual.blocks.19.norm1.bias": "model-00001-of-00002.safetensors",
392
+ "vision_encoder.encoder.model.visual.blocks.19.norm1.weight": "model-00001-of-00002.safetensors",
393
+ "vision_encoder.encoder.model.visual.blocks.19.norm2.bias": "model-00001-of-00002.safetensors",
394
+ "vision_encoder.encoder.model.visual.blocks.19.norm2.weight": "model-00001-of-00002.safetensors",
395
+ "vision_encoder.encoder.model.visual.blocks.2.attn.proj.bias": "model-00001-of-00002.safetensors",
396
+ "vision_encoder.encoder.model.visual.blocks.2.attn.proj.weight": "model-00001-of-00002.safetensors",
397
+ "vision_encoder.encoder.model.visual.blocks.2.attn.qkv.bias": "model-00001-of-00002.safetensors",
398
+ "vision_encoder.encoder.model.visual.blocks.2.attn.qkv.weight": "model-00001-of-00002.safetensors",
399
+ "vision_encoder.encoder.model.visual.blocks.2.mlp.fc1.bias": "model-00001-of-00002.safetensors",
400
+ "vision_encoder.encoder.model.visual.blocks.2.mlp.fc1.weight": "model-00001-of-00002.safetensors",
401
+ "vision_encoder.encoder.model.visual.blocks.2.mlp.fc2.bias": "model-00001-of-00002.safetensors",
402
+ "vision_encoder.encoder.model.visual.blocks.2.mlp.fc2.weight": "model-00001-of-00002.safetensors",
403
+ "vision_encoder.encoder.model.visual.blocks.2.norm1.bias": "model-00001-of-00002.safetensors",
404
+ "vision_encoder.encoder.model.visual.blocks.2.norm1.weight": "model-00001-of-00002.safetensors",
405
+ "vision_encoder.encoder.model.visual.blocks.2.norm2.bias": "model-00001-of-00002.safetensors",
406
+ "vision_encoder.encoder.model.visual.blocks.2.norm2.weight": "model-00001-of-00002.safetensors",
407
+ "vision_encoder.encoder.model.visual.blocks.20.attn.proj.bias": "model-00001-of-00002.safetensors",
408
+ "vision_encoder.encoder.model.visual.blocks.20.attn.proj.weight": "model-00001-of-00002.safetensors",
409
+ "vision_encoder.encoder.model.visual.blocks.20.attn.qkv.bias": "model-00001-of-00002.safetensors",
410
+ "vision_encoder.encoder.model.visual.blocks.20.attn.qkv.weight": "model-00001-of-00002.safetensors",
411
+ "vision_encoder.encoder.model.visual.blocks.20.mlp.fc1.bias": "model-00001-of-00002.safetensors",
412
+ "vision_encoder.encoder.model.visual.blocks.20.mlp.fc1.weight": "model-00001-of-00002.safetensors",
413
+ "vision_encoder.encoder.model.visual.blocks.20.mlp.fc2.bias": "model-00001-of-00002.safetensors",
414
+ "vision_encoder.encoder.model.visual.blocks.20.mlp.fc2.weight": "model-00001-of-00002.safetensors",
415
+ "vision_encoder.encoder.model.visual.blocks.20.norm1.bias": "model-00001-of-00002.safetensors",
416
+ "vision_encoder.encoder.model.visual.blocks.20.norm1.weight": "model-00001-of-00002.safetensors",
417
+ "vision_encoder.encoder.model.visual.blocks.20.norm2.bias": "model-00001-of-00002.safetensors",
418
+ "vision_encoder.encoder.model.visual.blocks.20.norm2.weight": "model-00001-of-00002.safetensors",
419
+ "vision_encoder.encoder.model.visual.blocks.21.attn.proj.bias": "model-00001-of-00002.safetensors",
420
+ "vision_encoder.encoder.model.visual.blocks.21.attn.proj.weight": "model-00001-of-00002.safetensors",
421
+ "vision_encoder.encoder.model.visual.blocks.21.attn.qkv.bias": "model-00001-of-00002.safetensors",
422
+ "vision_encoder.encoder.model.visual.blocks.21.attn.qkv.weight": "model-00001-of-00002.safetensors",
423
+ "vision_encoder.encoder.model.visual.blocks.21.mlp.fc1.bias": "model-00001-of-00002.safetensors",
424
+ "vision_encoder.encoder.model.visual.blocks.21.mlp.fc1.weight": "model-00001-of-00002.safetensors",
425
+ "vision_encoder.encoder.model.visual.blocks.21.mlp.fc2.bias": "model-00001-of-00002.safetensors",
426
+ "vision_encoder.encoder.model.visual.blocks.21.mlp.fc2.weight": "model-00001-of-00002.safetensors",
427
+ "vision_encoder.encoder.model.visual.blocks.21.norm1.bias": "model-00001-of-00002.safetensors",
428
+ "vision_encoder.encoder.model.visual.blocks.21.norm1.weight": "model-00001-of-00002.safetensors",
429
+ "vision_encoder.encoder.model.visual.blocks.21.norm2.bias": "model-00001-of-00002.safetensors",
430
+ "vision_encoder.encoder.model.visual.blocks.21.norm2.weight": "model-00001-of-00002.safetensors",
431
+ "vision_encoder.encoder.model.visual.blocks.22.attn.proj.bias": "model-00001-of-00002.safetensors",
432
+ "vision_encoder.encoder.model.visual.blocks.22.attn.proj.weight": "model-00001-of-00002.safetensors",
433
+ "vision_encoder.encoder.model.visual.blocks.22.attn.qkv.bias": "model-00001-of-00002.safetensors",
434
+ "vision_encoder.encoder.model.visual.blocks.22.attn.qkv.weight": "model-00001-of-00002.safetensors",
435
+ "vision_encoder.encoder.model.visual.blocks.22.mlp.fc1.bias": "model-00001-of-00002.safetensors",
436
+ "vision_encoder.encoder.model.visual.blocks.22.mlp.fc1.weight": "model-00001-of-00002.safetensors",
437
+ "vision_encoder.encoder.model.visual.blocks.22.mlp.fc2.bias": "model-00001-of-00002.safetensors",
438
+ "vision_encoder.encoder.model.visual.blocks.22.mlp.fc2.weight": "model-00001-of-00002.safetensors",
439
+ "vision_encoder.encoder.model.visual.blocks.22.norm1.bias": "model-00001-of-00002.safetensors",
440
+ "vision_encoder.encoder.model.visual.blocks.22.norm1.weight": "model-00001-of-00002.safetensors",
441
+ "vision_encoder.encoder.model.visual.blocks.22.norm2.bias": "model-00001-of-00002.safetensors",
442
+ "vision_encoder.encoder.model.visual.blocks.22.norm2.weight": "model-00001-of-00002.safetensors",
443
+ "vision_encoder.encoder.model.visual.blocks.23.attn.proj.bias": "model-00001-of-00002.safetensors",
444
+ "vision_encoder.encoder.model.visual.blocks.23.attn.proj.weight": "model-00001-of-00002.safetensors",
445
+ "vision_encoder.encoder.model.visual.blocks.23.attn.qkv.bias": "model-00001-of-00002.safetensors",
446
+ "vision_encoder.encoder.model.visual.blocks.23.attn.qkv.weight": "model-00001-of-00002.safetensors",
447
+ "vision_encoder.encoder.model.visual.blocks.23.mlp.fc1.bias": "model-00001-of-00002.safetensors",
448
+ "vision_encoder.encoder.model.visual.blocks.23.mlp.fc1.weight": "model-00001-of-00002.safetensors",
449
+ "vision_encoder.encoder.model.visual.blocks.23.mlp.fc2.bias": "model-00001-of-00002.safetensors",
450
+ "vision_encoder.encoder.model.visual.blocks.23.mlp.fc2.weight": "model-00001-of-00002.safetensors",
451
+ "vision_encoder.encoder.model.visual.blocks.23.norm1.bias": "model-00001-of-00002.safetensors",
452
+ "vision_encoder.encoder.model.visual.blocks.23.norm1.weight": "model-00001-of-00002.safetensors",
453
+ "vision_encoder.encoder.model.visual.blocks.23.norm2.bias": "model-00001-of-00002.safetensors",
454
+ "vision_encoder.encoder.model.visual.blocks.23.norm2.weight": "model-00001-of-00002.safetensors",
455
+ "vision_encoder.encoder.model.visual.blocks.24.attn.proj.bias": "model-00001-of-00002.safetensors",
456
+ "vision_encoder.encoder.model.visual.blocks.24.attn.proj.weight": "model-00001-of-00002.safetensors",
457
+ "vision_encoder.encoder.model.visual.blocks.24.attn.qkv.bias": "model-00001-of-00002.safetensors",
458
+ "vision_encoder.encoder.model.visual.blocks.24.attn.qkv.weight": "model-00001-of-00002.safetensors",
459
+ "vision_encoder.encoder.model.visual.blocks.24.mlp.fc1.bias": "model-00001-of-00002.safetensors",
460
+ "vision_encoder.encoder.model.visual.blocks.24.mlp.fc1.weight": "model-00001-of-00002.safetensors",
461
+ "vision_encoder.encoder.model.visual.blocks.24.mlp.fc2.bias": "model-00001-of-00002.safetensors",
462
+ "vision_encoder.encoder.model.visual.blocks.24.mlp.fc2.weight": "model-00001-of-00002.safetensors",
463
+ "vision_encoder.encoder.model.visual.blocks.24.norm1.bias": "model-00001-of-00002.safetensors",
464
+ "vision_encoder.encoder.model.visual.blocks.24.norm1.weight": "model-00001-of-00002.safetensors",
465
+ "vision_encoder.encoder.model.visual.blocks.24.norm2.bias": "model-00001-of-00002.safetensors",
466
+ "vision_encoder.encoder.model.visual.blocks.24.norm2.weight": "model-00001-of-00002.safetensors",
467
+ "vision_encoder.encoder.model.visual.blocks.25.attn.proj.bias": "model-00001-of-00002.safetensors",
468
+ "vision_encoder.encoder.model.visual.blocks.25.attn.proj.weight": "model-00001-of-00002.safetensors",
469
+ "vision_encoder.encoder.model.visual.blocks.25.attn.qkv.bias": "model-00001-of-00002.safetensors",
470
+ "vision_encoder.encoder.model.visual.blocks.25.attn.qkv.weight": "model-00001-of-00002.safetensors",
471
+ "vision_encoder.encoder.model.visual.blocks.25.mlp.fc1.bias": "model-00001-of-00002.safetensors",
472
+ "vision_encoder.encoder.model.visual.blocks.25.mlp.fc1.weight": "model-00001-of-00002.safetensors",
473
+ "vision_encoder.encoder.model.visual.blocks.25.mlp.fc2.bias": "model-00001-of-00002.safetensors",
474
+ "vision_encoder.encoder.model.visual.blocks.25.mlp.fc2.weight": "model-00001-of-00002.safetensors",
475
+ "vision_encoder.encoder.model.visual.blocks.25.norm1.bias": "model-00001-of-00002.safetensors",
476
+ "vision_encoder.encoder.model.visual.blocks.25.norm1.weight": "model-00001-of-00002.safetensors",
477
+ "vision_encoder.encoder.model.visual.blocks.25.norm2.bias": "model-00001-of-00002.safetensors",
478
+ "vision_encoder.encoder.model.visual.blocks.25.norm2.weight": "model-00001-of-00002.safetensors",
479
+ "vision_encoder.encoder.model.visual.blocks.26.attn.proj.bias": "model-00001-of-00002.safetensors",
480
+ "vision_encoder.encoder.model.visual.blocks.26.attn.proj.weight": "model-00001-of-00002.safetensors",
481
+ "vision_encoder.encoder.model.visual.blocks.26.attn.qkv.bias": "model-00001-of-00002.safetensors",
482
+ "vision_encoder.encoder.model.visual.blocks.26.attn.qkv.weight": "model-00001-of-00002.safetensors",
483
+ "vision_encoder.encoder.model.visual.blocks.26.mlp.fc1.bias": "model-00001-of-00002.safetensors",
484
+ "vision_encoder.encoder.model.visual.blocks.26.mlp.fc1.weight": "model-00001-of-00002.safetensors",
485
+ "vision_encoder.encoder.model.visual.blocks.26.mlp.fc2.bias": "model-00001-of-00002.safetensors",
486
+ "vision_encoder.encoder.model.visual.blocks.26.mlp.fc2.weight": "model-00001-of-00002.safetensors",
487
+ "vision_encoder.encoder.model.visual.blocks.26.norm1.bias": "model-00001-of-00002.safetensors",
488
+ "vision_encoder.encoder.model.visual.blocks.26.norm1.weight": "model-00001-of-00002.safetensors",
489
+ "vision_encoder.encoder.model.visual.blocks.26.norm2.bias": "model-00001-of-00002.safetensors",
490
+ "vision_encoder.encoder.model.visual.blocks.26.norm2.weight": "model-00001-of-00002.safetensors",
491
+ "vision_encoder.encoder.model.visual.blocks.3.attn.proj.bias": "model-00001-of-00002.safetensors",
492
+ "vision_encoder.encoder.model.visual.blocks.3.attn.proj.weight": "model-00001-of-00002.safetensors",
493
+ "vision_encoder.encoder.model.visual.blocks.3.attn.qkv.bias": "model-00001-of-00002.safetensors",
494
+ "vision_encoder.encoder.model.visual.blocks.3.attn.qkv.weight": "model-00001-of-00002.safetensors",
495
+ "vision_encoder.encoder.model.visual.blocks.3.mlp.fc1.bias": "model-00001-of-00002.safetensors",
496
+ "vision_encoder.encoder.model.visual.blocks.3.mlp.fc1.weight": "model-00001-of-00002.safetensors",
497
+ "vision_encoder.encoder.model.visual.blocks.3.mlp.fc2.bias": "model-00001-of-00002.safetensors",
498
+ "vision_encoder.encoder.model.visual.blocks.3.mlp.fc2.weight": "model-00001-of-00002.safetensors",
499
+ "vision_encoder.encoder.model.visual.blocks.3.norm1.bias": "model-00001-of-00002.safetensors",
500
+ "vision_encoder.encoder.model.visual.blocks.3.norm1.weight": "model-00001-of-00002.safetensors",
501
+ "vision_encoder.encoder.model.visual.blocks.3.norm2.bias": "model-00001-of-00002.safetensors",
502
+ "vision_encoder.encoder.model.visual.blocks.3.norm2.weight": "model-00001-of-00002.safetensors",
503
+ "vision_encoder.encoder.model.visual.blocks.4.attn.proj.bias": "model-00001-of-00002.safetensors",
504
+ "vision_encoder.encoder.model.visual.blocks.4.attn.proj.weight": "model-00001-of-00002.safetensors",
505
+ "vision_encoder.encoder.model.visual.blocks.4.attn.qkv.bias": "model-00001-of-00002.safetensors",
506
+ "vision_encoder.encoder.model.visual.blocks.4.attn.qkv.weight": "model-00001-of-00002.safetensors",
507
+ "vision_encoder.encoder.model.visual.blocks.4.mlp.fc1.bias": "model-00001-of-00002.safetensors",
508
+ "vision_encoder.encoder.model.visual.blocks.4.mlp.fc1.weight": "model-00001-of-00002.safetensors",
509
+ "vision_encoder.encoder.model.visual.blocks.4.mlp.fc2.bias": "model-00001-of-00002.safetensors",
510
+ "vision_encoder.encoder.model.visual.blocks.4.mlp.fc2.weight": "model-00001-of-00002.safetensors",
511
+ "vision_encoder.encoder.model.visual.blocks.4.norm1.bias": "model-00001-of-00002.safetensors",
512
+ "vision_encoder.encoder.model.visual.blocks.4.norm1.weight": "model-00001-of-00002.safetensors",
513
+ "vision_encoder.encoder.model.visual.blocks.4.norm2.bias": "model-00001-of-00002.safetensors",
514
+ "vision_encoder.encoder.model.visual.blocks.4.norm2.weight": "model-00001-of-00002.safetensors",
515
+ "vision_encoder.encoder.model.visual.blocks.5.attn.proj.bias": "model-00001-of-00002.safetensors",
516
+ "vision_encoder.encoder.model.visual.blocks.5.attn.proj.weight": "model-00001-of-00002.safetensors",
517
+ "vision_encoder.encoder.model.visual.blocks.5.attn.qkv.bias": "model-00001-of-00002.safetensors",
518
+ "vision_encoder.encoder.model.visual.blocks.5.attn.qkv.weight": "model-00001-of-00002.safetensors",
519
+ "vision_encoder.encoder.model.visual.blocks.5.mlp.fc1.bias": "model-00001-of-00002.safetensors",
520
+ "vision_encoder.encoder.model.visual.blocks.5.mlp.fc1.weight": "model-00001-of-00002.safetensors",
521
+ "vision_encoder.encoder.model.visual.blocks.5.mlp.fc2.bias": "model-00001-of-00002.safetensors",
522
+ "vision_encoder.encoder.model.visual.blocks.5.mlp.fc2.weight": "model-00001-of-00002.safetensors",
523
+ "vision_encoder.encoder.model.visual.blocks.5.norm1.bias": "model-00001-of-00002.safetensors",
524
+ "vision_encoder.encoder.model.visual.blocks.5.norm1.weight": "model-00001-of-00002.safetensors",
525
+ "vision_encoder.encoder.model.visual.blocks.5.norm2.bias": "model-00001-of-00002.safetensors",
526
+ "vision_encoder.encoder.model.visual.blocks.5.norm2.weight": "model-00001-of-00002.safetensors",
527
+ "vision_encoder.encoder.model.visual.blocks.6.attn.proj.bias": "model-00001-of-00002.safetensors",
528
+ "vision_encoder.encoder.model.visual.blocks.6.attn.proj.weight": "model-00001-of-00002.safetensors",
529
+ "vision_encoder.encoder.model.visual.blocks.6.attn.qkv.bias": "model-00001-of-00002.safetensors",
530
+ "vision_encoder.encoder.model.visual.blocks.6.attn.qkv.weight": "model-00001-of-00002.safetensors",
531
+ "vision_encoder.encoder.model.visual.blocks.6.mlp.fc1.bias": "model-00001-of-00002.safetensors",
532
+ "vision_encoder.encoder.model.visual.blocks.6.mlp.fc1.weight": "model-00001-of-00002.safetensors",
533
+ "vision_encoder.encoder.model.visual.blocks.6.mlp.fc2.bias": "model-00001-of-00002.safetensors",
534
+ "vision_encoder.encoder.model.visual.blocks.6.mlp.fc2.weight": "model-00001-of-00002.safetensors",
535
+ "vision_encoder.encoder.model.visual.blocks.6.norm1.bias": "model-00001-of-00002.safetensors",
536
+ "vision_encoder.encoder.model.visual.blocks.6.norm1.weight": "model-00001-of-00002.safetensors",
537
+ "vision_encoder.encoder.model.visual.blocks.6.norm2.bias": "model-00001-of-00002.safetensors",
538
+ "vision_encoder.encoder.model.visual.blocks.6.norm2.weight": "model-00001-of-00002.safetensors",
539
+ "vision_encoder.encoder.model.visual.blocks.7.attn.proj.bias": "model-00001-of-00002.safetensors",
540
+ "vision_encoder.encoder.model.visual.blocks.7.attn.proj.weight": "model-00001-of-00002.safetensors",
541
+ "vision_encoder.encoder.model.visual.blocks.7.attn.qkv.bias": "model-00001-of-00002.safetensors",
542
+ "vision_encoder.encoder.model.visual.blocks.7.attn.qkv.weight": "model-00001-of-00002.safetensors",
543
+ "vision_encoder.encoder.model.visual.blocks.7.mlp.fc1.bias": "model-00001-of-00002.safetensors",
544
+ "vision_encoder.encoder.model.visual.blocks.7.mlp.fc1.weight": "model-00001-of-00002.safetensors",
545
+ "vision_encoder.encoder.model.visual.blocks.7.mlp.fc2.bias": "model-00001-of-00002.safetensors",
546
+ "vision_encoder.encoder.model.visual.blocks.7.mlp.fc2.weight": "model-00001-of-00002.safetensors",
547
+ "vision_encoder.encoder.model.visual.blocks.7.norm1.bias": "model-00001-of-00002.safetensors",
548
+ "vision_encoder.encoder.model.visual.blocks.7.norm1.weight": "model-00001-of-00002.safetensors",
549
+ "vision_encoder.encoder.model.visual.blocks.7.norm2.bias": "model-00001-of-00002.safetensors",
550
+ "vision_encoder.encoder.model.visual.blocks.7.norm2.weight": "model-00001-of-00002.safetensors",
551
+ "vision_encoder.encoder.model.visual.blocks.8.attn.proj.bias": "model-00001-of-00002.safetensors",
552
+ "vision_encoder.encoder.model.visual.blocks.8.attn.proj.weight": "model-00001-of-00002.safetensors",
553
+ "vision_encoder.encoder.model.visual.blocks.8.attn.qkv.bias": "model-00001-of-00002.safetensors",
554
+ "vision_encoder.encoder.model.visual.blocks.8.attn.qkv.weight": "model-00001-of-00002.safetensors",
555
+ "vision_encoder.encoder.model.visual.blocks.8.mlp.fc1.bias": "model-00001-of-00002.safetensors",
556
+ "vision_encoder.encoder.model.visual.blocks.8.mlp.fc1.weight": "model-00001-of-00002.safetensors",
557
+ "vision_encoder.encoder.model.visual.blocks.8.mlp.fc2.bias": "model-00001-of-00002.safetensors",
558
+ "vision_encoder.encoder.model.visual.blocks.8.mlp.fc2.weight": "model-00001-of-00002.safetensors",
559
+ "vision_encoder.encoder.model.visual.blocks.8.norm1.bias": "model-00001-of-00002.safetensors",
560
+ "vision_encoder.encoder.model.visual.blocks.8.norm1.weight": "model-00001-of-00002.safetensors",
561
+ "vision_encoder.encoder.model.visual.blocks.8.norm2.bias": "model-00001-of-00002.safetensors",
562
+ "vision_encoder.encoder.model.visual.blocks.8.norm2.weight": "model-00001-of-00002.safetensors",
563
+ "vision_encoder.encoder.model.visual.blocks.9.attn.proj.bias": "model-00001-of-00002.safetensors",
564
+ "vision_encoder.encoder.model.visual.blocks.9.attn.proj.weight": "model-00001-of-00002.safetensors",
565
+ "vision_encoder.encoder.model.visual.blocks.9.attn.qkv.bias": "model-00001-of-00002.safetensors",
566
+ "vision_encoder.encoder.model.visual.blocks.9.attn.qkv.weight": "model-00001-of-00002.safetensors",
567
+ "vision_encoder.encoder.model.visual.blocks.9.mlp.fc1.bias": "model-00001-of-00002.safetensors",
568
+ "vision_encoder.encoder.model.visual.blocks.9.mlp.fc1.weight": "model-00001-of-00002.safetensors",
569
+ "vision_encoder.encoder.model.visual.blocks.9.mlp.fc2.bias": "model-00001-of-00002.safetensors",
570
+ "vision_encoder.encoder.model.visual.blocks.9.mlp.fc2.weight": "model-00001-of-00002.safetensors",
571
+ "vision_encoder.encoder.model.visual.blocks.9.norm1.bias": "model-00001-of-00002.safetensors",
572
+ "vision_encoder.encoder.model.visual.blocks.9.norm1.weight": "model-00001-of-00002.safetensors",
573
+ "vision_encoder.encoder.model.visual.blocks.9.norm2.bias": "model-00001-of-00002.safetensors",
574
+ "vision_encoder.encoder.model.visual.blocks.9.norm2.weight": "model-00001-of-00002.safetensors",
575
+ "vision_encoder.encoder.model.visual.norm.bias": "model-00001-of-00002.safetensors",
576
+ "vision_encoder.encoder.model.visual.norm.weight": "model-00001-of-00002.safetensors",
577
+ "vision_encoder.encoder.model.visual.patch_embed.linear.bias": "model-00001-of-00002.safetensors",
578
+ "vision_encoder.encoder.model.visual.patch_embed.linear.weight": "model-00001-of-00002.safetensors",
579
+ "vision_encoder.encoder.model.visual.pos_embed": "model-00001-of-00002.safetensors",
580
+ "vision_encoder.projection.mlp.fc1.bias": "model-00001-of-00002.safetensors",
581
+ "vision_encoder.projection.mlp.fc1.weight": "model-00001-of-00002.safetensors",
582
+ "vision_encoder.projection.mlp.fc2.bias": "model-00001-of-00002.safetensors",
583
+ "vision_encoder.projection.mlp.fc2.weight": "model-00001-of-00002.safetensors"
584
+ }
585
+ }
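
The weight map above assigns every parameter name to one of the two safetensors shards, so a loader only needs to open the shard that actually holds a given tensor. A minimal sketch of resolving one tensor through the index (assuming a local checkout of this repository in a directory called `moondream`, a hypothetical path):

```python
import json
from safetensors import safe_open

repo_dir = "moondream"  # hypothetical local checkout of this repository

# The index maps each parameter name to the shard file that stores it.
with open(f"{repo_dir}/model.safetensors.index.json") as f:
    index = json.load(f)

name = "vision_encoder.projection.mlp.fc2.weight"
shard = index["weight_map"][name]  # e.g. "model-00001-of-00002.safetensors"

# Open only the shard that holds this tensor and read it lazily.
with safe_open(f"{repo_dir}/{shard}", framework="pt") as shard_file:
    tensor = shard_file.get_tensor(name)

print(name, tuple(tensor.shape))
```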
modeling_phi.py ADDED
@@ -0,0 +1,1190 @@
1
+ # coding=utf-8
2
+ # Copyright 2023 Microsoft and the HuggingFace Inc. team. All rights reserved.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+
16
+ """ PyTorch Phi model."""
17
+
18
+
19
+ import math
20
+ from typing import List, Optional, Tuple, Union
21
+
22
+ import torch
23
+ import torch.nn.functional as F
24
+ import torch.utils.checkpoint
25
+ from torch import nn
26
+ from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
27
+
28
+ from transformers.activations import ACT2FN
29
+ from transformers.cache_utils import Cache, DynamicCache
30
+ from transformers.modeling_attn_mask_utils import _prepare_4d_causal_attention_mask
31
+ from transformers.modeling_outputs import (
32
+ BaseModelOutputWithPast,
33
+ CausalLMOutputWithPast,
34
+ SequenceClassifierOutputWithPast,
35
+ )
36
+ from transformers.modeling_utils import PreTrainedModel
37
+ from transformers.utils import (
38
+ is_flash_attn_2_available,
39
+ is_flash_attn_greater_or_equal_2_10,
40
+ logging,
41
+ )
42
+ from .configuration_moondream import PhiConfig
43
+
44
+
45
+ try: # noqa: SIM105
46
+ if is_flash_attn_2_available():
47
+ from flash_attn import flash_attn_func, flash_attn_varlen_func
48
+ from flash_attn.bert_padding import index_first_axis, pad_input, unpad_input
49
+ except ImportError:
50
+ # Workaround for https://github.com/huggingface/transformers/issues/28459,
51
+ # don't move to contextlib.suppress(ImportError)
52
+ pass
53
+
54
+
55
+ logger = logging.get_logger(__name__)
56
+
57
+
58
+ # Copied from transformers.models.llama.modeling_llama._get_unpad_data
59
+ def _get_unpad_data(attention_mask):
60
+ seqlens_in_batch = attention_mask.sum(dim=-1, dtype=torch.int32)
61
+ indices = torch.nonzero(attention_mask.flatten(), as_tuple=False).flatten()
62
+ max_seqlen_in_batch = seqlens_in_batch.max().item()
63
+ cu_seqlens = F.pad(
64
+ torch.cumsum(seqlens_in_batch, dim=0, dtype=torch.int32), (1, 0)
65
+ )
66
+ return (
67
+ indices,
68
+ cu_seqlens,
69
+ max_seqlen_in_batch,
70
+ )
71
+
72
+
73
+ # Copied from transformers.models.llama.modeling_llama.LlamaRotaryEmbedding with Llama->Phi
74
+ class PhiRotaryEmbedding(nn.Module):
75
+ def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None):
76
+ super().__init__()
77
+
78
+ self.dim = dim
79
+ self.max_position_embeddings = max_position_embeddings
80
+ self.base = base
81
+ inv_freq = 1.0 / (
82
+ self.base ** (torch.arange(0, self.dim, 2).float().to(device) / self.dim)
83
+ )
84
+ self.register_buffer("inv_freq", inv_freq, persistent=False)
85
+
86
+ # Build here to make `torch.jit.trace` work.
87
+ self._set_cos_sin_cache(
88
+ seq_len=max_position_embeddings,
89
+ device=self.inv_freq.device,
90
+ dtype=torch.get_default_dtype(),
91
+ )
92
+
93
+ def _set_cos_sin_cache(self, seq_len, device, dtype):
94
+ self.max_seq_len_cached = seq_len
95
+ t = torch.arange(
96
+ self.max_seq_len_cached, device=device, dtype=self.inv_freq.dtype
97
+ )
98
+
99
+ freqs = torch.outer(t, self.inv_freq)
100
+ # Different from paper, but it uses a different permutation in order to obtain the same calculation
101
+ emb = torch.cat((freqs, freqs), dim=-1)
102
+ self.register_buffer("cos_cached", emb.cos().to(dtype), persistent=False)
103
+ self.register_buffer("sin_cached", emb.sin().to(dtype), persistent=False)
104
+
105
+ def forward(self, x, seq_len=None):
106
+ # x: [bs, num_attention_heads, seq_len, head_size]
107
+ if seq_len > self.max_seq_len_cached:
108
+ self._set_cos_sin_cache(seq_len=seq_len, device=x.device, dtype=x.dtype)
109
+
110
+ return (
111
+ self.cos_cached[:seq_len].to(dtype=x.dtype),
112
+ self.sin_cached[:seq_len].to(dtype=x.dtype),
113
+ )
114
+
115
+
116
+ # Copied from transformers.models.llama.modeling_llama.LlamaLinearScalingRotaryEmbedding with Llama->Phi
117
+ class PhiLinearScalingRotaryEmbedding(PhiRotaryEmbedding):
118
+ """PhiRotaryEmbedding extended with linear scaling. Credits to the Reddit user /u/kaiokendev"""
119
+
120
+ def __init__(
121
+ self,
122
+ dim,
123
+ max_position_embeddings=2048,
124
+ base=10000,
125
+ device=None,
126
+ scaling_factor=1.0,
127
+ ):
128
+ self.scaling_factor = scaling_factor
129
+ super().__init__(dim, max_position_embeddings, base, device)
130
+
131
+ def _set_cos_sin_cache(self, seq_len, device, dtype):
132
+ self.max_seq_len_cached = seq_len
133
+ t = torch.arange(
134
+ self.max_seq_len_cached, device=device, dtype=self.inv_freq.dtype
135
+ )
136
+ t = t / self.scaling_factor
137
+
138
+ freqs = torch.outer(t, self.inv_freq)
139
+ # Different from paper, but it uses a different permutation in order to obtain the same calculation
140
+ emb = torch.cat((freqs, freqs), dim=-1)
141
+ self.register_buffer("cos_cached", emb.cos().to(dtype), persistent=False)
142
+ self.register_buffer("sin_cached", emb.sin().to(dtype), persistent=False)
143
+
144
+
145
+ # Copied from transformers.models.llama.modeling_llama.LlamaDynamicNTKScalingRotaryEmbedding with Llama->Phi
146
+ class PhiDynamicNTKScalingRotaryEmbedding(PhiRotaryEmbedding):
147
+ """PhiRotaryEmbedding extended with Dynamic NTK scaling. Credits to the Reddit users /u/bloc97 and /u/emozilla"""
148
+
149
+ def __init__(
150
+ self,
151
+ dim,
152
+ max_position_embeddings=2048,
153
+ base=10000,
154
+ device=None,
155
+ scaling_factor=1.0,
156
+ ):
157
+ self.scaling_factor = scaling_factor
158
+ super().__init__(dim, max_position_embeddings, base, device)
159
+
160
+ def _set_cos_sin_cache(self, seq_len, device, dtype):
161
+ self.max_seq_len_cached = seq_len
162
+
163
+ if seq_len > self.max_position_embeddings:
164
+ base = self.base * (
165
+ (self.scaling_factor * seq_len / self.max_position_embeddings)
166
+ - (self.scaling_factor - 1)
167
+ ) ** (self.dim / (self.dim - 2))
168
+ inv_freq = 1.0 / (
169
+ base ** (torch.arange(0, self.dim, 2).float().to(device) / self.dim)
170
+ )
171
+ self.register_buffer("inv_freq", inv_freq, persistent=False)
172
+
173
+ t = torch.arange(
174
+ self.max_seq_len_cached, device=device, dtype=self.inv_freq.dtype
175
+ )
176
+
177
+ freqs = torch.outer(t, self.inv_freq)
178
+ # Different from paper, but it uses a different permutation in order to obtain the same calculation
179
+ emb = torch.cat((freqs, freqs), dim=-1)
180
+ self.register_buffer("cos_cached", emb.cos().to(dtype), persistent=False)
181
+ self.register_buffer("sin_cached", emb.sin().to(dtype), persistent=False)
182
+
183
+
184
+ # Copied from transformers.models.llama.modeling_llama.rotate_half
185
+ def rotate_half(x):
186
+ """Rotates half the hidden dims of the input."""
187
+ x1 = x[..., : x.shape[-1] // 2]
188
+ x2 = x[..., x.shape[-1] // 2 :]
189
+ return torch.cat((-x2, x1), dim=-1)
190
+
191
+
192
+ # Copied from transformers.models.llama.modeling_llama.apply_rotary_pos_emb
193
+ def apply_rotary_pos_emb(q, k, cos, sin, position_ids, unsqueeze_dim=1):
194
+ """Applies Rotary Position Embedding to the query and key tensors.
195
+
196
+ Args:
197
+ q (`torch.Tensor`): The query tensor.
198
+ k (`torch.Tensor`): The key tensor.
199
+ cos (`torch.Tensor`): The cosine part of the rotary embedding.
200
+ sin (`torch.Tensor`): The sine part of the rotary embedding.
201
+ position_ids (`torch.Tensor`):
202
+ The position indices of the tokens corresponding to the query and key tensors. For example, this can be
203
+ used to pass offsetted position ids when working with a KV-cache.
204
+ unsqueeze_dim (`int`, *optional*, defaults to 1):
205
+ The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and
206
+ sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note
207
+ that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and
208
+ k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes
209
+ cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have
210
+ the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2.
211
+ Returns:
212
+ `tuple(torch.Tensor)` comprising of the query and key tensors rotated using the Rotary Position Embedding.
213
+ """
214
+ cos = cos[position_ids].unsqueeze(unsqueeze_dim)
215
+ sin = sin[position_ids].unsqueeze(unsqueeze_dim)
216
+ q_embed = (q * cos) + (rotate_half(q) * sin)
217
+ k_embed = (k * cos) + (rotate_half(k) * sin)
218
+ return q_embed, k_embed
219
+
220
+
221
+ # Copied from transformers.models.clip.modeling_clip.CLIPMLP with CLIP->Phi
222
+ class PhiMLP(nn.Module):
223
+ def __init__(self, config):
224
+ super().__init__()
225
+ self.config = config
226
+ self.activation_fn = ACT2FN[config.hidden_act]
227
+ self.fc1 = nn.Linear(config.hidden_size, config.intermediate_size)
228
+ self.fc2 = nn.Linear(config.intermediate_size, config.hidden_size)
229
+
230
+ def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
231
+ hidden_states = self.fc1(hidden_states)
232
+ hidden_states = self.activation_fn(hidden_states)
233
+ hidden_states = self.fc2(hidden_states)
234
+ return hidden_states
235
+
236
+
237
+ # Copied from transformers.models.llama.modeling_llama.repeat_kv with llama->phi
238
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
239
+ """
240
+ This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
241
+ num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
242
+ """
243
+ batch, num_key_value_heads, slen, head_dim = hidden_states.shape
244
+ if n_rep == 1:
245
+ return hidden_states
246
+ hidden_states = hidden_states[:, :, None, :, :].expand(
247
+ batch, num_key_value_heads, n_rep, slen, head_dim
248
+ )
249
+ return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
250
+
251
+
252
+ class PhiAttention(nn.Module):
253
+ """Multi-headed attention from 'Attention Is All You Need' paper"""
254
+
255
+ def __init__(self, config: PhiConfig, layer_idx: Optional[int] = None):
256
+ super().__init__()
257
+ self.config = config
258
+ self.layer_idx = layer_idx
259
+ if layer_idx is None:
260
+ logger.warning_once(
261
+ f"Instantiating {self.__class__.__name__} without passing `layer_idx` is not recommended and will lead "
262
+ "to errors during the forward call, if caching is used. Please make sure to provide a `layer_idx` "
263
+ "when creating this class."
264
+ )
265
+
266
+ self.attention_dropout = config.attention_dropout
267
+ self.hidden_size = config.hidden_size
268
+ self.num_heads = config.num_attention_heads
269
+ self.head_dim = self.hidden_size // self.num_heads
270
+ self.num_key_value_heads = config.num_key_value_heads
271
+ self.num_key_value_groups = self.num_heads // self.num_key_value_heads
272
+ self.max_position_embeddings = config.max_position_embeddings
273
+ self.rope_theta = config.rope_theta
274
+ self.partial_rotary_factor = config.partial_rotary_factor
275
+ self.is_causal = True
276
+
277
+ if (self.head_dim * self.num_heads) != self.hidden_size:
278
+ raise ValueError(
279
+ f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}"
280
+ f" and `num_heads`: {self.num_heads})."
281
+ )
282
+
283
+ self.Wqkv = nn.Linear(
284
+ self.hidden_size, 3 * self.num_heads * self.head_dim, bias=True
285
+ )
286
+ self.out_proj = nn.Linear(
287
+ self.num_heads * self.head_dim, self.hidden_size, bias=True
288
+ )
289
+
290
+ self.qk_layernorm = config.qk_layernorm
291
+ if self.qk_layernorm:
292
+ self.q_layernorm = nn.LayerNorm(
293
+ config.hidden_size // self.num_heads,
294
+ eps=config.layer_norm_eps,
295
+ elementwise_affine=True,
296
+ )
297
+ self.k_layernorm = nn.LayerNorm(
298
+ config.hidden_size // self.num_heads,
299
+ eps=config.layer_norm_eps,
300
+ elementwise_affine=True,
301
+ )
302
+
303
+ self._init_rope()
304
+
305
+ def _init_rope(self):
306
+ if self.config.rope_scaling is None:
307
+ self.rotary_emb = PhiRotaryEmbedding(
308
+ int(self.partial_rotary_factor * self.head_dim),
309
+ max_position_embeddings=self.max_position_embeddings,
310
+ base=self.rope_theta,
311
+ )
312
+ else:
313
+ scaling_type = self.config.rope_scaling["type"]
314
+ scaling_factor = self.config.rope_scaling["factor"]
315
+ if scaling_type == "linear":
316
+ self.rotary_emb = PhiLinearScalingRotaryEmbedding(
317
+ int(self.partial_rotary_factor * self.head_dim),
318
+ max_position_embeddings=self.max_position_embeddings,
319
+ scaling_factor=scaling_factor,
320
+ base=self.rope_theta,
321
+ )
322
+ elif scaling_type == "dynamic":
323
+ self.rotary_emb = PhiDynamicNTKScalingRotaryEmbedding(
324
+ int(self.partial_rotary_factor * self.head_dim),
325
+ max_position_embeddings=self.max_position_embeddings,
326
+ scaling_factor=scaling_factor,
327
+ base=self.rope_theta,
328
+ )
329
+ else:
330
+ raise ValueError(f"Unknown RoPE scaling type {scaling_type}")
331
+
332
+ def forward(
333
+ self,
334
+ hidden_states: torch.Tensor,
335
+ attention_mask: Optional[torch.Tensor] = None,
336
+ position_ids: Optional[torch.LongTensor] = None,
337
+ past_key_value: Optional[Cache] = None,
338
+ output_attentions: bool = False,
339
+ use_cache: bool = False,
340
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
341
+ bsz, q_len, _ = hidden_states.size()
342
+
343
+ query_states, key_states, value_states = self.Wqkv(hidden_states).chunk(
344
+ 3, dim=-1
345
+ )
346
+
347
+ if self.qk_layernorm:
348
+ query_states = self.q_layernorm(query_states)
349
+ key_states = self.k_layernorm(key_states)
350
+
351
+ query_states = query_states.view(
352
+ bsz, q_len, self.num_heads, self.head_dim
353
+ ).transpose(1, 2)
354
+ key_states = key_states.view(
355
+ bsz, q_len, self.num_key_value_heads, self.head_dim
356
+ ).transpose(1, 2)
357
+ value_states = value_states.view(
358
+ bsz, q_len, self.num_key_value_heads, self.head_dim
359
+ ).transpose(1, 2)
360
+
361
+ kv_seq_len = key_states.shape[-2]
362
+ if past_key_value is not None:
363
+ if self.layer_idx is None:
364
+ raise ValueError(
365
+ f"The cache structure has changed since version v4.36. If you are using {self.__class__.__name__} "
366
+ "for auto-regressive decoding with k/v caching, please make sure to initialize the attention class "
367
+ "with a layer index."
368
+ )
369
+ kv_seq_len += past_key_value.get_usable_length(kv_seq_len, self.layer_idx)
370
+ cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
371
+
372
+ # Partial rotary embedding
373
+ query_rot, query_pass = (
374
+ query_states[..., : self.rotary_emb.dim],
375
+ query_states[..., self.rotary_emb.dim :],
376
+ )
377
+ key_rot, key_pass = (
378
+ key_states[..., : self.rotary_emb.dim],
379
+ key_states[..., self.rotary_emb.dim :],
380
+ )
381
+ # [batch_size, seq_length, num_heads, head_dim // config.partial_rotary_factor]
382
+ query_rot, key_rot = apply_rotary_pos_emb(
383
+ query_rot, key_rot, cos, sin, position_ids
384
+ )
385
+
386
+ # [batch_size, seq_length, num_heads, head_dim]
387
+ query_states = torch.cat((query_rot, query_pass), dim=-1)
388
+ key_states = torch.cat((key_rot, key_pass), dim=-1)
389
+
390
+ if past_key_value is not None:
391
+ cache_kwargs = {
392
+ "sin": sin,
393
+ "cos": cos,
394
+ "partial_rotation_size": self.rotary_emb.dim,
395
+ }
396
+ key_states, value_states = past_key_value.update(
397
+ key_states, value_states, self.layer_idx, cache_kwargs
398
+ )
399
+
400
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
401
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
402
+
403
+ attn_output = torch.nn.functional.scaled_dot_product_attention(
404
+ query_states, key_states, value_states, attn_mask=attention_mask
405
+ )
406
+
407
+ attn_output = attn_output.transpose(1, 2).contiguous()
408
+ attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)
409
+
410
+ attn_output = self.out_proj(attn_output)
411
+
412
+ if not output_attentions:
413
+ attn_weights = None
414
+
415
+ return attn_output, attn_weights, past_key_value
416
+
417
+
418
+ class PhiFlashAttention2(PhiAttention):
419
+ """
420
+ Phi flash attention module. This module inherits from `PhiAttention`, as the weights of the module stay
421
+ untouched. The only required change is in the forward pass, where it needs to correctly call the public API of
422
+ flash attention and deal with padding tokens in case the input contains any of them.
423
+ """
424
+
425
+ # Copied from transformers.models.llama.modeling_llama.LlamaFlashAttention2.__init__
426
+ def __init__(self, *args, **kwargs):
427
+ super().__init__(*args, **kwargs)
428
+
429
+ # TODO: Should be removed once Flash Attention for RoCm is bumped to 2.1.
430
+ # flash_attn<2.1 generates a top-left aligned causal mask, while what is needed here is bottom-right alignment, which was made the default for flash_attn>=2.1. This attribute is used to handle this difference. Reference: https://github.com/Dao-AILab/flash-attention/releases/tag/v2.1.0.
431
+ # Beware that with flash_attn<2.1, using q_seqlen != k_seqlen (except for the case q_seqlen == 1) produces a wrong mask (top-left).
432
+ self._flash_attn_uses_top_left_mask = not is_flash_attn_greater_or_equal_2_10()
433
+
434
+ def forward(
435
+ self,
436
+ hidden_states: torch.Tensor,
437
+ attention_mask: Optional[torch.LongTensor] = None,
438
+ position_ids: Optional[torch.LongTensor] = None,
439
+ past_key_value: Optional[Cache] = None,
440
+ output_attentions: bool = False,
441
+ use_cache: bool = False,
442
+ **kwargs,
443
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
444
+ # PhiFlashAttention2 attention does not support output_attentions
445
+
446
+ output_attentions = False
447
+
448
+ bsz, q_len, _ = hidden_states.size()
449
+
450
+ query_states, key_states, value_states = self.Wqkv(hidden_states).chunk(
451
+ 3, dim=-1
452
+ )
453
+
454
+ if self.qk_layernorm:
455
+ query_states = self.q_layernorm(query_states)
456
+ key_states = self.k_layernorm(key_states)
457
+
458
+ # Flash attention requires the input to have the shape
459
+ # batch_size x seq_length x num_heads x head_dim
460
+ # therefore we just need to keep the original shape
461
+ query_states = query_states.view(
462
+ bsz, q_len, self.num_heads, self.head_dim
463
+ ).transpose(1, 2)
464
+ key_states = key_states.view(
465
+ bsz, q_len, self.num_key_value_heads, self.head_dim
466
+ ).transpose(1, 2)
467
+ value_states = value_states.view(
468
+ bsz, q_len, self.num_key_value_heads, self.head_dim
469
+ ).transpose(1, 2)
470
+
471
+ kv_seq_len = key_states.shape[-2]
472
+ if past_key_value is not None:
473
+ kv_seq_len += past_key_value.get_usable_length(kv_seq_len, self.layer_idx)
474
+ cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
475
+
476
+ # Partial rotary embedding
477
+ query_rot, query_pass = (
478
+ query_states[..., : self.rotary_emb.dim],
479
+ query_states[..., self.rotary_emb.dim :],
480
+ )
481
+ key_rot, key_pass = (
482
+ key_states[..., : self.rotary_emb.dim],
483
+ key_states[..., self.rotary_emb.dim :],
484
+ )
485
+ # [batch_size, seq_length, num_heads, head_dim // config.partial_rotary_factor]
486
+ query_rot, key_rot = apply_rotary_pos_emb(
487
+ query_rot, key_rot, cos, sin, position_ids
488
+ )
489
+
490
+ # [batch_size, seq_length, num_heads, head_dim]
491
+ query_states = torch.cat((query_rot, query_pass), dim=-1)
492
+ key_states = torch.cat((key_rot, key_pass), dim=-1)
493
+
494
+ if past_key_value is not None:
495
+ cache_kwargs = {
496
+ "sin": sin,
497
+ "cos": cos,
498
+ "partial_rotation_size": self.rotary_emb.dim,
499
+ }
500
+ key_states, value_states = past_key_value.update(
501
+ key_states, value_states, self.layer_idx, cache_kwargs
502
+ )
503
+
504
+ # TODO: These transposes are quite inefficient, but Flash Attention requires the layout [batch_size, sequence_length, num_heads, head_dim]. We would need to refactor the KV cache
505
+ # to be able to avoid many of these transpose/reshape/view.
506
+ query_states = query_states.transpose(1, 2)
507
+ key_states = key_states.transpose(1, 2)
508
+ value_states = value_states.transpose(1, 2)
509
+
510
+ attn_dropout = self.attention_dropout if self.training else 0.0
511
+
512
+ # In PEFT, usually we cast the layer norms in float32 for training stability reasons
513
+ # therefore the input hidden states gets silently casted in float32. Hence, we need
514
+ # cast them back in the correct dtype just to be sure everything works as expected.
515
+ # This might slowdown training & inference so it is recommended to not cast the LayerNorms
516
+ # in fp32.
517
+
518
+ if query_states.dtype == torch.float32:
519
+ if torch.is_autocast_enabled():
520
+ target_dtype = torch.get_autocast_gpu_dtype()
521
+ # Handle the case where the model is quantized
522
+ elif hasattr(self.config, "_pre_quantization_dtype"):
523
+ target_dtype = self.config._pre_quantization_dtype
524
+ else:
525
+ target_dtype = self.Wqkv.weight.dtype
526
+
527
+ logger.warning_once(
528
+ f"The input hidden states seem to have been silently cast to float32; this might be related to"
529
+ f" the fact that you have upcast embedding or layer norm layers to float32. We will cast the input back to"
530
+ f" {target_dtype}."
531
+ )
532
+
533
+ query_states = query_states.to(target_dtype)
534
+ key_states = key_states.to(target_dtype)
535
+ value_states = value_states.to(target_dtype)
536
+
537
+ attn_output = self._flash_attention_forward(
538
+ query_states,
539
+ key_states,
540
+ value_states,
541
+ attention_mask,
542
+ q_len,
543
+ dropout=attn_dropout,
544
+ softmax_scale=None,
545
+ )
546
+
547
+ attn_output = attn_output.reshape(bsz, q_len, self.hidden_size).contiguous()
548
+ attn_output = self.out_proj(attn_output)
549
+
550
+ if not output_attentions:
551
+ attn_weights = None
552
+
553
+ return attn_output, attn_weights, past_key_value
554
+
555
+ # Copied from transformers.models.llama.modeling_llama.LlamaFlashAttention2._flash_attention_forward
556
+ def _flash_attention_forward(
557
+ self,
558
+ query_states,
559
+ key_states,
560
+ value_states,
561
+ attention_mask,
562
+ query_length,
563
+ dropout=0.0,
564
+ softmax_scale=None,
565
+ ):
566
+ """
567
+ Calls the forward method of Flash Attention - if the input hidden states contain at least one padding token,
568
+ it first unpads the input, then computes the attention scores, and finally pads the final attention scores.
569
+
570
+ Args:
571
+ query_states (`torch.Tensor`):
572
+ Input query states to be passed to Flash Attention API
573
+ key_states (`torch.Tensor`):
574
+ Input key states to be passed to Flash Attention API
575
+ value_states (`torch.Tensor`):
576
+ Input value states to be passed to Flash Attention API
577
+ attention_mask (`torch.Tensor`):
578
+ The padding mask - corresponds to a tensor of size `(batch_size, seq_len)` where 0 stands for the
579
+ position of padding tokens and 1 for the position of non-padding tokens.
580
+ dropout (`float`, *optional*):
581
+ Attention dropout
582
+ softmax_scale (`float`, *optional*):
583
+ The scaling of QK^T before applying softmax. Default to 1 / sqrt(head_dim)
584
+ """
585
+ if not self._flash_attn_uses_top_left_mask:
586
+ causal = self.is_causal
587
+ else:
588
+ # TODO: Remove the `query_length != 1` check once Flash Attention for RoCm is bumped to 2.1. For details, please see the comment in LlamaFlashAttention2 __init__.
589
+ causal = self.is_causal and query_length != 1
590
+
591
+ # Contains at least one padding token in the sequence
592
+ if attention_mask is not None:
593
+ batch_size = query_states.shape[0]
594
+ (
595
+ query_states,
596
+ key_states,
597
+ value_states,
598
+ indices_q,
599
+ cu_seq_lens,
600
+ max_seq_lens,
601
+ ) = self._upad_input(
602
+ query_states, key_states, value_states, attention_mask, query_length
603
+ )
604
+
605
+ cu_seqlens_q, cu_seqlens_k = cu_seq_lens
606
+ max_seqlen_in_batch_q, max_seqlen_in_batch_k = max_seq_lens
607
+
608
+ attn_output_unpad = flash_attn_varlen_func(
609
+ query_states,
610
+ key_states,
611
+ value_states,
612
+ cu_seqlens_q=cu_seqlens_q,
613
+ cu_seqlens_k=cu_seqlens_k,
614
+ max_seqlen_q=max_seqlen_in_batch_q,
615
+ max_seqlen_k=max_seqlen_in_batch_k,
616
+ dropout_p=dropout,
617
+ softmax_scale=softmax_scale,
618
+ causal=causal,
619
+ )
620
+
621
+ attn_output = pad_input(
622
+ attn_output_unpad, indices_q, batch_size, query_length
623
+ )
624
+ else:
625
+ attn_output = flash_attn_func(
626
+ query_states,
627
+ key_states,
628
+ value_states,
629
+ dropout,
630
+ softmax_scale=softmax_scale,
631
+ causal=causal,
632
+ )
633
+
634
+ return attn_output
635
+
636
+ # Copied from transformers.models.llama.modeling_llama.LlamaFlashAttention2._upad_input
637
+ def _upad_input(
638
+ self, query_layer, key_layer, value_layer, attention_mask, query_length
639
+ ):
640
+ indices_k, cu_seqlens_k, max_seqlen_in_batch_k = _get_unpad_data(attention_mask)
641
+ batch_size, kv_seq_len, num_key_value_heads, head_dim = key_layer.shape
642
+
643
+ key_layer = index_first_axis(
644
+ key_layer.reshape(batch_size * kv_seq_len, num_key_value_heads, head_dim),
645
+ indices_k,
646
+ )
647
+ value_layer = index_first_axis(
648
+ value_layer.reshape(batch_size * kv_seq_len, num_key_value_heads, head_dim),
649
+ indices_k,
650
+ )
651
+ if query_length == kv_seq_len:
652
+ query_layer = index_first_axis(
653
+ query_layer.reshape(batch_size * kv_seq_len, self.num_heads, head_dim),
654
+ indices_k,
655
+ )
656
+ cu_seqlens_q = cu_seqlens_k
657
+ max_seqlen_in_batch_q = max_seqlen_in_batch_k
658
+ indices_q = indices_k
659
+ elif query_length == 1:
660
+ max_seqlen_in_batch_q = 1
661
+ cu_seqlens_q = torch.arange(
662
+ batch_size + 1, dtype=torch.int32, device=query_layer.device
663
+ ) # There is a memcpy here, that is very bad.
664
+ indices_q = cu_seqlens_q[:-1]
665
+ query_layer = query_layer.squeeze(1)
666
+ else:
667
+ # The -q_len: slice assumes left padding.
668
+ attention_mask = attention_mask[:, -query_length:]
669
+ query_layer, indices_q, cu_seqlens_q, max_seqlen_in_batch_q = unpad_input(
670
+ query_layer, attention_mask
671
+ )
672
+
673
+ return (
674
+ query_layer,
675
+ key_layer,
676
+ value_layer,
677
+ indices_q,
678
+ (cu_seqlens_q, cu_seqlens_k),
679
+ (max_seqlen_in_batch_q, max_seqlen_in_batch_k),
680
+ )
681
+
682
+
683
+ PHI_ATTENTION_CLASSES = {
684
+ "eager": PhiAttention,
685
+ "flash_attention_2": PhiFlashAttention2,
686
+ }
687
+
688
+
689
+ class PhiDecoderLayer(nn.Module):
690
+ def __init__(self, config: PhiConfig, layer_idx: int):
691
+ super().__init__()
692
+ self.mixer = PHI_ATTENTION_CLASSES[config._attn_implementation](
693
+ config, layer_idx=layer_idx
694
+ )
695
+ self.mlp = PhiMLP(config)
696
+ self.ln = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
697
+ self.resid_dropout = nn.Dropout(config.resid_pdrop)
698
+
699
+ def forward(
700
+ self,
701
+ hidden_states: torch.Tensor,
702
+ attention_mask: Optional[torch.Tensor] = None,
703
+ position_ids: Optional[torch.LongTensor] = None,
704
+ output_attentions: Optional[bool] = False,
705
+ use_cache: Optional[bool] = False,
706
+ past_key_value: Optional[Tuple[torch.Tensor]] = None,
707
+ ) -> Tuple[
708
+ torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]
709
+ ]:
710
+ """
711
+ Args:
712
+ hidden_states (`torch.FloatTensor`):
713
+ input to the layer of shape `(batch, seq_len, embed_dim)`
714
+ attention_mask (`torch.FloatTensor`, *optional*): attention mask of size
715
+ `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
716
+ position_ids (`torch.LongTensor` of shape `({0})`, *optional*):
717
+ Indices of positions of each input sequence tokens in the position embeddings. Selected in the range
718
+ `[0, config.n_positions - 1]`. [What are position IDs?](../glossary#position-ids)
719
+ output_attentions (`bool`, *optional*):
720
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under
721
+ returned tensors for more detail.
722
+ use_cache (`bool`, *optional*):
723
+ If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
724
+ (see `past_key_values`).
725
+ past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
726
+ """
727
+
728
+ residual = hidden_states
729
+
730
+ hidden_states = self.ln(hidden_states)
731
+
732
+ # Self Attention
733
+ attn_outputs, self_attn_weights, present_key_value = self.mixer(
734
+ hidden_states=hidden_states,
735
+ attention_mask=attention_mask,
736
+ position_ids=position_ids,
737
+ past_key_value=past_key_value,
738
+ output_attentions=output_attentions,
739
+ use_cache=use_cache,
740
+ )
741
+ attn_outputs = self.resid_dropout(attn_outputs)
742
+
743
+ feed_forward_hidden_states = self.resid_dropout(self.mlp(hidden_states))
744
+ hidden_states = attn_outputs + feed_forward_hidden_states + residual
745
+ outputs = (hidden_states,)
746
+
747
+ if output_attentions:
748
+ outputs += (self_attn_weights,)
749
+
750
+ if use_cache:
751
+ outputs += (present_key_value,)
752
+
753
+ return outputs
754
+
755
+
756
+ class PhiPreTrainedModel(PreTrainedModel):
757
+ config_class = PhiConfig
758
+ base_model_prefix = "model"
759
+ supports_gradient_checkpointing = True
760
+ _no_split_modules = ["PhiDecoderLayer"]
761
+ _skip_keys_device_placement = "past_key_values"
762
+ _supports_flash_attn_2 = True
763
+ _supports_cache_class = True
764
+
765
+ def _init_weights(self, module):
766
+ std = self.config.initializer_range
767
+ if isinstance(module, nn.Linear):
768
+ module.weight.data.normal_(mean=0.0, std=std)
769
+ if module.bias is not None:
770
+ module.bias.data.zero_()
771
+ elif isinstance(module, nn.Embedding):
772
+ module.weight.data.normal_(mean=0.0, std=std)
773
+ if module.padding_idx is not None:
774
+ module.weight.data[module.padding_idx].zero_()
775
+
776
+
777
+ class Embedding(nn.Module):
778
+ def __init__(self, config: PhiConfig):
779
+ super().__init__()
780
+ self.wte = nn.Embedding(
781
+ config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id
782
+ )
783
+
784
+ def forward(self, input_ids: torch.LongTensor) -> torch.FloatTensor:
785
+ return self.wte(input_ids)
786
+
787
+
788
+ class PhiModel(PhiPreTrainedModel):
789
+ """
790
+ Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`PhiDecoderLayer`]
791
+
792
+ Args:
793
+ config: PhiConfig
794
+ """
795
+
796
+ def __init__(self, config: PhiConfig):
797
+ super().__init__(config)
798
+ self.padding_idx = config.pad_token_id
799
+ self.vocab_size = config.vocab_size
800
+
801
+ self.embd = Embedding(config)
802
+ self.embed_dropout = nn.Dropout(config.embd_pdrop)
803
+ self.h = nn.ModuleList(
804
+ [
805
+ PhiDecoderLayer(config, layer_idx)
806
+ for layer_idx in range(config.num_hidden_layers)
807
+ ]
808
+ )
809
+ self._use_flash_attention_2 = config._attn_implementation == "flash_attention_2"
810
+
811
+ self.gradient_checkpointing = False
812
+ # Initialize weights and apply final processing
813
+ self.post_init()
814
+
815
+ def get_input_embeddings(self):
816
+ return self.embd.wte
817
+
818
+ def set_input_embeddings(self, value):
819
+ self.embd.wte = value
820
+
821
+ def forward(
822
+ self,
823
+ input_ids: torch.LongTensor = None,
824
+ attention_mask: Optional[torch.Tensor] = None,
825
+ position_ids: Optional[torch.LongTensor] = None,
826
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
827
+ inputs_embeds: Optional[torch.FloatTensor] = None,
828
+ use_cache: Optional[bool] = None,
829
+ output_attentions: Optional[bool] = None,
830
+ output_hidden_states: Optional[bool] = None,
831
+ return_dict: Optional[bool] = None,
832
+ ) -> Union[Tuple, BaseModelOutputWithPast]:
833
+ output_attentions = (
834
+ output_attentions
835
+ if output_attentions is not None
836
+ else self.config.output_attentions
837
+ )
838
+ output_hidden_states = (
839
+ output_hidden_states
840
+ if output_hidden_states is not None
841
+ else self.config.output_hidden_states
842
+ )
843
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
844
+
845
+ return_dict = (
846
+ return_dict if return_dict is not None else self.config.use_return_dict
847
+ )
848
+
849
+ # retrieve input_ids and inputs_embeds
850
+ if input_ids is not None and inputs_embeds is not None:
851
+ raise ValueError(
852
+ "You cannot specify both input_ids and inputs_embeds at the same time"
853
+ )
854
+ elif input_ids is not None:
855
+ batch_size, seq_length = input_ids.shape[:2]
856
+ elif inputs_embeds is not None:
857
+ batch_size, seq_length = inputs_embeds.shape[:2]
858
+ else:
859
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
860
+
861
+ past_key_values_length = 0
862
+
863
+ if self.gradient_checkpointing and self.training:
864
+ if use_cache:
865
+ logger.warning_once(
866
+ "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
867
+ )
868
+ use_cache = False
869
+
870
+ if use_cache:
871
+ use_legacy_cache = not isinstance(past_key_values, Cache)
872
+ if use_legacy_cache:
873
+ past_key_values = DynamicCache.from_legacy_cache(past_key_values)
874
+ past_key_values_length = past_key_values.get_usable_length(seq_length)
875
+
876
+ if position_ids is None:
877
+ device = input_ids.device if input_ids is not None else inputs_embeds.device
878
+ position_ids = torch.arange(
879
+ past_key_values_length,
880
+ seq_length + past_key_values_length,
881
+ dtype=torch.long,
882
+ device=device,
883
+ )
884
+ position_ids = position_ids.unsqueeze(0)
885
+
886
+ if inputs_embeds is None:
887
+ inputs_embeds = self.embd(input_ids)
888
+
889
+ inputs_embeds = self.embed_dropout(inputs_embeds)
890
+
891
+ # Attention mask.
892
+ if self._use_flash_attention_2:
893
+ # 2d mask is passed through the layers
894
+ attention_mask = (
895
+ attention_mask
896
+ if (attention_mask is not None and 0 in attention_mask)
897
+ else None
898
+ )
899
+ else:
900
+ # 4d mask is passed through the layers
901
+ attention_mask = _prepare_4d_causal_attention_mask(
902
+ attention_mask,
903
+ (batch_size, seq_length),
904
+ inputs_embeds,
905
+ past_key_values_length,
906
+ )
907
+
908
+ hidden_states = inputs_embeds
909
+
910
+ # decoder layers
911
+ all_hidden_states = () if output_hidden_states else None
912
+ all_self_attns = () if output_attentions else None
913
+ next_decoder_cache = None
914
+
915
+ for decoder_layer in self.h:
916
+ if output_hidden_states:
917
+ all_hidden_states += (hidden_states,)
918
+
919
+ if self.gradient_checkpointing and self.training:
920
+ layer_outputs = self._gradient_checkpointing_func(
921
+ decoder_layer.__call__,
922
+ hidden_states,
923
+ attention_mask,
924
+ position_ids,
925
+ past_key_values,
926
+ output_attentions,
927
+ )
928
+ else:
929
+ layer_outputs = decoder_layer(
930
+ hidden_states,
931
+ attention_mask=attention_mask,
932
+ position_ids=position_ids,
933
+ past_key_value=past_key_values,
934
+ output_attentions=output_attentions,
935
+ use_cache=use_cache,
936
+ )
937
+
938
+ hidden_states = layer_outputs[0]
939
+
940
+ if use_cache:
941
+ next_decoder_cache = layer_outputs[2 if output_attentions else 1]
942
+
943
+ if output_attentions:
944
+ all_self_attns += (layer_outputs[1],)
945
+
946
+ # add hidden states from the last decoder layer
947
+ if output_hidden_states:
948
+ all_hidden_states += (hidden_states,)
949
+
950
+ next_cache = None
951
+ if use_cache:
952
+ next_cache = (
953
+ next_decoder_cache.to_legacy_cache()
954
+ if use_legacy_cache
955
+ else next_decoder_cache
956
+ )
957
+ if not return_dict:
958
+ return tuple(
959
+ v
960
+ for v in [hidden_states, next_cache, all_hidden_states, all_self_attns]
961
+ if v is not None
962
+ )
963
+ return BaseModelOutputWithPast(
964
+ last_hidden_state=hidden_states,
965
+ past_key_values=next_cache,
966
+ hidden_states=all_hidden_states,
967
+ attentions=all_self_attns,
968
+ )
969
+
970
+
971
+ class CausalLMHead(nn.Module):
972
+ """Causal Language Modeling head. Simplified version."""
973
+
974
+ def __init__(self, config):
975
+ super().__init__()
976
+ self.ln = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
977
+ self.linear = nn.Linear(config.hidden_size, config.vocab_size)
978
+
979
+ def forward(self, hidden_states):
980
+ return self.linear(self.ln(hidden_states))
981
+
982
+
983
+ class PhiForCausalLM(PhiPreTrainedModel):
984
+ _tied_weights_keys = ["lm_head.linear.weight"]
985
+
986
+ # Copied from transformers.models.llama.modeling_llama.LlamaForCausalLM.__init__ with Llama->Phi,bias=False->bias=True
987
+ def __init__(self, config):
988
+ super().__init__(config)
989
+ self.transformer = PhiModel(config)
990
+ self.vocab_size = config.vocab_size
991
+ self.lm_head = CausalLMHead(config)
992
+
993
+ # Initialize weights and apply final processing
994
+ self.post_init()
995
+
996
+ # Copied from transformers.models.llama.modeling_llama.LlamaForCausalLM.get_input_embeddings
997
+ def get_input_embeddings(self):
998
+ return self.transformer.embd.wte
999
+
1000
+ # Copied from transformers.models.llama.modeling_llama.LlamaForCausalLM.set_input_embeddings
1001
+ def set_input_embeddings(self, value):
1002
+ self.transformer.embd.wte = value
1003
+
1004
+ # Copied from transformers.models.llama.modeling_llama.LlamaForCausalLM.get_output_embeddings
1005
+ def get_output_embeddings(self):
1006
+ return self.lm_head.linear
1007
+
1008
+ # Copied from transformers.models.llama.modeling_llama.LlamaForCausalLM.set_output_embeddings
1009
+ def set_output_embeddings(self, new_embeddings):
1010
+ self.lm_head.linear = new_embeddings
1011
+
1012
+ # Copied from transformers.models.llama.modeling_llama.LlamaForCausalLM.set_decoder
1013
+ def set_decoder(self, decoder):
1014
+ self.transformer = decoder
1015
+
1016
+ # Copied from transformers.models.llama.modeling_llama.LlamaForCausalLM.get_decoder
1017
+ def get_decoder(self):
1018
+ return self.transformer
1019
+
1020
+ def forward(
1021
+ self,
1022
+ input_ids: torch.LongTensor = None,
1023
+ attention_mask: Optional[torch.Tensor] = None,
1024
+ position_ids: Optional[torch.LongTensor] = None,
1025
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
1026
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1027
+ labels: Optional[torch.LongTensor] = None,
1028
+ use_cache: Optional[bool] = None,
1029
+ output_attentions: Optional[bool] = None,
1030
+ output_hidden_states: Optional[bool] = None,
1031
+ return_dict: Optional[bool] = None,
1032
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
1033
+ r"""
1034
+ Args:
1035
+             labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
+                 Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
+                 config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
+                 (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
+
+         Returns:
+
+         Example:
+
+         ```python
+         >>> from transformers import AutoTokenizer, PhiForCausalLM
+
+         >>> model = PhiForCausalLM.from_pretrained("microsoft/phi-1")
+         >>> tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1")
+
+         >>> prompt = "This is an example script ."
+         >>> inputs = tokenizer(prompt, return_tensors="pt")
+
+         >>> # Generate
+         >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
+         >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
+         'This is an example script .\n\n\n\nfrom typing import List\n\ndef find_most_common_letter(words: List[str'
+         ```"""
+
+         output_attentions = (
+             output_attentions
+             if output_attentions is not None
+             else self.config.output_attentions
+         )
+         output_hidden_states = (
+             output_hidden_states
+             if output_hidden_states is not None
+             else self.config.output_hidden_states
+         )
+         return_dict = (
+             return_dict if return_dict is not None else self.config.use_return_dict
+         )
+
+         # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
+         outputs = self.transformer(
+             input_ids=input_ids,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_values=past_key_values,
+             inputs_embeds=inputs_embeds,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+         )
+
+         hidden_states = outputs[0]
+         logits = self.lm_head(hidden_states)
+
+         loss = None
+         if labels is not None:
+             # Shift so that tokens < n predict n
+             shift_logits = logits[..., :-1, :].contiguous()
+             shift_labels = labels[..., 1:].contiguous()
+             # Flatten the tokens
+             loss_fct = CrossEntropyLoss()
+             shift_logits = shift_logits.view(-1, self.config.vocab_size)
+             shift_labels = shift_labels.view(-1)
+             # Enable model parallelism
+             shift_labels = shift_labels.to(shift_logits.device)
+             loss = loss_fct(shift_logits, shift_labels)
+
+         if not return_dict:
+             output = (logits,) + outputs[1:]
+             return (loss,) + output if loss is not None else output
+
+         return CausalLMOutputWithPast(
+             loss=loss,
+             logits=logits,
+             past_key_values=outputs.past_key_values,
+             hidden_states=outputs.hidden_states,
+             attentions=outputs.attentions,
+         )
+
+     # Copied from transformers.models.llama.modeling_llama.LlamaForCausalLM.prepare_inputs_for_generation
+     def prepare_inputs_for_generation(
+         self,
+         input_ids,
+         past_key_values=None,
+         attention_mask=None,
+         inputs_embeds=None,
+         **kwargs,
+     ):
+         if past_key_values is not None:
+             if isinstance(past_key_values, Cache):
+                 cache_length = past_key_values.get_seq_length()
+                 past_length = past_key_values.seen_tokens
+                 max_cache_length = past_key_values.get_max_length()
+             else:
+                 cache_length = past_length = past_key_values[0][0].shape[2]
+                 max_cache_length = None
+
+             # Keep only the unprocessed tokens:
+             # 1 - If the length of the attention_mask exceeds the length of input_ids, then we are in a setting where
+             # some of the inputs are exclusively passed as part of the cache (e.g. when passing input_embeds as
+             # input)
+             if (
+                 attention_mask is not None
+                 and attention_mask.shape[1] > input_ids.shape[1]
+             ):
+                 input_ids = input_ids[:, -(attention_mask.shape[1] - past_length) :]
+             # 2 - If the past_length is smaller than input_ids', then input_ids holds all input tokens. We can discard
+             # input_ids based on the past_length.
+             elif past_length < input_ids.shape[1]:
+                 input_ids = input_ids[:, past_length:]
+             # 3 - Otherwise (past_length >= input_ids.shape[1]), let's assume input_ids only has unprocessed tokens.
+
+             # If we are about to go beyond the maximum cache length, we need to crop the input attention mask.
+             if (
+                 max_cache_length is not None
+                 and attention_mask is not None
+                 and cache_length + input_ids.shape[1] > max_cache_length
+             ):
+                 attention_mask = attention_mask[:, -max_cache_length:]
+
+         position_ids = kwargs.get("position_ids", None)
+         if attention_mask is not None and position_ids is None:
+             # create position_ids on the fly for batch generation
+             position_ids = attention_mask.long().cumsum(-1) - 1
+             position_ids.masked_fill_(attention_mask == 0, 1)
+             if past_key_values:
+                 position_ids = position_ids[:, -input_ids.shape[1] :]
+
+         # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
+         if inputs_embeds is not None and past_key_values is None:
+             model_inputs = {"inputs_embeds": inputs_embeds}
+         else:
+             model_inputs = {"input_ids": input_ids}
+
+         model_inputs.update(
+             {
+                 "position_ids": position_ids,
+                 "past_key_values": past_key_values,
+                 "use_cache": kwargs.get("use_cache"),
+                 "attention_mask": attention_mask,
+             }
+         )
+         return model_inputs
+
+     @staticmethod
+     # Copied from transformers.models.llama.modeling_llama.LlamaForCausalLM._reorder_cache
+     def _reorder_cache(past_key_values, beam_idx):
+         reordered_past = ()
+         for layer_past in past_key_values:
+             reordered_past += (
+                 tuple(
+                     past_state.index_select(0, beam_idx.to(past_state.device))
+                     for past_state in layer_past
+                 ),
+             )
+         return reordered_past
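The loss above follows the standard causal-LM convention: the logits at position *t* are scored against the label at position *t + 1*, and positions labelled `-100` are ignored. A minimal, self-contained sketch of that shifting (illustrative shapes only, not part of the uploaded files):

```python
import torch
from torch.nn import CrossEntropyLoss

batch_size, seq_len, vocab_size = 2, 5, 11
logits = torch.randn(batch_size, seq_len, vocab_size)
labels = torch.randint(0, vocab_size, (batch_size, seq_len))
labels[:, 0] = -100  # -100 positions are excluded from the loss

# Tokens < n predict token n: drop the last logit and the first label.
shift_logits = logits[..., :-1, :].contiguous().view(-1, vocab_size)
shift_labels = labels[..., 1:].contiguous().view(-1)
loss = CrossEntropyLoss()(shift_logits, shift_labels)  # ignore_index defaults to -100
print(loss)
```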
moondream.py ADDED
@@ -0,0 +1,175 @@
+ import torch
+ from .vision_encoder import VisionEncoder
+ from .configuration_moondream import MoondreamConfig
+ from transformers import PreTrainedModel
+
+ from .modeling_phi import PhiForCausalLM
+ from .configuration_moondream import PhiConfig
+
+ class Moondream(PreTrainedModel):
+     config_class = MoondreamConfig
+     _supports_flash_attn_2 = True
+
+     def __init__(self, config):
+         super().__init__(config)
+         self.vision_encoder = VisionEncoder(
+             use_flash_attn=config._attn_implementation == "flash_attention_2"
+         )
+
+         if type(config.text_config) == dict:
+             phi_config = PhiConfig(
+                 **config.text_config, attn_implementation=config._attn_implementation
+             )
+         else:
+             phi_config = config.text_config
+         self.text_model = PhiForCausalLM(phi_config)
+
+     @property
+     def device(self):
+         return self.text_model.device
+
+     def encode_image(self, image):
+         with torch.no_grad():
+             return self.vision_encoder(image)
+
+     def input_embeds(self, prompt, image_embeds, tokenizer):
+         def _tokenize(txt):
+             return tokenizer(
+                 txt, return_tensors="pt", add_special_tokens=False
+             ).input_ids.to(self.device)
+
+         text_emb = self.text_model.get_input_embeddings()
+
+         # Add BOS token
+         embeds = []
+         embeds.append(
+             text_emb((torch.tensor([[tokenizer.bos_token_id]], device=self.device)))
+         )
+
+         if "<image>" not in prompt:
+             embeds.append(text_emb(_tokenize(prompt)))
+         else:
+             assert prompt.count("<image>") == 1
+             before, after = prompt.split("<image>")
+             if len(before) > 0:
+                 embeds.append(text_emb(_tokenize(before)))
+             embeds.append(image_embeds.to(self.device))
+             if len(after) > 0:
+                 embeds.append(text_emb(_tokenize(after)))
+
+         return torch.cat(embeds, dim=1)
+
+     def generate(
+         self,
+         image_embeds,
+         prompt,
+         tokenizer,
+         max_new_tokens=128,
+         **kwargs,
+     ):
+         generate_config = {
+             "eos_token_id": tokenizer.eos_token_id,
+             "bos_token_id": tokenizer.bos_token_id,
+             "pad_token_id": tokenizer.bos_token_id,
+             "max_new_tokens": max_new_tokens,
+             **kwargs,
+         }
+
+         with torch.no_grad():
+             inputs_embeds = self.input_embeds(prompt, image_embeds, tokenizer)
+             output_ids = self.text_model.generate(
+                 inputs_embeds=inputs_embeds, **generate_config
+             )
+
+         return tokenizer.batch_decode(output_ids, skip_special_tokens=True)
+
+     def answer_question(
+         self,
+         image_embeds,
+         question,
+         tokenizer,
+         chat_history="",
+         result_queue=None,
+         **kwargs,
+     ):
+         prompt = f"<image>\n\n{chat_history}Question: {question}\n\nAnswer:"
+         answer = self.generate(
+             image_embeds,
+             prompt,
+             tokenizer=tokenizer,
+             max_new_tokens=512,
+             **kwargs,
+         )[0]
+         cleaned_answer = answer.strip()
+
+         # Use the result_queue to pass the result if it is provided
+         if result_queue:
+             result_queue.put(cleaned_answer)
+         else:
+             return cleaned_answer
+
+     def batch_answer(
+         self,
+         images,
+         prompts,
+         tokenizer,
+         **kwargs,
+     ):
+         image_embeds = self.encode_image(images)
+
+         templated_prompts = [
+             f"<image>\n\nQuestion: {prompt}\n\nAnswer:" for prompt in prompts
+         ]
+         prompt_embs = [
+             self.input_embeds(prompt, image_embed.unsqueeze(0), tokenizer)[0]
+             for prompt, image_embed in zip(templated_prompts, image_embeds)
+         ]
+
+         bos_emb = prompt_embs[0][0]
+         max_len = max([p.shape[0] for p in prompt_embs])
+
+         inputs_embeds = torch.cat(
+             [
+                 torch.cat([bos_emb.repeat(max_len - p.shape[0], 1), p]).unsqueeze(0)
+                 for p in prompt_embs
+             ],
+             dim=0,
+         )
+         attention_mask = torch.cat(
+             [
+                 torch.cat(
+                     [
+                         torch.zeros(
+                             1,
+                             max_len - p.shape[0],
+                             device=self.device,
+                             dtype=torch.long,
+                         ),
+                         torch.ones(1, p.shape[0], device=self.device, dtype=torch.long),
+                     ],
+                     dim=1,
+                 )
+                 for p in prompt_embs
+             ],
+             dim=0,
+         )
+
+         generate_config = {
+             "eos_token_id": tokenizer.eos_token_id,
+             "bos_token_id": tokenizer.bos_token_id,
+             "pad_token_id": tokenizer.bos_token_id,
+             "max_new_tokens": 512,
+             **kwargs,
+         }
+
+         with torch.no_grad():
+             output_ids = self.text_model.generate(
+                 inputs_embeds=inputs_embeds,
+                 attention_mask=attention_mask,
+                 **generate_config,
+             )
+
+         return [
+             x.strip()
+             for x in tokenizer.batch_decode(output_ids, skip_special_tokens=True)
+         ]
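Together with `vision_encoder.py` below, this class exposes `encode_image` and `answer_question` as the main entry points. A minimal usage sketch, assuming the uploaded config registers `Moondream` under `AutoModelForCausalLM` via `auto_map` (as in upstream Moondream releases) and that remote code is trusted; the repo id and image path are placeholders:

```python
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/moondream"  # placeholder repo id for this upload
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

image = Image.open("example.jpg")          # placeholder image
image_embeds = model.encode_image(image)   # VisionEncoder forward pass, no grad
answer = model.answer_question(image_embeds, "What is in this picture?", tokenizer)
print(answer)
```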
vision_encoder.py ADDED
@@ -0,0 +1,325 @@
+ from typing import Union
+
+ import PIL.Image
+ import torch
+ import torch.nn.functional as F
+ from torch import nn
+ from einops import rearrange
+ import PIL
+ from torchvision.transforms.v2 import (
+     Compose,
+     Resize,
+     InterpolationMode,
+     ToImage,
+     ToDtype,
+     Normalize,
+ )
+ from transformers.utils import is_flash_attn_2_available
+
+ try:
+     if is_flash_attn_2_available():
+         from flash_attn.modules.mha import FlashSelfAttention
+     else:
+         FlashSelfAttention = None
+ except ImportError:
+     FlashSelfAttention = None
+
+
+ class Attention(nn.Module):
+
+     def __init__(self, dim, num_heads=16, use_flash_attn=False):
+         super().__init__()
+         assert dim % num_heads == 0, "dim should be divisible by num_heads"
+
+         self.num_heads = num_heads
+         self.head_dim = dim // num_heads
+
+         self.qkv = nn.Linear(dim, dim * 3)
+         self.proj = nn.Linear(dim, dim)
+
+         if use_flash_attn and FlashSelfAttention is not None:
+             self.flash_attn = FlashSelfAttention()
+         else:
+             self.flash_attn = None
+
+         torch.nn.init.kaiming_normal_(
+             self.qkv.weight, mode="fan_in", nonlinearity="relu"
+         )
+         torch.nn.init.kaiming_normal_(
+             self.proj.weight, mode="fan_in", nonlinearity="relu"
+         )
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         if self.flash_attn is not None:
+             qkv = self.qkv(x)
+             qkv = rearrange(
+                 qkv, "... (three h d) -> ... three h d", three=3, h=self.num_heads
+             )
+             attn_output = self.flash_attn(qkv)
+             output = rearrange(attn_output, "... h d -> ... (h d)")
+             output = self.proj(output)
+             return output
+         else:
+             B, N, C = x.shape
+             qkv = (
+                 self.qkv(x)
+                 .reshape(B, N, 3, self.num_heads, self.head_dim)
+                 .permute(2, 0, 3, 1, 4)
+             )
+             q, k, v = qkv.unbind(0)
+
+             x = F.scaled_dot_product_attention(q, k, v)
+
+             x = x.transpose(1, 2).reshape(B, N, C)
+             x = self.proj(x)
+             return x
+
+
+ class VitBlock(nn.Module):
+
+     def __init__(self, embed_dim, use_flash_attn=False):
+         super().__init__()
+         self.attn = Attention(embed_dim, use_flash_attn=use_flash_attn)
+         self.mlp = MLP(embed_dim, 4304)
+         self.norm1 = nn.LayerNorm(embed_dim)
+         self.norm2 = nn.LayerNorm(embed_dim)
+
+     def forward(self, x):
+         x = x + self.attn(self.norm1(x))
+         x = x + self.mlp(self.norm2(x))
+         return x
+
+
+ class VisionTransformer(nn.Module):
+
+     def __init__(self, use_flash_attn=False):
+         super().__init__()
+
+         embed_len = 729
+         embed_dim = 1152
+
+         self.patch_embed = LinearPatchEmbedding()
+         self.pos_embed = nn.Parameter(torch.randn(1, embed_len, embed_dim) * 0.02)
+         self.blocks = nn.Sequential(
+             *[VitBlock(embed_dim, use_flash_attn=use_flash_attn) for _ in range(27)]
+         )
+         self.norm = nn.LayerNorm(embed_dim)
+
+     def forward(self, x):
+         x = self.patch_embed(x)
+         x = x + self.pos_embed
+         for block in self.blocks:
+             x = block(x)
+         return self.norm(x)
+
+
+ class EncoderWrapper(nn.Module):
+
+     def __init__(self, use_flash_attn=False):
+         super().__init__()
+         self.model = nn.ModuleDict({"visual": VisionTransformer(use_flash_attn)})
+
+     def forward(self, x):
+         return self.model["visual"](x)
+
+
+ class LinearPatchEmbedding(nn.Module):
+
+     def __init__(self):
+         super().__init__()
+         self.linear = nn.Linear(588, 1152)
+
+     def forward(self, x):
+         b, c, hp1, wp2 = x.shape
+         p1, p2 = 14, 14
+         h, w = hp1 // p1, wp2 // p2
+         x = x.reshape(b, c, h, p1, w, p2)
+         x = x.permute(0, 2, 4, 1, 3, 5)
+         x = x.reshape(b, h * w, c * p1 * p2)
+
+         return self.linear(x)
+
+
+ class MLP(nn.Module):
+     def __init__(
+         self,
+         in_features: int,
+         hidden_features: int = None,
+         out_features: int = None,
+     ) -> None:
+         super().__init__()
+         out_features = out_features or in_features
+         hidden_features = hidden_features or in_features
+         self.fc1 = nn.Linear(in_features, hidden_features)
+         self.act = nn.GELU(approximate="tanh")
+         self.fc2 = nn.Linear(hidden_features, out_features)
+
+         torch.nn.init.kaiming_normal_(
+             self.fc1.weight, mode="fan_in", nonlinearity="relu"
+         )
+         torch.nn.init.kaiming_normal_(
+             self.fc2.weight, mode="fan_in", nonlinearity="relu"
+         )
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         x = self.fc1(x)
+         x = self.act(x)
+         x = self.fc2(x)
+         return x
+
+
+ class VisionProjection(nn.Module):
+     def __init__(self):
+         super().__init__()
+
+         image_embedding_dim = 1152
+         model_dim = 2048
+         hidden_dim = model_dim * 4
+
+         self.mlp = MLP(image_embedding_dim * 2, hidden_dim, model_dim)
+
+     @property
+     def device(self):
+         return self.mlp.fc1.weight.device
+
+     def forward(self, x):
+         return self.mlp(x)
+
+
+ def create_patches(image, patch_size=(378, 378)):
+     assert image.dim() == 3, "Image must be in CHW format"
+
+     _, height, width = image.shape  # Channels, Height, Width
+     patch_height, patch_width = patch_size
+
+     if height == patch_height and width == patch_width:
+         return []
+
+     # Iterate over the image and create patches
+     patches = []
+     for i in range(0, height, patch_height):
+         row_patches = []
+         for j in range(0, width, patch_width):
+             patch = image[:, i : i + patch_height, j : j + patch_width]
+             row_patches.append(patch)
+         patches.append(torch.stack(row_patches))
+     return patches
+
+
+ class VisionEncoder(nn.Module):
+
+     def __init__(self, use_flash_attn=False):
+         super().__init__()
+
+         self.encoder = EncoderWrapper(use_flash_attn)
+         self.projection = VisionProjection()
+         self.supported_sizes = [(378, 378), (378, 756), (756, 378), (756, 756)]
+
+     @property
+     def device(self):
+         return self.projection.mlp.fc1.weight.device
+
+     @property
+     def dtype(self):
+         return self.projection.mlp.fc1.weight.dtype
+
+     def preprocess(self, image: PIL.Image.Image):
+         width, height = image.size
+         max_dim = max(width, height)
+         if max_dim < 512:
+             im_size = (378, 378)
+         else:
+             aspect_ratio = width / height
+             im_size = min(
+                 self.supported_sizes,
+                 key=lambda size: (
+                     abs((size[1] / size[0]) - aspect_ratio),
+                     abs(size[0] - width) + abs(size[1] - height),
+                 ),
+             )
+
+         return Compose(
+             [
+                 Resize(size=im_size, interpolation=InterpolationMode.BICUBIC),
+                 ToImage(),
+                 ToDtype(torch.float32, scale=True),
+                 Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+             ]
+         )(image)
+
+     def forward(
+         self, images: Union[PIL.Image.Image, list[PIL.Image.Image], torch.Tensor]
+     ) -> torch.Tensor:
+         im_list = None
+         if isinstance(images, torch.Tensor):
+             # Input must have dimensions (B, C, H, W)
+             assert (
+                 len(images.shape) == 4
+             ), "Tensor input must have dimensions (B, C, H, W)"
+             im_list = list(images)
+         elif isinstance(images, PIL.Image.Image):
+             im_list = [images]
+         elif isinstance(images, list):
+             im_list = images
+         else:
+             raise ValueError(
+                 "Input must be a PIL image, list of PIL images, or a tensor"
+             )
+
+         # Preprocess unless the images are already tensors (indicating that
+         # they have already been preprocessed)
+         if not isinstance(im_list[0], torch.Tensor):
+             im_list = [self.preprocess(im.convert("RGB")) for im in im_list]
+
+         patches = [create_patches(im) for im in im_list]
+         flat_patches = [patch for image_patches in patches for patch in image_patches]
+
+         # Images may be variable size, and need to be resized to a common size after
+         # creating patches.
+         resized_images = [
+             F.interpolate(im.unsqueeze(0), size=(378, 378), mode="bilinear")
+             for im in im_list
+         ]
+
+         combined_images = torch.cat([*resized_images, *flat_patches], dim=0)
+         combined_images = combined_images.to(self.device, dtype=self.dtype)
+
+         combined_features = self.encoder(combined_images)
+
+         full_img_features = combined_features[: len(im_list)]
+         patch_features = (
+             combined_features[len(im_list) :].transpose(1, 2).view(-1, 1152, 27, 27)
+         )
+
+         # Reshape patch features back to their original structure
+         reshaped_patch_features = []
+         patch_idx = 0
+         for i, patch_set in enumerate(patches):
+             if len(patch_set) == 0:
+                 reshaped_patch_features.append(
+                     full_img_features[i].transpose(0, 1).view(1152, 27, 27)
+                 )
+             else:
+                 sample_features = []
+                 for row_patches in patch_set:
+                     row_len = len(row_patches)
+                     row_features = patch_features[
+                         patch_idx : patch_idx + row_len
+                     ]  # row_len, T, C
+                     row_features = torch.cat(
+                         list(row_features), dim=2
+                     )  # T, C * row_len
+                     patch_idx += row_len
+                     sample_features.append(row_features)
+                 sample_features = torch.cat(sample_features, dim=1)
+                 sample_features = F.interpolate(
+                     sample_features.unsqueeze(0), size=(27, 27), mode="bilinear"
+                 ).squeeze(0)
+                 reshaped_patch_features.append(sample_features)
+         reshaped_patch_features = (
+             torch.stack(reshaped_patch_features).view(-1, 1152, 729).transpose(1, 2)
+         )
+
+         final_features = torch.cat([full_img_features, reshaped_patch_features], dim=2)
+
+         return self.projection(final_features)
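The encoder operates on 378×378 crops: `LinearPatchEmbedding` folds each crop into a 27×27 grid of 14×14 patches (729 tokens of dimension 3·14·14 = 588, projected to 1152), larger supported sizes are tiled into 378×378 patches by `create_patches`, and `VisionProjection` maps the concatenated global and patch features (2·1152 = 2304) into the 2048-dimensional text space. A small shape check of the two lightweight pieces (assumes this file is importable as `vision_encoder` on the Python path; values follow from the constants above):

```python
import torch
from vision_encoder import LinearPatchEmbedding, create_patches

# 378 = 27 * 14, so one crop becomes 27 * 27 = 729 tokens of dim 588 -> 1152.
crop = torch.rand(1, 3, 378, 378)
print(LinearPatchEmbedding()(crop).shape)  # torch.Size([1, 729, 1152])

# Larger supported sizes are tiled into 378x378 patches row by row; an exact
# 378x378 image returns no extra patches (only the downscaled global view is used).
tall = torch.rand(3, 756, 378)
print(len(create_patches(tall)))      # 2 patch rows
print(create_patches(tall)[0].shape)  # torch.Size([1, 3, 378, 378])
```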