Update README.md

**README.md** (changed)

```diff
@@ -143,8 +143,9 @@ For instruction training, we first trained the model with Supervised Fine-tuning
 </table>
 
 ## Interfacing with the Instruct Model
 
-Model weights were converted
-
+Model weights were converted from the original Mamba2 implementation to be Hugging Face compatible.
+Due to the lack of official support for Mamba2 attention layers in Hugging Face Transformers, custom modeling files are included.
+The attention layer implementation is based on the work from Pull Request #32027 in the Hugging Face Transformers repository: [https://github.com/huggingface/transformers/pull/32027](https://github.com/huggingface/transformers/pull/32027)
 
 > [!IMPORTANT]
 > To ensure optimal performance, please use the following template when interacting with the model:
```
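Because the checkpoint ships its own modeling files rather than relying on an architecture natively supported by Transformers, loading it requires passing `trust_remote_code=True` so the library can import the bundled Mamba2 attention code. A minimal sketch, assuming the standard `AutoModel` loading path; `repo_id` is a placeholder, not the actual repository name:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer


def load_instruct_model(repo_id: str):
    """Load a checkpoint that bundles custom modeling files.

    trust_remote_code=True tells Transformers to execute the custom
    Mamba2 attention implementation shipped alongside the weights,
    instead of requiring native support for the architecture.
    The repo_id argument is a placeholder for the real model repo.
    """
    tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)
    return tokenizer, model
```

Note that `trust_remote_code=True` runs code from the model repository, so it should only be used with checkpoints from a trusted source.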