Update README.md
## Interfacing with the Instruct Model

Model weights were converted from the original Mamba2 implementation to be Hugging Face compatible. <br>
Due to the lack of official support for Mamba2 attention layers in Hugging Face Transformers, custom modeling files are included. <br>

The attention layer implementation in the modeling files is based on Pull Request #32027 in the Hugging Face Transformers repository: [https://github.com/huggingface/transformers/pull/32027](https://github.com/huggingface/transformers/pull/32027)
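Because the custom modeling files ship with the checkpoint, loading the model through Transformers requires `trust_remote_code=True` so that the bundled code is used instead of the library's built-in classes. A minimal loading sketch is below; the repository id is a hypothetical placeholder, not the actual model repo.

```python
# Minimal loading sketch for a checkpoint with custom modeling files.
# NOTE: the repo id below is a hypothetical placeholder -- substitute
# the real model repository name.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-org/mamba2-instruct"  # hypothetical placeholder

# trust_remote_code=True tells Transformers to execute the custom
# modeling files included in the repository.
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

inputs = tokenizer("Hello!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since the code is downloaded and executed locally, it is worth reviewing the modeling files in the repository before enabling `trust_remote_code`.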
> [!IMPORTANT]
> To ensure optimal performance, please use the following template when interacting with the model: