Update README.md
README.md
---
license: apache-2.0
---

# Model Card for Mistral-7B-Instruct-v0.1-8bit

Mistral-7B-Instruct-v0.1-8bit is an 8-bit quantized version of [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1), saved with torch_dtype=torch.float16: the original model was simply loaded in 8-bit and pushed to this repository.
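
For reference, the sketch below shows one way such an 8-bit copy can be produced with the bitsandbytes integration in transformers. The exact commands used are not shown in the card, so treat the configuration as an assumption:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load the original FP16 checkpoint in 8-bit via bitsandbytes.
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.1",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# Push the quantized checkpoint to the Hub under the new repo name.
model.push_to_hub("LsTam/Mistral-7B-Instruct-v0.1-8bit")
```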

To use it:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tok_name = "mistralai/Mistral-7B-Instruct-v0.1"
model_name = "LsTam/Mistral-7B-Instruct-v0.1-8bit"
tokenizer = AutoTokenizer.from_pretrained(tok_name)

# Loading from the 8-bit repo picks up the stored quantization config
# automatically (requires bitsandbytes); device_map="auto" is an assumed default.
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
```
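
The continuation below is an illustrative sketch, not part of the original card: it formats a user message with the tokenizer's built-in chat template and generates a reply (the prompt and generation settings are assumptions).

```python
# Hypothetical usage: build a chat-formatted prompt and generate a response.
messages = [{"role": "user", "content": "Explain 8-bit quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```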