---
license: apache-2.0
language:
- uz
- en
base_model:
- mistralai/Mistral-Nemo-Instruct-2407
library_name: transformers
tags:
- text-generation-inference
- summarization
- translation
- question-answering
datasets:
- tahrirchi/uz-crawl
- allenai/c4
- yahma/alpaca-cleaned
- MLDataScientist/Wikipedia-uzbek-2024-05-01
metrics:
- bleu
- comet
- accuracy
pipeline_tag: text-generation
---

### Model Description

The Mistral-Nemo-Instruct-Uz model has been continually pre-trained and instruction-tuned on a mix of publicly available and self-constructed Uzbek and English data, preserving its original knowledge while enhancing its capabilities. It is designed to support a variety of natural language processing tasks in Uzbek, such as machine translation, summarization, and dialogue systems, with robust performance across these applications.

For more details on the model's construction and performance metrics, see [this post](#).

**Developed by:**
- [Eldor Fozilov](https://www.linkedin.com/in/eldor-fozilov/)
- [Azimjon Urinov](https://azimjonn.github.io/)
- [Khurshid Juraev](https://kjuraev.com/)
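
The metadata above lists BLEU among the evaluation metrics. As a rough, self-contained illustration of what n-gram-overlap scoring measures (a simplified sketch of our own, not the evaluation used for this model; real evaluations should use a standard implementation such as sacreBLEU):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def simple_bleu(hypothesis, reference, max_n=4):
    """Very simplified single-sentence BLEU: clipped n-gram precision
    (n = 1..max_n) combined by geometric mean, with a brevity penalty."""
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts, ref_counts = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum((hyp_counts & ref_counts).values())  # clipped matches
        total = max(sum(hyp_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: punish hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * geo_mean
```

A perfect match scores 1.0 and fully disjoint sentences score 0.0; production metrics add smoothing and corpus-level aggregation that this sketch omits.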

## Usage

The model can be used with three different frameworks:

- [`mistral_inference`](https://github.com/mistralai/mistral-inference): See [here](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407#mistral-inference)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
- [`NeMo`](https://github.com/NVIDIA/NeMo): See [nvidia/Mistral-NeMo-12B-Instruct](https://huggingface.co/nvidia/Mistral-NeMo-12B-Instruct)

### Mistral Inference

#### Install

It is recommended to use `behbudiy/Mistral-Nemo-Instruct-Uz` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF transformers code snippets, please keep scrolling.

```
pip install mistral_inference
```

#### Download

```py
from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', 'Nemo-Instruct-Uz')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="behbudiy/Mistral-Nemo-Instruct-Uz", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)
```
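
As a small convenience (this helper is our own, not part of `mistral_inference`), you can sanity-check that the snapshot contains the three files requested above before launching the chat CLI:

```python
from pathlib import Path

# The three files requested via allow_patterns above.
REQUIRED_FILES = ("params.json", "consolidated.safetensors", "tekken.json")

def missing_files(model_dir):
    """Return the names of required files not yet present in model_dir."""
    model_dir = Path(model_dir)
    return [name for name in REQUIRED_FILES if not (model_dir / name).exists()]
```

Once `missing_files(mistral_models_path)` returns an empty list, the download is complete and the commands below should work.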

#### Chat

After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using:

```
mistral-chat $HOME/mistral_models/Nemo-Instruct-Uz --instruct --max_tokens 256 --temperature 0.35
```

*E.g.* Try out something like:
```
How expensive would it be to ask a window cleaner to clean all windows in Paris. Make a reasonable guess in US Dollar.
```

#### Instruction following

```py
from pathlib import Path

from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

# Path used in the Download step above.
mistral_models_path = Path.home().joinpath('mistral_models', 'Nemo-Instruct-Uz')

tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
model = Transformer.from_folder(mistral_models_path)

prompt = "How expensive would it be to ask a window cleaner to clean all windows in Paris. Make a reasonable guess in US Dollar."

completion_request = ChatCompletionRequest(messages=[UserMessage(content=prompt)])

tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.35, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])

print(result)
```

### Transformers

> [!IMPORTANT]
> NOTE: Until a new release has been made, you need to install transformers from source:
> ```sh
> pip install git+https://github.com/huggingface/transformers.git
> ```

If you want to use Hugging Face `transformers` to generate text, you can do something like this.

```py
from transformers import pipeline

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
chatbot = pipeline("text-generation", model="behbudiy/Mistral-Nemo-Instruct-Uz", max_new_tokens=128)
chatbot(messages)
```
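
Since the card highlights machine translation among the supported tasks, here is one way to phrase a translation request in the `messages` format used above (the prompt wording is our own illustration, not an official template):

```python
def build_translation_messages(text, source="English", target="Uzbek"):
    """Build a chat `messages` list asking the model for a translation.

    The instruction wording is illustrative; adjust it to taste.
    """
    instruction = (
        f"Translate the following {source} text into {target}. "
        f"Return only the translation.\n\n{text}"
    )
    return [{"role": "user", "content": instruction}]
```

The result can be passed directly to the pipeline above, e.g. `chatbot(build_translation_messages("Good morning!"))`.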

## More

For more details and examples, refer to the base model below:
https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407