---
license: apache-2.0
language:
- uz
- en
base_model:
- mistralai/Mistral-Nemo-Instruct-2407
library_name: transformers
tags:
- text-generation-inference
- summarization
- translation
- question-answering
datasets:
- tahrirchi/uz-crawl
- allenai/c4
- MLDataScientist/Wikipedia-uzbek-2024-05-01
- yahma/alpaca-cleaned
- behbudiy/alpaca-cleaned-uz
- behbudiy/translation-instruction
metrics:
- bleu
- comet
- accuracy
pipeline_tag: text-generation
---

### Model Description

The Mistral-Nemo-Instruct-Uz model has been continually pre-trained and instruction-tuned on a mix of publicly available and synthetically constructed Uzbek and English data, preserving its original knowledge while enhancing its capabilities. The model is designed to support a variety of natural language processing tasks in Uzbek, such as machine translation, summarization, and dialogue systems, ensuring robust performance across these applications.
For performance metrics compared to the base model, see [this post](https://www.linkedin.com/feed/update/urn:li:activity:7241389815559008256/).

- **Developed by:**
  - [Eldor Fozilov](https://www.linkedin.com/in/eldor-fozilov/)
  - [Azimjon Urinov](https://azimjonn.github.io/)
  - [Khurshid Juraev](https://kjuraev.com/)

## Usage

The model can be used with the following frameworks:

- [`mistral_inference`](https://github.com/mistralai/mistral-inference): See [here](https://huggingface.co/behbudiy/Mistral-Nemo-Instruct-Uz#mistral-inference)

### Mistral Inference

#### Install

It is recommended to use `behbudiy/Mistral-Nemo-Instruct-Uz` with [mistral-inference](https://github.com/mistralai/mistral-inference). For Hugging Face `transformers` code snippets, please keep scrolling.

```
pip install mistral_inference
```

#### Download

```py
from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', 'Nemo-Instruct-Uz')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="behbudiy/Mistral-Nemo-Instruct-Uz", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)
```

#### Chat

After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using:

```
mistral-chat $HOME/mistral_models/Nemo-Instruct-Uz --instruct --max_tokens 256 --temperature 0.35
```

*E.g.* Try out something like:
```
O'zbek tilida kichik bir she'r yozib bera olasanmi?
```

#### Instruction Following

```py
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
model = Transformer.from_folder(mistral_models_path)

prompt = "O'zbek tilida kichik bir she'r yozib bera olasanmi?"

completion_request = ChatCompletionRequest(messages=[UserMessage(content=prompt)])

tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.35, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])

print(result)
```
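
### Transformers

The repository also lists `transformers` as its library, so the model can be loaded with the standard Hugging Face API. The snippet below is a minimal sketch of chat-style generation with `AutoModelForCausalLM`; the dtype, device placement, and sampling settings are illustrative assumptions chosen to mirror the `mistral-chat` example above, not settings confirmed by the authors.

```py
# Minimal transformers sketch (assumed settings: bfloat16, device_map="auto",
# temperature 0.35 to mirror the mistral-chat example above).
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "behbudiy/Mistral-Nemo-Instruct-Uz"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "O'zbek tilida kichik bir she'r yozib bera olasanmi?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.35)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```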

## Information on Evaluation Method

To evaluate the translation task, we used the FLORES+ Uz-En / En-Uz datasets, merging the dev and test sets to create a larger evaluation set for each of the Uz-En and En-Uz directions.
We used the following prompt for one-shot Uz-En evaluation of both the base model and the Uzbek-optimized model (for the En-Uz evaluation, we swapped the positions of the words "English" and "Uzbek").

```python
prompt = f'''You are a professional Uzbek-English translator. Your task is to accurately translate the given Uzbek text into English.

  Instructions:
  1. Translate the text from Uzbek to English.
  2. Maintain the original meaning and tone.
  3. Use appropriate English grammar and vocabulary.
  4. If you encounter an ambiguous or unfamiliar word, provide the most likely translation based on context.
  5. Output only the English translation, without any additional comments.

  Example:
  Uzbek: "Bugun ob-havo juda yaxshi, quyosh charaqlab turibdi."
  English: "The weather is very nice today, the sun is shining brightly."

  Now, please translate the following Uzbek text into English:
  "{sentence}"
    '''
```
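
For reference, the BLEU and COMET numbers listed in the metrics can be computed with `sacrebleu` and Unbabel's `comet` package. The sketch below is a minimal, assumed scoring loop: `sources`, `hypotheses`, and `references` are hypothetical parallel lists of strings built from the merged FLORES+ split, and the `Unbabel/wmt22-comet-da` checkpoint is one common choice rather than the authors' confirmed setup.

```python
# Hedged scoring sketch: `sources`, `hypotheses`, `references` are assumed
# parallel lists of strings from the merged FLORES+ dev+test split.
import sacrebleu
from comet import download_model, load_from_checkpoint

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU: {bleu.score:.2f}")

comet_model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
comet_inputs = [{"src": s, "mt": h, "ref": r}
                for s, h, r in zip(sources, hypotheses, references)]
comet_out = comet_model.predict(comet_inputs, batch_size=16, gpus=1)
print(f"COMET: {comet_out.system_score:.4f}")
```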

To assess the model's ability in Uzbek sentiment analysis, we used the **risqaliyevds/uzbek-sentiment-analysis** dataset, for which we created binary labels (0: Negative, 1: Positive) using the GPT-4o API (refer to the **behbudiy/uzbek-sentiment-analysis** dataset).
We used the following prompt for the evaluation:

```python
prompt = f'''Given the following text, determine the sentiment as either 'Positive' or 'Negative.' Respond with only the word 'Positive' or 'Negative' without any additional text or explanation.

Text: {text}
'''
```
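
A minimal, assumed sketch for scoring the sentiment predictions: model responses are mapped to the binary labels and compared with scikit-learn's accuracy metric. The parsing rule and the variable names `responses` and `labels` are assumptions for illustration.

```python
# Map 'Positive'/'Negative' responses to 1/0 and compute accuracy;
# `responses` and `labels` are assumed parallel lists (0: Negative, 1: Positive).
from sklearn.metrics import accuracy_score

preds = [1 if r.strip().lower().startswith("positive") else 0 for r in responses]
print("Accuracy:", accuracy_score(labels, preds))
```
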
For Uzbek news classification, we used the **risqaliyevds/uzbek-zero-shot-classification** dataset and asked the model to predict the category of each news article using the following prompt:

```python
prompt = f'''Classify the given Uzbek news article into one of the following categories. Provide only the category number as the answer.

Categories:
0 - Politics (Siyosat)
1 - Economy (Iqtisodiyot)
2 - Technology (Texnologiya)
3 - Sports (Sport)
4 - Culture (Madaniyat)
5 - Health (Salomatlik)
6 - Family and Society (Oila va Jamiyat)
7 - Education (Ta'lim)
8 - Ecology (Ekologiya)
9 - Foreign News (Xorijiy Yangiliklar)

Now classify this article:
"{text}"

Answer (number only):
'''
```
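
Analogously, a hedged sketch of parsing the predicted category number from each response and scoring accuracy; the regular expression and the variable names `responses` and `labels` are assumptions.

```python
# Extract the first number in each response as the predicted category id;
# `responses` and `labels` are assumed parallel lists.
import re
from sklearn.metrics import accuracy_score

preds = [int(m.group()) if (m := re.search(r"\d+", r)) else -1 for r in responses]
print("Accuracy:", accuracy_score(labels, preds))
```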

## MMLU
To evaluate on MMLU, we used [this script](https://github.com/FranxYao/chain-of-thought-hub/blob/461e2d551f3f12d54caee75fa1e915fdbc3e9d12/MMLU/run_mmlu_open_source.py).

## More
For more details and examples, refer to the base model: [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)