---
license: apache-2.0
language:
- uz
- en
base_model:
- mistralai/Mistral-Nemo-Instruct-2407
library_name: transformers
tags:
- text-generation-inference
- summarization
- translation
- question-answering
datasets:
- tahrirchi/uz-crawl
- allenai/c4
- yahma/alpaca-cleaned
- MLDataScientist/Wikipedia-uzbek-2024-05-01
metrics:
- bleu
- comet
- accuracy
pipeline_tag: text-generation
---

### Model Description

The Mistral-Nemo-Instruct-Uz model has been continually pre-trained and instruction-tuned on a mix of publicly available and self-constructed Uzbek and English data, preserving the base model's original knowledge while enhancing its Uzbek capabilities. It is designed to support a variety of natural language processing tasks in Uzbek, such as machine translation, summarization, and dialogue systems, with robust performance across these applications. For more details on the model's construction and performance metrics, see [this post](#).

- **Developed by:**
  - [Eldor Fozilov](https://www.linkedin.com/in/eldor-fozilov/)
  - [Azimjon Urinov](https://azimjonn.github.io/)
  - [Khurshid Juraev](https://kjuraev.com/)

## Usage

The model can be used with two different frameworks:

- [`mistral_inference`](https://github.com/mistralai/mistral-inference): see [Mistral Inference](https://huggingface.co/behbudiy/Mistral-Nemo-Instruct-Uz#mistral-inference)
- [`transformers`](https://github.com/huggingface/transformers): see [Transformers](#transformers)

### Mistral Inference

#### Install

It is recommended to use `behbudiy/Mistral-Nemo-Instruct-Uz` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF `transformers` code snippets, keep scrolling.

```
pip install mistral_inference
```

#### Download

```py
from pathlib import Path

from huggingface_hub import snapshot_download

# Download only the files mistral_inference needs into ~/mistral_models/Nemo-Instruct-Uz
mistral_models_path = Path.home().joinpath('mistral_models', 'Nemo-Instruct-Uz')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(
    repo_id="behbudiy/Mistral-Nemo-Instruct-Uz",
    allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"],
    local_dir=mistral_models_path,
)
```

#### Chat

After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using

```
mistral-chat $HOME/mistral_models/Nemo-Instruct-Uz --instruct --max_tokens 256 --temperature 0.35
```

*E.g.* Try out something like:

```
O'zbek tilida kichik bir she'r yozib bera olasanmi?
```

("Could you write me a short poem in Uzbek?")

#### Instruct following

```py
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

# `mistral_models_path` is the directory created in the "Download" step above.
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
model = Transformer.from_folder(mistral_models_path)

# "Could you write me a short poem in Uzbek?"
prompt = "O'zbek tilida kichik bir she'r yozib bera olasanmi?"

completion_request = ChatCompletionRequest(messages=[UserMessage(content=prompt)])

tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate(
    [tokens],
    model,
    max_tokens=64,
    temperature=0.35,
    eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id,
)
result = tokenizer.decode(out_tokens[0])

print(result)
```
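#### Translation prompt (illustrative)

Since machine translation is one of the tasks this model targets, the following minimal sketch shows a translation-style prompt. It reuses the `model` and `tokenizer` objects from the snippet above; the prompt wording and generation settings are our own illustrative assumptions, not the authors' recommendations.

```py
# Illustrative sketch: assumes `model` and `tokenizer` were created as in the
# "Instruct following" snippet above. Prompt wording and settings are our own.
from mistral_inference.generate import generate
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

# "Translate the following sentence into English: Tashkent is the capital of Uzbekistan."
translation_prompt = (
    "Quyidagi gapni ingliz tiliga tarjima qiling: "
    "Toshkent O'zbekistonning poytaxti."
)

request = ChatCompletionRequest(messages=[UserMessage(content=translation_prompt)])
tokens = tokenizer.encode_chat_completion(request).tokens

out_tokens, _ = generate(
    [tokens],
    model,
    max_tokens=64,
    temperature=0.35,
    eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id,
)
print(tokenizer.decode(out_tokens[0]))
```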
### Transformers

If you want to use Hugging Face `transformers` to generate text, you can do something like this. (For finer-grained control, see the lower-level sketch at the end of this card.)

```py
from transformers import pipeline

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
chatbot = pipeline("text-generation", model="behbudiy/Mistral-Nemo-Instruct-Uz", max_new_tokens=128)
chatbot(messages)
```

## More

For more details and examples, refer to the base model below:
https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407
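If you need finer-grained control than the `pipeline` wrapper above (for example, over dtype or device placement), the following minimal sketch uses the generic `AutoTokenizer` / `AutoModelForCausalLM` APIs with the model's chat template. The generation parameters are our own illustrative assumptions, not the authors' recommendations.

```py
# Illustrative sketch: generic transformers APIs with manual chat templating.
# Generation parameters are assumptions, not official recommendations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "behbudiy/Mistral-Nemo-Instruct-Uz"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory vs. fp32; needs a recent GPU
    device_map="auto",           # place weights on available devices automatically
)

# "Could you write me a short poem in Uzbek?"
messages = [{"role": "user", "content": "O'zbek tilida kichik bir she'r yozib bera olasanmi?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.35)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```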