---
license: apache-2.0
---

## Overview

[TinyLlama](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) is a project to pretrain a 1.1B-parameter Llama model on 3 trillion tokens. This chat model is finetuned on a diverse range of synthetic dialogues generated by ChatGPT.

## Variants

| No | Variant | Cortex CLI command |
| --- | --- | --- |
| 1 | [1b-gguf](https://huggingface.co/cortexhub/tinyllama/tree/1b-gguf) | `cortex run tinyllama:1b-gguf` |

## Use it with Jan (UI)

1. Install **Jan** using the [Quickstart](https://jan.ai/docs/quickstart).
2. In the Jan model Hub, enter:
   ```
   cortexhub/tinyllama
   ```

## Use it with Cortex (CLI)

1. Install **Cortex** using the [Quickstart](https://cortex.jan.ai/docs/quickstart).
2. Run the model with:
   ```
   cortex run tinyllama
   ```

## Credits

- **Author:** TinyLlama team
- **Converter:** [Homebrew](https://www.homebrew.ltd/)
- **Original License:** [Apache 2.0](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md)
- **Paper:** [TinyLlama: An Open-Source Small Language Model](https://arxiv.org/abs/2401.02385)