---
language:
- ko
- en
license: other
library_name: transformers
tags:
- korean
- gemma
- pytorch
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
pipeline_tag: text-generation
base_model: openchat/openchat-3.5-0106-gemma
---

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6332f1a52b866de639ee0279/XXemQnrO181w0-v59NADb.jpeg)

# Gemma Ko 7B Instruct v0.62

- Eval Loss: `1.2946`
- Train Loss: `1.1717`
- lr: `2e-5`
- optimizer: adamw
- lr_scheduler_type: cosine

## Model Details

### Model Description

Gemma Ko 7B Instruct v0.62 is designed to generate human-like text in Korean. It can be used for a variety of natural language processing tasks, such as language translation, text summarization, question answering, and conversation generation. The model is particularly well suited to applications that require high-quality, coherent, and contextually relevant Korean text generation.

- **Developed by:** `lemon-mint`
- **Model type:** Gemma
- **Language(s) (NLP):** Korean, English
- **License:** [gemma-terms-of-use](https://ai.google.dev/gemma/terms)
- **Finetuned from model:** [openchat/openchat-3.5-0106-gemma](https://huggingface.co/openchat/openchat-3.5-0106-gemma)

# Limitations and Ethical Considerations

Because Gemma Ko 7B was trained on extensive web data, biases present in the training data may be reflected in its outputs. The model may also generate sentences containing errors or incorrect information. Rather than trusting the model's output blindly, treat it as a reference and verify it with care.
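
## Usage

A minimal usage sketch with the `transformers` library. The repository id below is an assumption (it is not stated on this card) — replace it with the actual Hugging Face path where the model is hosted; the sampling parameters are illustrative defaults, not tuned values.

```python
# NOTE: the repo id below is an assumption; replace it with the actual
# Hugging Face path where this model is hosted.
MODEL_ID = "lemon-mint/gemma-ko-7b-instruct-v0.62"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Generate a Korean (or English) completion for a single user prompt."""
    # Imports deferred: loading a 7B checkpoint requires torch + transformers.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # fall back to float16/float32 if bf16 is unsupported
        device_map="auto",
    )
    # apply_chat_template uses the chat format stored with the tokenizer,
    # so the prompt template does not need to be hard-coded here.
    input_ids = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(
        input_ids,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.7,
    )
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("대한민국의 수도는 어디인가요?"))  # "What is the capital of South Korea?"
```

Using `apply_chat_template` rather than a hand-written prompt string keeps the example correct regardless of which chat format the finetune's tokenizer ships with.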