
repo_clone_081924

name: suzume-llama-3-8B-multilingual-orpo-borda-top25
license: cc-by-nc-4.0
tags:
- lightblue
- multilingual
- text-generation
- text2text-generation
- natural language
- translate
- orpo
- Meta
- Llama
- RichardErkhov
type:
- 6GB
- 8GB
- llm
- chat
- multilingual
- suzume
- llama-3
config: 
- ctx=8192
- 5bit
- temp=0
resolutions: 
datasets:
- lightblue/mitsu_full_borda
- lightblue/tagengo-gpt4
- megagonlabs/instruction_ja
- openchat/openchat_sharegpt4_dataset
language: 
- zh
- fr
- de
- ja
- ru
- en
size:
- 4920734016
- 5732987200
use: 
shortcomings: 
sources: 
- https://arxiv.org/abs/2405.12612
- https://arxiv.org/abs/2405.18952
funded_by: 
train_hardware: 4 x A100 (80GB)
pipeline_tag: text-generation 
examples: "Bonjour!"
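
The config and examples fields above imply a simple chat-completion call. Below is a minimal sketch using llama-cpp-python, assuming a locally downloaded GGUF file (the filename is a placeholder); the context length and temperature mirror the config field, and the prompt is the card's example.

```python
# Sketch: chat inference with llama-cpp-python against a local GGUF file.
# The model path is a placeholder; ctx and temperature follow the config field above.
from llama_cpp import Llama

llm = Llama(
    model_path="suzume-llama-3-8B-multilingual-orpo-borda-top25.Q5_K_M.gguf",  # hypothetical filename
    n_ctx=8192,  # config: ctx=8192
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Bonjour!"}],  # the card's example prompt
    temperature=0.0,  # config: temp=0 (deterministic sampling)
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```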
GGUF (darkshapes/suzume-llama-3-8B-multilingual-orpo-borda-top25-gguf): 8.03B params, llama architecture, 4-bit and 5-bit quantizations
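
To fetch one of the quantized files, a sketch with huggingface_hub follows; the repo id is taken from this page, but the exact GGUF filenames are assumptions, so list the repo files first. The two byte counts in the size field above presumably correspond to the 4-bit and 5-bit files.

```python
# Sketch: download one of the GGUF quantizations from the Hub.
# Repo id comes from this card; the filename is an assumption - check the repo's file list.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "darkshapes/suzume-llama-3-8B-multilingual-orpo-borda-top25-gguf"

# Inspect the actual filenames before downloading.
print(list_repo_files(repo_id))

# Hypothetical 5-bit filename; substitute the real one from the listing above.
path = hf_hub_download(
    repo_id=repo_id,
    filename="suzume-llama-3-8B-multilingual-orpo-borda-top25.Q5_K_M.gguf",
)
print(path)
```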
