Model description

This is a Yi-34B-200K XLCTX model that was first trained with DPO on adamo1139/rawrr_v2-2_stage1 and then fine-tuned (SFT) on adamo1139/AEZAKMI_v3-7. It works, but it still has a noticeably strong assistant feel. I am uploading the full model because I want to compare it on the Open LLM Leaderboard against the version that received ORPO training. That said, I would suggest using the ORPO-trained version (trained on adamo1139/toxic-dpo-natural-v5) instead, as it is simply more pleasant to talk to in my opinion.
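Models in this series typically use a ChatML-style prompt format. Assuming that holds here (an assumption; verify against the tokenizer's chat template before relying on it), a prompt for inference could be built like this:

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Build a ChatML-style prompt string (assumed format for this model;
    check tokenizer.apply_chat_template to confirm)."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "Summarize the plot of Hamlet in one sentence.",
)
```

The resulting string can then be tokenized and passed to the model's `generate` call as usual.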

Model details

Format: Safetensors · Model size: 34.4B params · Tensor type: FP16

Datasets used to train adamo1139/Yi-34B-200K-XLCTX-AEZAKMI-RAW-2904

adamo1139/rawrr_v2-2_stage1
adamo1139/AEZAKMI_v3-7
