---
license: other
datasets:
  - mlabonne/orpo-dpo-mix-40k
  - Open-Orca/SlimOrca-Dedup
  - jondurbin/airoboros-3.2
  - microsoft/orca-math-word-problems-200k
  - m-a-p/Code-Feedback
  - MaziyarPanahi/WizardLM_evol_instruct_V2_196k
base_model: Locutusque/llama-3-neural-chat-v1-8b
library_name: transformers
tags:
  - 4-bit
  - AWQ
  - text-generation
  - autotrain_compatible
  - endpoints_compatible
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---

# Locutusque/llama-3-neural-chat-v1-8b AWQ


## Model Summary

I fine-tuned Llama 3 8B using an approach similar to Intel's neural-chat models, with slightly modified data sources to make it stronger in coding, math, and writing. Training used both SFT and DPO.

The model performs particularly well at writing and coding.
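The DPO stage mentioned above optimizes a preference objective over chosen/rejected response pairs. A minimal, library-free sketch of the per-pair DPO loss (illustrative only; the actual training code is not shown in this card, and `beta` here is an assumed hyperparameter):

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair.

    Each argument is the summed log-probability of a full response under the
    policy being trained or the frozen reference model. beta controls how far
    the policy may drift from the reference.
    """
    # Implicit reward margin: how much more the policy prefers the chosen
    # response than the reference does, relative to the rejected response.
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    # -log(sigmoid(beta * margin)): small when the margin is large and positive.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# The loss shrinks as the policy favors the chosen response more strongly
# than the reference model does.
preferred = dpo_loss(-10.0, -20.0, -15.0, -15.0)  # large positive margin
neutral = dpo_loss(-15.0, -15.0, -15.0, -15.0)    # zero margin
```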

## Training Data

- Open-Orca/SlimOrca-Dedup
- jondurbin/airoboros-3.2
- microsoft/orca-math-word-problems-200k
- m-a-p/Code-Feedback
- MaziyarPanahi/WizardLM_evol_instruct_V2_196k
- mlabonne/orpo-dpo-mix-40k
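As the `4-bit` and `AWQ` tags indicate, this repository stores weights quantized to 4 bits with per-group scales. A library-free sketch of plain group-wise asymmetric 4-bit quantization, for intuition only (real AWQ additionally rescales salient channels using activation statistics, which this sketch omits):

```python
def quantize_group(weights, n_bits=4):
    """Asymmetric quantization of one weight group to integers in [0, 2**n_bits - 1]."""
    qmax = (1 << n_bits) - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / qmax if hi > lo else 1.0
    # Each float is mapped to the nearest integer level; lo acts as the zero point.
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize_group(q, scale, zero):
    """Recover approximate float weights from integer codes, scale, and zero point."""
    return [zero + scale * v for v in q]

# Toy weight group (values are made up for illustration).
weights = [0.12, -0.53, 0.88, 0.05, -0.97, 0.44, -0.21, 0.67]
q, scale, zero = quantize_group(weights)
recovered = dequantize_group(q, scale, zero)
```

Each 4-bit code plus a per-group scale and zero point is all that needs to be stored, which is where the roughly 4x memory saving over fp16 comes from.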