Update README.md
README.md CHANGED

@@ -7,16 +7,5 @@ language:
 - en
 ---
 
-#
-
-
-LHK_DPO_v1 is trained via Direct Preference Optimization(DPO) from [TomGrc/FusionNet_7Bx2_MoE_14B](https://huggingface.co/TomGrc/FusionNet_7Bx2_MoE_14B).
-
-## Details
-coming sooon.
-
-## Evaluation Results
-coming soon.
-
-## Contamination Results
-coming soon.
+# Description
+This repo contains GGUF format model files for [HanNayeoniee/LHK_DPO_v1](https://huggingface.co/HanNayeoniee/LHK_DPO_v1)
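The removed README noted that LHK_DPO_v1 was trained via Direct Preference Optimization (DPO). As a rough illustration of that objective only (not the actual training code used for this model), the pairwise DPO loss can be sketched as follows; the function name and the scalar per-pair log-probabilities are hypothetical:

```python
import math

def dpo_loss(pi_logp_chosen, pi_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Pairwise DPO loss for one preference pair.

    Inputs are sequence log-probabilities of the chosen/rejected
    responses under the policy (pi) and the frozen reference model.
    """
    # Implicit rewards: beta-scaled log-ratios vs. the reference model.
    chosen_reward = beta * (pi_logp_chosen - ref_logp_chosen)
    rejected_reward = beta * (pi_logp_rejected - ref_logp_rejected)
    margin = chosen_reward - rejected_reward
    # -log(sigmoid(margin)), written as log(1 + exp(-margin)) for clarity.
    return math.log(1.0 + math.exp(-margin))

# When policy and reference agree exactly, the loss sits at log 2.
print(dpo_loss(-1.0, -1.0, -1.0, -1.0))  # ≈ 0.6931
```

The loss drops below log 2 once the policy favors the chosen response more strongly than the reference does; `beta=0.1` here is just a commonly used default for the KL-tradeoff coefficient, not a value known from this model's training.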