# LHK_DPO_v1
- Original model: [HanNayeoniee/LHK_DPO_v1](https://huggingface.co/HanNayeoniee/LHK_DPO_v1)
LHK_DPO_v1 is trained via Direct Preference Optimization (DPO) from [TomGrc/FusionNet_7Bx2_MoE_14B](https://huggingface.co/TomGrc/FusionNet_7Bx2_MoE_14B).
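
DPO fine-tunes a policy model directly on preference pairs (a chosen and a rejected response) against a frozen reference model, without training a separate reward model. A minimal sketch of the per-pair DPO loss is shown below; this is illustrative only and not the exact training code used for LHK_DPO_v1, and all function and parameter names here are hypothetical:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair.

    Each argument is the summed log-probability that the policy
    (or the frozen reference) model assigns to the chosen or
    rejected response. ``beta`` scales how strongly the policy is
    pushed away from the reference.
    """
    # Implicit rewards: how much more likely each response is under
    # the policy than under the reference model.
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # -log sigmoid(margin): shrinks as the policy favors the chosen
    # response more strongly than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Zero margin gives the neutral loss -log(0.5) = log 2 ~ 0.693.
print(dpo_loss(-11.0, -11.0, -11.0, -11.0))
```

In practice this loss is computed over batches of tokenized preference pairs (for example with a library such as TRL's `DPOTrainer`), but the scalar form above captures the objective.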
## Details