---
license: apache-2.0
library_name: transformers
tags: []
---

# Nox

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64241c3d774cc340797429fc/u60xmpVM1K0AtPXZtw8b4.jpeg)

The Nox project is a set of tools that makes it easy to apply various fine-tuning techniques to Solar models.
We built the Korean (ko) dataset from grammatically accurate data (it is not perfect, but we did our best),
then trained the nox-solar model with fine-tuning techniques (SFT and DPO). Our model, nox-solar, ranked first on the [Open Ko-LLM Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).

We are currently planning to make all code and datasets public, so that users can freely conduct research and development with Nox.



## Model Details

* **Model Developers**: davidkim (Changyeon Kim)
* **Repository**: https://github.com/davidkim205/nox (will be updated soon)
* **Base Model**: Edentns/DataVortexS-10.7B-dpo-v1.11
* **SFT Dataset**: komt-124k (will be updated soon)
* **DPO Dataset**: comparison_v2_289k (will be updated soon)
* **Evaluation**: [kollm_evaluation](https://github.com/davidkim205/kollm_evaluation)
* **Evaluation Datasets**: [open-ko-llm-leaderboard datasets](https://huggingface.co/collections/davidkim205/open-ko-llm-leaderboard-datasets-65eea9e87fc3ae80787ee15a)
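
## Usage

Since the card's frontmatter lists `transformers`, the model can presumably be loaded with the standard `AutoModelForCausalLM` API. The sketch below is an assumption, not the official recipe: the repo id `davidkim205/nox-solar-10.7b-v4` is taken from the leaderboard table below, and the `build_prompt` template is a hypothetical placeholder until the repository publishes the official prompt format.

```python
MODEL_ID = "davidkim205/nox-solar-10.7b-v4"  # repo id from the leaderboard table below

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in a simple instruction-following template.

    NOTE: this template is a hypothetical placeholder; consult the nox
    repository for the official prompt format once it is released.
    """
    return f"### User:\n{instruction}\n\n### Assistant:\n"

def generate(instruction: str, max_new_tokens: int = 256) -> str:
    # transformers is imported lazily so build_prompt stays usable
    # without the library (or the 10.7B weights) installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens so only the newly generated text is returned.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Explain the nox project in one sentence."))
```

Running `generate` downloads roughly 10.7B parameters, so a GPU with sufficient memory (or an offloading setup via `device_map="auto"`) is assumed.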

## Evaluation
### [The Open Ko-LLM Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)
| Model                          | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| ------------------------------ | ------- | ------ | ------------ | ------- | ------------- | --------------- |
| davidkim205/nox-solar-10.7b-v4 | 67.77   | 73.55  | 72.07        | 57.93   | 79.32         | 55.96           |



### [kollm_evaluation](https://github.com/davidkim205/kollm_evaluation)
(will be updated soon)