Update README.md
README.md CHANGED
@@ -24,7 +24,12 @@ datasets:
 
 This is an experimental LASER version of NeuralHermes using [laserRMT](https://github.com/cognitivecomputations/laserRMT).
 
-
+| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
+|------------------------------------------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
+|[NeuralHermes-2.5-Mistral-7B-laser](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B-laser)| 43.54| 73.44| 55.26| 42.24| 53.62|
+|[NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) | 43.67| 73.24| 55.37| 41.76| 53.51|
+
+Fernando Fernandes Neto and Eric Hartford. "Optimizing Large Language Models Using Layer-Selective Rank Reduction and Random Matrix Theory." 2024.
 
 NeuralHermes is a [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) model that has been further fine-tuned with Direct Preference Optimization (DPO) using the [mlabonne/chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) dataset. It surpasses the original model on several benchmarks (see results).
 
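
For context on what "LASER" refers to above: LASER-style methods replace selected weight matrices with low-rank approximations obtained via truncated SVD, and laserRMT uses Random Matrix Theory to decide which layers and ranks to reduce. The snippet below is only a minimal sketch of the rank-reduction step on a single matrix, not the laserRMT implementation; the model attribute path, layer index, and kept rank in the commented usage lines are assumptions for illustration.

```python
# Minimal illustrative sketch (not the laserRMT implementation): a LASER-style
# update replaces a weight matrix with a low-rank approximation from truncated
# SVD. laserRMT selects which layers/ranks to reduce using Random Matrix Theory;
# here the kept rank is a hard-coded, hypothetical choice.
import torch

def reduce_rank(weight: torch.Tensor, keep: int) -> torch.Tensor:
    """Return a rank-`keep` approximation of `weight` via truncated SVD."""
    U, S, Vh = torch.linalg.svd(weight.float(), full_matrices=False)
    approx = U[:, :keep] @ torch.diag(S[:keep]) @ Vh[:keep, :]
    return approx.to(weight.dtype)

if __name__ == "__main__":
    # Toy demonstration on a random matrix shaped like a small projection layer.
    W = torch.randn(4096, 1024)
    W_low = reduce_rank(W, keep=256)
    print(torch.linalg.matrix_rank(W_low))  # <= 256

    # Applying it to one layer of a loaded model would look roughly like this
    # (the attribute path is an assumption about the Mistral architecture in
    # transformers, shown for illustration only):
    # model = AutoModelForCausalLM.from_pretrained("mlabonne/NeuralHermes-2.5-Mistral-7B")
    # layer = model.model.layers[10].mlp.down_proj
    # with torch.no_grad():
    #     layer.weight.copy_(reduce_rank(layer.weight, keep=256))
```

See the linked laserRMT repository and the citation above for the actual layer-selection criterion used for this model.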