---
license: gemma
base_model: google/gemma-2-2b
tags:
- generated_from_trainer
datasets:
- cognitivecomputations/Dolphin-2.9
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- mlabonne/FineTome-100k
- arcee/agent_data
- PawanKrd/math-gpt-4o-200k
- cognitivecomputations/SystemChat-2.0
---

# Dolphin 2.9.4 Gemma2 2b 🐬

Curated and trained by Eric Hartford and Cognitive Computations.

This one is special because I used [GrokAdamW](https://github.com/cognitivecomputations/grokadamw) and [Liger Kernel](https://github.com/linkedin/Liger-Kernel) during training.

GrokAdamW is intended to enable fast grokking, to increase generalization. (I am not certain this occurred, because this checkpoint is only 4 epochs, and it probably takes more epochs to achieve grokking.)
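For reference, here is a minimal sketch of how these two pieces might be wired into a training run. The `GrokAdamW` constructor arguments, hyperparameters, and the Gemma2 Liger patch function are assumptions based on each project's README, not the exact training setup used here; check the linked repos for the current APIs.

```python
# Sketch only: assumes `pip install grokadamw liger-kernel transformers`.
import torch
from transformers import AutoModelForCausalLM

# Assumed API: Liger Kernel ships model-specific monkey patches.
from liger_kernel.transformers import apply_liger_kernel_to_gemma2

# Assumed API: GrokAdamW is a drop-in torch.optim-style optimizer.
from grokadamw import GrokAdamW

apply_liger_kernel_to_gemma2()  # patch Gemma2 modules with fused Liger kernels

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b", torch_dtype=torch.bfloat16
)

# Hypothetical hyperparameters, for illustration only.
optimizer = GrokAdamW(model.parameters(), lr=1e-5, weight_decay=0.1)

# ... then a standard loop: forward pass, loss.backward(), optimizer.step() ...
```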

[![Discord](https://img.shields.io/discord/1156064224225808488?logo=Discord&logoColor=%23ffffff&label=Discord&link=https%3A%2F%2Fdiscord.gg%2FtCMkMDDHwm)](https://discord.gg/h3K4XGj2RH)
Discord: https://discord.gg/h3K4XGj2RH

<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />

Our appreciation for the sponsors of Dolphin 2.9.4:
- [Crusoe Cloud](https://crusoe.ai/) - provided an excellent on-demand 8x L40S node

This model is based on Google Gemma2 2b, and is governed by the Gemma license.

The base model has 8k context, and our finetuning used 8192 sequence length.

`ollama run CognitiveComputations/dolphin-gemma2:2b`

https://ollama.com/CognitiveComputations/dolphin-gemma2
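Beyond the CLI, a locally running Ollama server can also be queried over its REST API. A minimal sketch, assuming Ollama's default port 11434 and the model tag above:

```python
# Query the local Ollama server (default port 11434) with the stdlib only.
import json
import urllib.request

payload = {
    "model": "CognitiveComputations/dolphin-gemma2:2b",
    "prompt": "Why is the sky blue?",
    "stream": False,  # return one JSON object instead of a token stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```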

Dolphin 2.9.4 uses the ChatML prompt template format.

example:

```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant

```
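Since the tokenizer ships with a chat template, the same prompt can be built with `transformers` rather than by hand. A sketch, assuming the Hugging Face repo id is `cognitivecomputations/dolphin-2.9.4-gemma2-2b`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id; substitute whichever Dolphin 2.9.4 Gemma2 2b repo you use.
model_id = "cognitivecomputations/dolphin-2.9.4-gemma2-2b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Write a haiku about the ocean."},
]

# Renders the ChatML template shown above and appends the assistant header.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```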

Dolphin 2.9.4 has a variety of instruction-following, conversational, and coding skills. It also has agentic abilities and supports function calling.
It is especially trained to obey the system prompt and to follow instructions in many languages.

Dolphin is uncensored. We have filtered the dataset to remove alignment and bias, which makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service, because it will be highly compliant with any request, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models. You are responsible for any content you create using this model. Enjoy responsibly.
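What such an alignment layer looks like is up to the deployer. Purely as an illustrative sketch, one simple form is a policy check in front of the model; the blocklist and the `generate` callable below are hypothetical placeholders, not part of this model:

```python
# Hypothetical pre-filter wrapper: screen requests before they reach the model.
from typing import Callable

BLOCKED_TOPICS = ("make a weapon", "write malware")  # placeholder policy only

def guarded_generate(generate: Callable[[str], str], user_prompt: str) -> str:
    """Call `generate` (any prompt -> completion function) behind a policy check."""
    lowered = user_prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "This request is declined by the service policy."
    return generate(user_prompt)
```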

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/Uziw8fOBZg2x9gyKZwlck.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/KSsiwmFWC7SePk271u_aP.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ftgTEvl9-XwBceArROreD.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/UwtfaWmAiJZV0qD1mZTrU.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/255wrP32YuFzw6CfxAGJx.png)

<details><summary>Evals</summary>


</details>

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)


## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed