Update README.md
README.md (changed)
@@ -37,7 +37,7 @@ For Ollama, it is required to be a GGUF file. Once you have this it is pretty strai
 
 Quick Start:
 - You must already have Ollama running on your machine
-- Download the unsloth.
+- Download the unsloth.Q2_K.gguf model from Files
 - In the same directory create a file called "Modelfile"
 - Inside the "Modelfile" type
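The Quick Start steps above can be sketched as shell commands. This is a minimal sketch, assuming the GGUF file has already been downloaded into the working directory; the model name `gemma2-unsloth` is an arbitrary example, not from the README:

```shell
# Step 2-3 of the Quick Start: create a file called "Modelfile" next to the
# downloaded GGUF. This is the minimal FROM-only version; the full
# TEMPLATE/SYSTEM version appears further down in the README.
cat > Modelfile <<'EOF'
FROM ./unsloth.Q2_K.gguf
EOF

# With Ollama already running, register the model and chat with it.
# (Commented out here because they require a live Ollama installation.)
# ollama create gemma2-unsloth -f Modelfile
# ollama run gemma2-unsloth

cat Modelfile
```

`ollama create` reads the Modelfile and builds a local model from the GGUF weights; `ollama run` then starts an interactive session with it.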
@@ -140,4 +140,40 @@ dataset = dataset.map(formatting_prompts_func, batched = True,)
 
 ```
 
+# SIDENOTE:
+
+Because the fine-tuned data was formatted that way, you could technically try this and make it work..(?) Still testing, but please feel free to try as well (and do let me know):
+
+```
+FROM ./GEMMA2_unsloth.Q2_K.gguf
+
+PARAMETER stop ["<|STOP|>"]
+
+TEMPLATE """<|STOP|><|BEGIN_QUERY|>
+{{.Prompt}}
+<|END_QUERY|>
+<|BEGIN_ANALYSIS|>
+
+<|END_ANALYSIS|>
+<|BEGIN_RESPONSE|>
+
+<|END_RESPONSE|>
+<|BEGIN_CLASSIFICATION|>
+
+<|END_CLASSIFICATION|>
+<|BEGIN_SENTIMENT|>
+
+<|END_SENTIMENT|>
+<|STOP|>"""
+
+SYSTEM """You are an AI assistant trained to provide comprehensive and engaging responses. Follow this structure in your replies:
+1. Begin with a brief analysis of the query.
+2. Provide a detailed response, using an enthusiastic and friendly tone. Include specific examples where appropriate.
+3. Add relevant classification keywords.
+4. Conclude with a sentiment analysis of your response.
+Use the BEGIN_ and END_ tokens to clearly delineate each section of your response."""
+
+```
+
 Will be updating this periodically, as I have limited Colab resources.
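Since the SYSTEM prompt above asks the model to wrap each part of its reply in `<|BEGIN_...|>`/`<|END_...|>` tokens, a reply can be split back into its sections programmatically. A minimal sketch; the helper name `extract_sections` and the sample reply are illustrative, not from the README:

```python
import re

def extract_sections(output: str) -> dict:
    """Pull each <|BEGIN_X|>...<|END_X|> section out of a model reply."""
    return {
        name.lower(): body.strip()
        for name, body in re.findall(
            r"<\|BEGIN_(\w+)\|>(.*?)<\|END_\1\|>", output, re.DOTALL
        )
    }

# Hypothetical reply following the template's section structure:
reply = (
    "<|BEGIN_QUERY|>What is 2+2?<|END_QUERY|>"
    "<|BEGIN_ANALYSIS|>Simple arithmetic.<|END_ANALYSIS|>"
    "<|BEGIN_RESPONSE|>4<|END_RESPONSE|>"
    "<|BEGIN_CLASSIFICATION|>math<|END_CLASSIFICATION|>"
    "<|BEGIN_SENTIMENT|>neutral<|END_SENTIMENT|>"
)

print(extract_sections(reply)["response"])  # → 4
```

The backreference `\1` makes each `BEGIN` token match only its own `END` token, so sections cannot be mixed up even if the model emits them in a different order.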
|