---
language:
- en
license: cc-by-sa-4.0
library_name: transformers
base_model:
- elinas/Llama-3-15B-Instruct-zeroed
datasets:
- TheSkullery/Aether-Lite-v1.8.1
model-index:
- name: L3-Aethora-15B-V2
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 72.08
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeusLabs/L3-Aethora-15B-V2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 28.97
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeusLabs/L3-Aethora-15B-V2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 7.33
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeusLabs/L3-Aethora-15B-V2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 5.03
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeusLabs/L3-Aethora-15B-V2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 6.25
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeusLabs/L3-Aethora-15B-V2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 27.78
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeusLabs/L3-Aethora-15B-V2
      name: Open LLM Leaderboard
---
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>L3-Aethora-15B v2 Data Card</title>
  <link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet">
  <style>
    body, html {
      height: 100%;
      margin: 0;
      padding: 0;
      font-family: 'Quicksand', sans-serif;
      background: linear-gradient(135deg, #0a1128 0%, #1c2541 100%);
      color: #e0e1dd;
      font-size: 16px;
    }
    .container {
      width: 100%;
      height: 100%;
      padding: 20px;
      margin: 0;
      background-color: rgba(255, 255, 255, 0.05);
      border-radius: 12px;
      box-shadow: 0 4px 10px rgba(0, 0, 0, 0.3);
      backdrop-filter: blur(10px);
      border: 1px solid rgba(255, 255, 255, 0.1);
    }
    .header h1 {
      font-size: 28px;
      color: #4cc9f0;
      margin: 0 0 20px 0;
      text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.3);
    }
    .update-section h2 {
      font-size: 24px;
      color: #7209b7;
    }
    .update-section p {
      font-size: 16px;
      line-height: 1.6;
      color: #e0e1dd;
    }
    .info img {
      width: 100%;
      border-radius: 10px;
      margin-bottom: 15px;
    }
    a {
      color: #4cc9f0;
      text-decoration: none;
    }
    a:hover {
      color: #f72585;
    }
    .button {
      display: inline-block;
      background-color: #3a0ca3;
      color: #e0e1dd;
      padding: 10px 20px;  
      border-radius: 5px;
      cursor: pointer;
      text-decoration: none;
    }
    .button:hover {
      background-color: #7209b7;
    }
    pre {
      background-color: #1c2541;
      padding: 10px;
      border-radius: 5px;
      overflow-x: auto;
    }
    code {
      font-family: 'Courier New', monospace;
      color: #e0e1dd;
    }
  </style>
</head>
<body>
  <div class="container">
    <div class="header">
      <h1>L3-Aethora-15B v2</h1>
    </div>
    <div class="info">
      <img src="https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/yJpwVd5UTnAVDoEPVVCS1.png">
      <h2>Presented by:</h2>
      <p><strong>Creators: <a href="https://huggingface.co/ZeusLabs" target="_blank">ZeusLabs</a></strong></p>
      <ul>
        <li><a href="https://huggingface.co/steelskull" target="_blank">Steelskull</a></li>
        <li><a href="https://huggingface.co/elinas" target="_blank">Elinas</a></li>
      </ul>
      <p><strong>Dataset:</strong> <a href="https://huggingface.co/datasets/TheSkullery/Aether-Lite-V1.8.1" target="_blank">Theskullery/Aether-Lite-V1.8.1</a></p>
      <p><strong>Trained:</strong> 4 x A100 for 17.5 hours on 125k samples</p>
      <p><strong>Sponsored by:</strong> Garg (@g4rg)</p>
      <h2>About L3-Aethora-15B v2:</h2>
      <pre><code> L3 = Llama3 </code></pre>
      <p>L3-Aethora-15B v2 is an advanced language model built upon the Llama 3 architecture. It employs state-of-the-art training techniques and a curated dataset to deliver enhanced performance across a wide range of tasks.</p>
      <p>(Thank you all for the interest! The model has <strong>surpassed 150k downloads</strong> across all formats!)</p>
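      <p>To try the model with transformers, here is a minimal sketch. It assumes the standard Llama 3 Instruct chat template inherited from the base model; the sampling settings are illustrative, not official recommendations:</p>
      <pre><code># Minimal sketch: load L3-Aethora-15B-V2 and generate a reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ZeusLabs/L3-Aethora-15B-V2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the model was trained in BF16
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a creative writing assistant."},
    {"role": "user", "content": "Write the opening paragraph of a mystery novel."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
</code></pre>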
      <h4>Quants:</h4>
      <p>GGUF-Mix:</p>
      <ul>
        <li>@Mradermacher: <a href="https://huggingface.co/mradermacher/L3-Aethora-15B-V2-GGUF" target="_blank">L3-Aethora-15B-V2-GGUF</a> && <a href="https://huggingface.co/mradermacher/L3-Aethora-15B-V2-i1-GGUF" target="_blank">L3-Aethora-15B-V2-Imatrix-GGUF</a></li>
        <li>@Bullerwins: <a href="https://huggingface.co/bullerwins/L3-Aethora-15B-V2-GGUF" target="_blank">L3-Aethora-15B-V2-GGUF-Only</a></li>
        <li>@Bartowski: <a href="https://huggingface.co/bartowski/L3-Aethora-15B-V2-GGUF" target="_blank">L3-Aethora-15B-V2-GGUF-&-Imatrix-&-F16</a></li>
        <li>@Duyntnet: <a href="https://huggingface.co/duyntnet/L3-Aethora-15B-V2-imatrix-GGUF" target="_blank">L3-Aethora-15B-V2-GGUF-&-Imatrix</a></li>
      </ul>
      <p>GGUF-F16 (both f16.q6 and f16.q5 are smaller than q8 and perform as well as the pure f16):</p>
      <ul>
        <li>@MZeroWw: <a href="https://huggingface.co/ZeroWw/L3-Aethora-15B-V2-GGUF" target="_blank">L3-Aethora-15B-V2-GGUF-f16</a></li>
      </ul>
      <p>EXL2:</p>
      <ul>
        <li>@Bullerwins: <a href="https://huggingface.co/collections/bullerwins/l3-aethora-15b-v2-exl2-667d1f4c0204c59594ca79ae" target="_blank">L3-Aethora-15B-V2-EXL2</a></li>
      </ul>
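      <p>As a quick way to run one of the GGUF quants above locally, here is a minimal llama-cpp-python sketch. The filename glob is an assumption; pick whichever quant in the repo fits your hardware:</p>
      <pre><code># Minimal sketch: download and run a GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/L3-Aethora-15B-V2-GGUF",
    filename="*Q4_K_M.gguf",  # glob pattern; assumed quant, choose one to fit your RAM
    n_ctx=8192,               # matches the model's trained sequence length
)
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
)
print(reply["choices"][0]["message"]["content"])
</code></pre>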
      <h2>Training Process:</h2>
      <ul>
        <li>Base Model: elinas/Llama-3-15B-Instruct-zeroed</li>
        <li>Training Duration: 17.5 hours on 4 x A100 GPUs</li>
        <li>Training Method: LoRA (Low-Rank Adaptation); see the configuration sketch after this list</li>
        <li>Epochs: 4</li>
        <li>Precision: BF16</li>
        <li>Sequence Length: 8192 tokens</li>
      </ul>
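      <p>For readers who want to reproduce a comparable setup, below is an illustrative peft LoRA configuration on the base model. This card does not publish the rank, alpha, dropout, or target modules, so those values are placeholders rather than the settings actually used:</p>
      <pre><code># Illustrative LoRA setup matching the listed hyperparameters (BF16 weights).
# Rank, alpha, dropout, and target modules are placeholders, NOT the real values.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "elinas/Llama-3-15B-Instruct-zeroed",
    torch_dtype=torch.bfloat16,
)
lora_cfg = LoraConfig(
    r=32,                    # placeholder rank
    lora_alpha=32,           # placeholder alpha
    lora_dropout=0.05,       # placeholder dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # placeholder
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the adapter weights are trainable
</code></pre>
      <p>Training would then run for 4 epochs at an 8192-token sequence length, per the list above.</p>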
      <h2>Model Capabilities:</h2>
      <p>The goal of L3-Aethora-15B v2 is expanded proficiency across a wide spectrum of tasks, with a particular focus on creative writing:</p>
      <ul>
        <li><strong>Creative Writing and Storytelling:</strong>
          <ul>
            <li>Generates engaging narratives, poetry, and creative content</li>
            <li>Adapts writing style to various genres and tones</li>
            <li>Assists in plot development and character creation</li>
          </ul>
        </li>
        <li><strong>General Intelligence:</strong>
          <ul>
            <li>Engages in detailed discussions on medical topics and scientific concepts</li>
            <li>Explains complex scientific phenomena</li>
            <li>Assists in literature review and hypothesis generation</li>
          </ul>
        </li>
        <li><strong>Instructional and Educational Content:</strong>
          <ul>
            <li>Creates comprehensive tutorials and how-to guides</li>
            <li>Explains complex topics with clarity and appropriate depth</li>
            <li>Generates educational materials for various skill levels</li>
          </ul>
        </li>
        <li><strong>Reasoning and Problem-Solving:</strong>
          <ul>
            <li>Analyzes complex scenarios and provides logical solutions</li>
            <li>Engages in step-by-step problem-solving across various domains</li>
            <li>Offers multiple perspectives on challenging issues</li>
          </ul>
        </li>
        <li><strong>Contextual Understanding and Adaptability:</strong>
          <ul>
            <li>Maintains coherent, context-aware conversations across extended interactions</li>
            <li>Adapts communication style based on the user's preferences and needs</li>
            <li>Handles nuanced queries with appropriate depth and sensitivity</li>
          </ul>
        </li>
      </ul>
      <h2>Dataset Creation Process:</h2>
      <p>The Aether-Lite-V1.8.1 dataset used for training L3-Aethora-15B v2 underwent a rigorous creation and curation process:</p>
      <ol>
        <li><strong>Data Collection:</strong> Aggregated from 12 diverse high-quality datasets, including:
          <ul>
            <li>jondurbin/airoboros-3.2</li>
            <li>jtatman/medical-sci-instruct-100k-sharegpt</li>
            <li>Doctor-Shotgun/no-robots-sharegpt</li>
            <li>QuietImpostor/Sao10K-Claude-3-Opus-Instruct-15K-ShareGPT</li>
            <li>TheSkullery/WizardLM_evol_instruct_v2_Filtered_Fuzzy_Dedup_ShareGPT</li>
            <li>TheSkullery/Gryphe-Opus-WritingPrompts-merged</li>
            <li>Alignment-Lab-AI/RPGuild-sharegpt-filtered</li>
            <li>And others, providing a rich mix of instruction, creative writing, and specialized knowledge</li>
          </ul>
        </li>
        <li><strong>Data Preprocessing:</strong>
          <ul>
            <li>Language Detection: Utilized a FastText language model to ensure English-language content</li>
            <li>Text Sanitization: Cleaned and normalized text, removing or replacing problematic characters</li>
            <li>Phrase Filtering: Removed specific unwanted phrases and content types</li>
          </ul>
        </li>
        <li><strong>Deduplication:</strong>
          <ul>
            <li>Implemented advanced fuzzy deduplication with a 95% similarity threshold</li>
            <li>Utilized text embeddings and cosine similarity calculations for efficient comparison</li>
            <li>Removed 16,250 duplicate entries, ensuring dataset uniqueness (see the sketch after this list)</li>
          </ul>
        </li>
        <li><strong>Data Balancing:</strong>
          <ul>
            <li>Carefully sampled from each source dataset to maintain diversity</li>
            <li>Implemented data shuffling to ensure random distribution of samples</li>
          </ul>
        </li>
      </ol>
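      <p>To make the preprocessing and deduplication steps concrete, here is an illustrative sketch of FastText language filtering followed by embedding-based fuzzy deduplication at a 95% cosine-similarity threshold. The embedding model is an assumption, and the actual Aether-Lite pipeline is not published here:</p>
      <pre><code># Illustrative sketch of the language-filter and fuzzy-dedup steps above.
# lid.176.bin must first be downloaded from the FastText website;
# the embedding model below is an assumption, not the one actually used.
import fasttext
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

lang_model = fasttext.load_model("lid.176.bin")     # FastText language-ID model
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def is_english(text: str) -> bool:
    labels, _ = lang_model.predict(text.replace("\n", " "))
    return labels[0] == "__label__en"

def fuzzy_dedup(samples: list[str], threshold: float = 0.95) -> list[str]:
    """Keep a sample only if its similarity to every kept one stays below threshold."""
    kept, kept_embs = [], []
    for text, emb in zip(samples, embedder.encode(samples)):
        if kept_embs and cosine_similarity([emb], kept_embs).max() >= threshold:
            continue  # near-duplicate of an earlier sample, drop it
        kept.append(text)
        kept_embs.append(emb)
    return kept

samples = [
    "The quick brown fox jumps over the lazy dog.",
    "The quick brown fox jumped over the lazy dog!",  # near-duplicate
    "Les chats aiment dormir au soleil.",             # non-English, filtered out
]
print(fuzzy_dedup([s for s in samples if is_english(s)]))
</code></pre>
      <p>This greedy pairwise pass is quadratic; at 125k samples a production pipeline would batch or index the embeddings, but the thresholding logic is the same.</p>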
      <p>The final dataset comprises 125,119 high-quality, diverse samples, striking a balance between creativity, practical knowledge, and intellectual depth.</p>
      <p>The full dataset has been released to the public and is available to all (see the "Presented by" section above). Any ideas or recommendations for expanding the dataset further are always welcome.</p>
    </div>
  </div>
</body>
</html>
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ZeusLabs__L3-Aethora-15B-V2)

|      Metric       |Value|
|-------------------|----:|
|Avg.               |24.57|
|IFEval (0-Shot)    |72.08|
|BBH (3-Shot)       |28.97|
|MATH Lvl 5 (4-Shot)| 7.33|
|GPQA (0-shot)      | 5.03|
|MuSR (0-shot)      | 6.25|
|MMLU-PRO (5-shot)  |27.78|
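
The reported average is the arithmetic mean of the six benchmark scores, as a quick check confirms:

```python
# Sanity check: the leaderboard Avg is the mean of the six scores above.
scores = [72.08, 28.97, 7.33, 5.03, 6.25, 27.78]
print(round(sum(scores) / len(scores), 2))  # 24.57
```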