Steelskull committed 8d61eda (1 parent: 0bbb5b9)

Update README.md

Files changed (1): README.md (+4 −4)
README.md CHANGED
@@ -108,14 +108,14 @@ library_name: transformers
<p>L3-Aethora-15B v2 is an advanced language model built upon the Llama 3 architecture. It employs state-of-the-art training techniques and a curated dataset to deliver enhanced performance across a wide range of tasks.</p>
<h4>Quants:</h4>
<ul>
  <li>GGUF:
    <ul>
      <li>@Mradermacher: <a href="https://huggingface.co/mradermacher/L3-Aethora-15B-V2-GGUF" target="_blank">L3-Aethora-15B-V2-GGUF</a></li>
      <li>@Bullerwins: <a href="https://huggingface.co/bullerwins/L3-Aethora-15B-V2-GGUF" target="_blank">L3-Aethora-15B-V2-GGUF</a></li>
    </ul>
  </li>
  <li>IMatrix-GGUF:
    <ul>
      <li>@Mradermacher: <a href="https://huggingface.co/mradermacher/L3-Aethora-15B-V2-i1-GGUF" target="_blank">L3-Aethora-15B-V2-i1-GGUF</a></li>
    </ul>
  </li>
  <li>GGUF-F16 (f16.q6 and f16.q5 are both smaller than q8 yet perform on par with pure f16):
    <ul>
      <li>@MZeroWw: <a href="https://huggingface.co/ZeroWw/L3-Aethora-15B-V2-GGUF" target="_blank">L3-Aethora-15B-V2-GGUF-f16</a></li>
    </ul>
  </li>
  <li>EXL2:
    <ul>
      <li>@Bullerwins: <a href="https://huggingface.co/collections/bullerwins/l3-aethora-15b-v2-exl2-667d1f4c0204c59594ca79ae" target="_blank">L3-Aethora-15B-V2-EXL2</a></li>
    </ul>
  </li>
  </ul>
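<p>Each quant above is a single weight file inside its Hugging Face repo. As a minimal stdlib-only sketch, the direct-download link for any file follows the Hub's <code>resolve</code> URL pattern; the quant filename below is a hypothetical example, so check the repo's file list for the actual names (Q4_K_M, Q8_0, etc.):</p>

```python
# Sketch: building the direct-download ("resolve") URL for a file in a
# Hugging Face model repo. The filename is a hypothetical example --
# check the repo's file list for the real quant names (Q4_K_M, Q8_0, ...).
REPO = "mradermacher/L3-Aethora-15B-V2-GGUF"

def gguf_url(repo: str, filename: str) -> str:
    """Return the resolve URL for `filename` on the repo's main branch."""
    return f"https://huggingface.co/{repo}/resolve/main/{filename}"

print(gguf_url(REPO, "L3-Aethora-15B-V2.Q4_K_M.gguf"))
```

<p>The same path can also be fetched with <code>huggingface_hub.hf_hub_download(repo_id, filename)</code>, which adds caching and resumable downloads.</p>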
  <h2>Training Process:</h2>