---
license: apache-2.0
language:
- en
---
# Model Card for Model ID

This is Meta's Llama 2 7B quantized to 3-bit with GPTQ, using AutoGPTQ through Hugging Face Transformers.
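
The quantized checkpoint can be loaded like any other Transformers model. The repository ID below is an assumption inferred from the naming of the other versions listed in this card; adjust it if needed.

```python
# Minimal loading sketch. Requires the optimum and auto-gptq packages so
# Transformers can handle the GPTQ weights at load time.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kaitchup/Llama-2-7b-gptq-3bit"  # assumed repository ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```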

## Model Details

### Model Description

- **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
- **Model type:** Causal language model (Llama 2)
- **Language(s) (NLP):** English
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0), [Llama 2 license agreement](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

### Model Sources

The method and code used to quantize the model are explained here:
[Quantize and Fine-tune LLMs with GPTQ Using Transformers and TRL](https://kaitchup.substack.com/p/quantize-and-fine-tune-llms-with)
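
For orientation, the following is a minimal sketch of 3-bit GPTQ quantization through Transformers' `GPTQConfig`. It is not the exact recipe used for this model (see the article above); the calibration dataset and the base model ID are illustrative assumptions.

```python
# Illustrative 3-bit GPTQ quantization sketch. Requires optimum and auto-gptq.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

base_model_id = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

quant_config = GPTQConfig(
    bits=3,           # 3-bit quantization, as in this model
    dataset="c4",     # assumed calibration dataset
    tokenizer=tokenizer,
)

# The weights are quantized while loading; the result can then be saved or pushed.
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=quant_config,
    device_map="auto",
)
model.push_to_hub("your-username/Llama-2-7b-gptq-3bit")  # hypothetical target repo
```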

## Uses

This model is pre-trained and not fine-tuned. You can fine-tune it with PEFT by attaching adapters (for example, LoRA), as sketched below.
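
A minimal sketch of attaching a LoRA adapter with the `peft` library follows. The hyperparameters, target modules, and repository ID are illustrative assumptions, not the card author's settings.

```python
# Sketch of adapter fine-tuning on top of the GPTQ model with peft.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model = AutoModelForCausalLM.from_pretrained(
    "kaitchup/Llama-2-7b-gptq-3bit",  # assumed repository ID
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,                              # assumed rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed; adjust to your needs
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# Train with your preferred trainer (e.g., transformers.Trainer or trl.SFTTrainer).
```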

## Other versions

- [kaitchup/Llama-2-7b-gptq-4bit](https://huggingface.co/kaitchup/Llama-2-7b-gptq-4bit)
- [kaitchup/Llama-2-7b-gptq-2bit](https://huggingface.co/kaitchup/Llama-2-7b-gptq-2bit)

## Model Card Contact

[The Kaitchup](https://kaitchup.substack.com/)