Chanukya Patnaik committed on
Commit d58c62e
1 Parent(s): 5feaf84

updated readme

Files changed (1)
  1. README.md +4 -0
README.md CHANGED
@@ -1,6 +1,9 @@
 ---
 library_name: peft
 ---
+effi 7b is a 7 billion parameter model built by AI Planet. Inspired by LLaMA, we've built a fine-tuned version of LLaMA 7B with QLoRA. The training procedure and framework versions are provided below along with the model weights.
+
+
 ## Training procedure
 
 
@@ -18,3 +21,4 @@ The following `bitsandbytes` quantization config was used during training:
 
 
 - PEFT 0.5.0.dev0
+
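
For context, a minimal sketch of how a QLoRA/PEFT adapter like this is typically loaded on top of a 4-bit quantized LLaMA-7B base with `bitsandbytes`. The repo IDs and the exact 4-bit settings below are assumptions for illustration only; they are not taken from this commit.

```python
# Hedged sketch: attaching a PEFT (QLoRA) adapter to a 4-bit quantized base model.
# Repo IDs and quantization values are assumptions, not values from this README.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_model_id = "meta-llama/Llama-2-7b-hf"  # assumed LLaMA-7B base checkpoint
adapter_id = "aiplanet/effi-7b"             # assumed adapter repo for this model card

# A bitsandbytes 4-bit config of the kind the README refers to; exact values may differ.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Load the LoRA adapter weights on top of the quantized base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "What is QLoRA?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```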