TheBloke committed on
Commit 0fd4c44
1 Parent(s): 5d29fb2

Update README.md

Files changed (1)
  1. README.md +15 -0
README.md CHANGED
@@ -1,6 +1,10 @@
 ---
 inference: false
 license: other
+language:
+- zh
+- en
+pipeline_tag: text-generation
 ---
 
 <!-- header start -->
@@ -28,12 +32,23 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
 * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
 * [ctransformers](https://github.com/marella/ctransformers)
 
+This model is a Llama conversion of [Baichuan Inc's Baichuan 7B](https://huggingface.co/baichuan-inc/baichuan-7B). It contains the same data, but rewritten by Fire Balloon into the familiar Llama format.
+
 ## Repositories available
 
 * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/baichuan-llama-7B-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/baichuan-llama-7B-GGML)
 * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/fireballoon/baichuan-llama-7b)
 
+## Prompt template
+
+A general prompt template is unknown at this point.
+
+The example given in the README is a 1-shot categorisation:
+```
+Hamlet->Shakespeare\nOne Hundred Years of Solitude->
+```
+
 <!-- compatibility_ggml start -->
 ## Compatibility
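As a rough sketch of how the GGML files and the 1-shot prompt added above might be used with llama-cpp-python (one of the clients listed in the README), the snippet below runs that categorisation prompt against a locally downloaded quantisation. The filename `baichuan-llama-7b.ggmlv3.q4_0.bin` is an assumed example, not one confirmed by this commit, and GGML-format files require a llama-cpp-python release from before the switch to GGUF.

```python
# Minimal sketch, assuming a GGML quantisation has been downloaded locally and a
# GGML-compatible llama-cpp-python version is installed.
from llama_cpp import Llama

# Hypothetical filename for one of the quantised files from
# TheBloke/baichuan-llama-7B-GGML; substitute whichever file you downloaded.
llm = Llama(model_path="baichuan-llama-7b.ggmlv3.q4_0.bin", n_ctx=2048)

# The 1-shot categorisation example from the README: the model is expected to
# complete the second title->author pair.
prompt = "Hamlet->Shakespeare\nOne Hundred Years of Solitude->"

output = llm(prompt, max_tokens=16, stop=["\n"])
print(output["choices"][0]["text"].strip())
```

Since no general prompt template is documented, plain completion-style prompts like this are the only pattern shown in the source README.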