emozilla committed on
Commit 8e7b4ae
Parent: 20dd05e

Create README.md

Files changed (1)
  1. README.md +34 -0
README.md ADDED

---
license: cc-by-sa-4.0
---

## emozilla/landmark-llama-7b

This model is a ready-to-use version of the LLaMA-7B variant of [Landmark Attention](https://arxiv.org/abs/2305.16300).
The code is adapted from the [Landmark GitHub repository](https://github.com/epfml/landmark-attention), and the weights come from [epfml/landmark-attention-llama7b-wdiff](https://huggingface.co/epfml/landmark-attention-llama7b-wdiff).

As a LLaMA variant, this model may be subject to the LLaMA license.

### To use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

tokenizer = AutoTokenizer.from_pretrained("emozilla/landmark-llama-7b", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    "emozilla/landmark-llama-7b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

print(pipe("Somebody once told me the world is gonna roll me",
           max_new_tokens=256, temperature=0.8, do_sample=True))
```

You can configure the Landmark parameters `mem_freq`, `mem_top_k`, `mem_max_seq_len`, and `mem_max_cache_size` by editing the model config before loading.
28
+
29
+ ```python
30
+ config = AutoConfig.from_pretrained("emozilla/landmark-llama-7b", trust_remote_code=True)
31
+ config.mem_top_k = 6
32
+ model = AutoModelForCausalLM.from_pretrained("emozilla/landmark-llama-7b", \
33
+ torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto", config=config)
34
+ ```
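
The reconfigured model can then be used exactly as in the pipeline example above. A minimal sketch (assuming the `tokenizer` from the first snippet is still in scope; the prompt and sampling settings are purely illustrative):

```python
# Rebuild the pipeline around the reconfigured model and generate as before
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe("Somebody once told me the world is gonna roll me",
           max_new_tokens=256, temperature=0.8, do_sample=True))
```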