---
license: apache-2.0
---

Sheared-LLaMA-1.3B is a model pruned and further pre-trained from meta-llama/Llama-2-7b-hf. We dynamically load data from different domains of the RedPajama dataset to prune the model and continue pre-training it. We use 0.4B tokens for pruning and 50B tokens for continued pre-training of the pruned model. The model can be loaded with the Hugging Face Transformers library via

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("princeton-nlp/Sheared-LLaMA-1.3B")
```
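
For example, a minimal end-to-end generation sketch (the tokenizer and `generate` calls below follow the standard Transformers API; the prompt and generation settings are illustrative, not from this model card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The tokenizer is inherited from Llama-2, so AutoTokenizer resolves it directly.
tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/Sheared-LLaMA-1.3B")
model = AutoModelForCausalLM.from_pretrained("princeton-nlp/Sheared-LLaMA-1.3B")

# Encode a prompt and generate a short continuation.
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```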