---
license: mit
---
This model is a fine-tuned version of Llama2-7B using the RAG-LER (Retrieval Augmented Generation with LM-Enhanced Re-ranker) framework, as described in our paper.
## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("notoookay/ragler-llama2-7b")
model = AutoModelForCausalLM.from_pretrained(
    "notoookay/ragler-llama2-7b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Example usage: the model expects an Instruction/Input/Response style prompt.
input_text = (
    "### Instruction:\nAnswer the following question.\n\n"
    "### Input:\nQuestion:\nWhat is the capital of France?\n\n"
    "### Response:\n"
)

# Move inputs to the same device as the model (device_map="auto" may place it on GPU).
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
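Since the model was fine-tuned for retrieval-augmented generation, you will typically want to place retrieved passages in the prompt. The exact context format used during RAG-LER fine-tuning is not documented in this card, so the snippet below (which reuses the `tokenizer` and `model` loaded above) is only a minimal sketch assuming passages are listed in the `### Input` section before the question; adjust it to match the format from the paper.

```python
# Hypothetical RAG-style prompt: passage placement and labels are assumptions,
# not a documented format for this checkpoint.
passages = [
    "Paris is the capital and most populous city of France.",
    "France is a country in Western Europe.",
]
context = "\n".join(f"Passage {i + 1}: {p}" for i, p in enumerate(passages))

rag_prompt = (
    "### Instruction:\nAnswer the following question based on the given passages.\n\n"
    f"### Input:\n{context}\n\nQuestion:\nWhat is the capital of France?\n\n"
    "### Response:\n"
)

inputs = tokenizer(rag_prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)

# Decode only the newly generated tokens so the prompt is not echoed back.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```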
The corresponding re-ranker, which is trained under the supervision of this model, can be found [here](https://huggingface.co/notoookay/ragler-llama2-7b-reranker).
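The re-ranker's architecture and scoring interface are not described in this card, so check its own model card before use. If it exposes a standard cross-encoder classification head producing a single relevance score per (question, passage) pair, scoring retrieved passages might look roughly like the sketch below; the model class and score interpretation are assumptions.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Assumption: the re-ranker loads as a cross-encoder style sequence-classification
# model with one relevance logit per pair. Verify against the re-ranker's card.
rr_tokenizer = AutoTokenizer.from_pretrained("notoookay/ragler-llama2-7b-reranker")
rr_model = AutoModelForSequenceClassification.from_pretrained("notoookay/ragler-llama2-7b-reranker")

question = "What is the capital of France?"
passages = [
    "Paris is the capital and most populous city of France.",
    "France is a country in Western Europe.",
]

pairs = rr_tokenizer(
    [question] * len(passages),
    passages,
    padding=True,
    truncation=True,
    return_tensors="pt",
)

with torch.no_grad():
    scores = rr_model(**pairs).logits.squeeze(-1)

# Higher score = more relevant passage under this assumed interface.
ranked = [p for _, p in sorted(zip(scores.tolist(), passages), reverse=True)]
print(ranked[0])
```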