|
--- |
|
base_model: mistralai/Mistral-7B-v0.1 |
|
license: apache-2.0 |
|
--- |
|
|
|
# Code Explainer |
|
|
|
The model explains Python code in plain English.
|
|
|
|
|
# Model Details |
|
Trained by: AllStax Technologies
|
Model type: CodeExplainer-7b-v0.1 is a causal language model fine-tuned from mistralai/Mistral-7B-v0.1.
|
Language(s): English |
|
We fine-tuned the base model on data generated by GPT-3.5 and other models.
|
|
|
# Prompting |
|
The model expects an Alpaca-style prompt template:
|
``` |
|
### Instruction: |
|
|
|
<prompt> |
|
|
|
### Response: |
|
``` |
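The template above can be filled programmatically. Below is a minimal sketch with an assumed helper name (`build_prompt`, not part of the release); the exact blank-line spacing is inferred from the template shown above.

```python
def build_prompt(code: str) -> str:
    """Wrap a Python snippet in the Alpaca-style template the model expects."""
    return f"### Instruction:\n\n{code}\n\n### Response:\n"

# Example: ask the model to explain a small function.
prompt = build_prompt("def add(a, b):\n    return a + b")
print(prompt)
```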
|
|
|
# Loading the model |
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allstax/CodeExplainer-7b-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```
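Once loaded, the model can be combined with the prompt template to produce an explanation. The sketch below is a hedged end-to-end example, not an official recipe: the `explain_code` helper, the `max_new_tokens` value, and the exact template whitespace are assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

def explain_code(code: str, model_id: str = "allstax/CodeExplainer-7b-v0.1") -> str:
    """Wrap `code` in the Alpaca-style template and generate an explanation.

    Hypothetical helper; parameter choices are illustrative, not official.
    """
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    prompt = f"### Instruction:\n\n{code}\n\n### Response:\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Decode only the newly generated tokens, dropping the echoed prompt.
    new_tokens = output_ids[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

if __name__ == "__main__":
    print(explain_code("def add(a, b):\n    return a + b"))
```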