BRisa 7B Instruct
This is an instruction-tuned model trained for strong performance in Portuguese. The starting point is the Mistral 7B v0.2 base model (source). We used the JJhooww/Mistral-7B-v0.2-Base_ptbr version, which was further pre-trained on 1 billion tokens of Portuguese text (source).
The base model performs well in Portuguese but struggles to follow instructions. We therefore took mistralai/Mistral-7B-Instruct-v0.2, fine-tuned it for responses in Portuguese, and then merged it with the base JJhooww/Mistral-7B-v0.2-Base_ptbr (https://huggingface.co/JJhooww/Mistral-7B-v0.2-Base_ptbr).
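The card does not state which tooling was used for the merge step described above. As an illustration only, such a merge is often expressed as a mergekit configuration; the merge method (slerp), interpolation weight, and layer range below are assumptions, not the authors' actual recipe:

```yaml
# Hypothetical mergekit config: blend the Portuguese-tuned instruct model
# with the Portuguese-continued-pretraining base. All values are illustrative.
slices:
  - sources:
      - model: mistralai/Mistral-7B-Instruct-v0.2   # instruct weights (after PT fine-tune)
        layer_range: [0, 32]
      - model: JJhooww/Mistral-7B-v0.2-Base_ptbr    # Portuguese-pretrained base
        layer_range: [0, 32]
merge_method: slerp
base_model: JJhooww/Mistral-7B-v0.2-Base_ptbr
parameters:
  t: 0.5          # interpolation weight, illustrative
dtype: bfloat16
```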
Model Sources
- Demo: (Demo of the DPO version)
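Beyond the hosted demo, the model can be run locally. A minimal sketch using the Hugging Face `transformers` library follows; the generation settings (`temperature`, `max_new_tokens`) are illustrative defaults, not recommendations from the model authors:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "J-LAB/BRisa-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Mistral-style chat template: a list of role/content messages.
messages = [{"role": "user",
             "content": "Explique brevemente o que é aprendizado de máquina."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```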
Open Portuguese LLM Leaderboard Evaluation Results
Detailed results can be found here and on the 🚀 Open Portuguese LLM Leaderboard.
| Task | Metric | Value |
|---|---|---|
| Average | - | 66.19 |
| ENEM Challenge (No Images) | accuracy | 65.08 |
| BLUEX (No Images) | accuracy | 53.69 |
| OAB Exams | accuracy | 43.37 |
| Assin2 RTE (test set) | f1-macro | 91.50 |
| Assin2 STS (test set) | pearson | 73.61 |
| FaQuAD NLI (test set) | f1-macro | 68.31 |
| HateBR Binary (test set) | f1-macro | 74.28 |
| PT Hate Speech Binary (test set) | f1-macro | 65.12 |
| tweetSentBR (test set) | f1-macro | 60.77 |