Don't be upsetti, here, have some spaghetti! Att: A'eala <3

<p><strong><font size="5">Information</font></strong></p>

GPT4-X-Alpasta-30b works with Oobabooga's Text Generation WebUI and KoboldAI.
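It also loads with the plain Hugging Face transformers API. A minimal sketch, assuming the repo id `MetaIX/GPT4-X-Alpasta-30b` (taken from the leaderboard details link below) and an Alpaca-style prompt, which is an assumption based on the GPT4-Alpaca half of the merge:

```python
# Minimal loading sketch with plain transformers. The repo id is assumed from
# the leaderboard details link below; the Alpaca-style prompt format is an
# assumption, not a documented requirement of this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MetaIX/GPT4-X-Alpasta-30b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # FP16, as benchmarked below
    device_map="auto",          # needs accelerate; shards across available GPUs
)

prompt = "### Instruction:\nWrite a short story about spaghetti.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```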
<p>This is an attempt at improving Open Assistant's performance as an instruct model while retaining its excellent prose. The merge consists of <a href="https://huggingface.co/chansung/gpt4-alpaca-lora-30b">Chansung's GPT4-Alpaca LoRA</a> and <a href="https://huggingface.co/OpenAssistant/oasst-sft-6-llama-30b-xor">Open Assistant's native fine-tune</a>.</p>
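For the curious, a minimal sketch of how a LoRA-onto-fine-tune merge like this is typically produced with the peft library. The exact recipe used for this model is not documented here, so treat this as illustrative only; the local path is hypothetical, since the linked Open Assistant repo ships XOR deltas that must first be decoded against the original LLaMA-30B weights:

```python
# Illustrative sketch of a LoRA-onto-fine-tune merge with peft; this is NOT the
# documented recipe for this model, just the usual way such merges are made.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Hypothetical local path: the linked Open Assistant repo ships XOR deltas
# that must first be decoded against the original LLaMA-30B weights.
base = AutoModelForCausalLM.from_pretrained(
    "path/to/oasst-sft-6-llama-30b", torch_dtype=torch.float16
)
# Apply Chansung's GPT4-Alpaca LoRA, then fold its weights into the base model
merged = PeftModel.from_pretrained(base, "chansung/gpt4-alpaca-lora-30b")
merged = merged.merge_and_unload()
merged.save_pretrained("GPT4-X-Alpasta-30b")
```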
<p><strong><font size="5">Benchmarks</font></strong></p>

<p><strong><font size="4">FP16</font></strong></p>

Perplexity (lower is better):

<strong>Wikitext2</strong>: 4.6077961921691895

<strong>Ptb-New</strong>: 9.41549301147461

<strong>C4-New</strong>: 6.98392915725708

<p>Benchmarks brought to you by A'eala</p>
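These are standard perplexity evaluations over fixed-size context chunks. A minimal sketch of the usual WikiText-2 measurement with transformers and datasets; the harness and settings behind the numbers above aren't specified, so results from this sketch may differ slightly:

```python
# Stride-based perplexity sketch on WikiText-2; the repo id is assumed from the
# leaderboard details link below. Settings here (2048-token non-overlapping
# chunks) are an assumption, not necessarily what produced the numbers above.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MetaIX/GPT4-X-Alpasta-30b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
encodings = tokenizer("\n\n".join(test["text"]), return_tensors="pt")

ctx = 2048  # LLaMA-1 context window
nlls, n_tokens = [], 0
for begin in range(0, encodings.input_ids.size(1), ctx):
    input_ids = encodings.input_ids[:, begin:begin + ctx].to(model.device)
    if input_ids.size(1) < 2:
        break
    with torch.no_grad():
        # labels == input_ids: the model shifts internally, returns mean CE loss
        loss = model(input_ids, labels=input_ids).loss
    nlls.append(loss * (input_ids.size(1) - 1))  # re-weight by target count
    n_tokens += input_ids.size(1) - 1

ppl = torch.exp(torch.stack(nlls).sum() / n_tokens)
print(f"WikiText-2 perplexity: {ppl.item():.4f}")
```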
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MetaIX__GPT4-X-Alpasta-30b)
| Metric              | Value |
|---------------------|-------|
| Avg.                | 57.85 |
| ARC (25-shot)       | 63.05 |
| HellaSwag (10-shot) | 83.56 |
| MMLU (5-shot)       | 57.71 |
| TruthfulQA (0-shot) | 51.52 |
| Winogrande (5-shot) | 78.22 |
| GSM8K (5-shot)      | 30.48 |
| DROP (3-shot)       | 40.38 |
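Individual rows can be reproduced with EleutherAI's lm-evaluation-harness. A sketch for the ARC row; the leaderboard pins its own harness version and task configs, so scores obtained this way may not match the table exactly:

```python
# Sketch using EleutherAI's lm-evaluation-harness (pip install lm-eval).
# The leaderboard runs a pinned harness version and task configs, so numbers
# reproduced this way may differ slightly from the table above.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=MetaIX/GPT4-X-Alpasta-30b,dtype=float16",
    tasks=["arc_challenge"],  # ARC row; swap in hellaswag, mmlu, etc. for others
    num_fewshot=25,           # matches the 25-shot ARC setting
)
print(results["results"]["arc_challenge"])
```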