Don't be upsetti, here, have some spaghetti! Att: A'eala &lt;3
<p><strong><font size="5">Information</font></strong></p>
<p>GPT4-X-Alpasta-30b works with Oobabooga's Text Generation WebUI and KoboldAI.</p>
<p>This is an attempt at improving Open Assistant's performance as an instruct while retaining its excellent prose. The merge consists of <a href="https://huggingface.co/chansung/gpt4-alpaca-lora-30b">Chansung's GPT4-Alpaca Lora</a> and <a href="https://huggingface.co/OpenAssistant/oasst-sft-6-llama-30b-xor">Open Assistant's native fine-tune</a>.</p>
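Conceptually, merging a LoRA adapter into a base model adds the adapter's low-rank weight update directly into the base weights. A minimal numerical sketch (illustrative shapes, rank, and scaling; not the actual 30B merge):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2    # hidden size and LoRA rank (illustrative values)
alpha = 4      # LoRA scaling hyperparameter (illustrative)

W = rng.normal(size=(d, d))   # base model weight matrix
A = rng.normal(size=(r, d))   # LoRA down-projection
B = rng.normal(size=(d, r))   # LoRA up-projection

# Merging "bakes" the adapter into the base weight: W' = W + (alpha / r) * B @ A.
# After this, the model runs with no adapter overhead.
W_merged = W + (alpha / r) * B @ A
assert W_merged.shape == W.shape
```

In practice this is what tooling such as PEFT's `merge_and_unload()` does for every adapted layer; the sketch above just shows the arithmetic on one weight matrix.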
<p><strong><font size="5">Benchmarks</font></strong></p>
<p><strong><font size="4">FP16 Perplexity (lower is better)</font></strong></p>
<strong>Wikitext2</strong>: 4.6077961921691895
<strong>Ptb-New</strong>: 9.41549301147461
<strong>C4-New</strong>: 6.98392915725708
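The Wikitext2, PTB-New, and C4-New scores above are perplexity values: the exponential of the model's mean negative log-likelihood over a held-out corpus. A minimal sketch of the computation, using hypothetical per-token probabilities rather than real model outputs:

```python
import math

def perplexity(token_probs):
    # Perplexity = exp(mean negative log-likelihood) over the sequence.
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

# Hypothetical probabilities the model assigned to each true next token.
probs = [0.5, 0.25, 0.125, 0.5]
print(perplexity(probs))  # geometric mean of 1/p, here 128**0.25 ≈ 3.36
```

Real evaluations do the same thing at scale, summing token log-probabilities from the model's logits over the whole benchmark corpus.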
<p>Benchmarks brought to you by A'eala</p>
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MetaIX__GPT4-X-Alpasta-30b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 57.85 |
| ARC (25-shot) | 63.05 |
| HellaSwag (10-shot) | 83.56 |
| MMLU (5-shot) | 57.71 |
| TruthfulQA (0-shot) | 51.52 |
| Winogrande (5-shot) | 78.22 |
| GSM8K (5-shot) | 30.48 |
| DROP (3-shot) | 40.38 |