---
base_model: mobiuslabsgmbh/aanaphi2-v0.1
license: mit
train: false
inference: false
language:
- en
model_creator: mobiuslabsgmbh
model_name: aanaphi2-v0.1
model_type: phi
pipeline_tag: text-generation
quantized_by: brittlewis12
---

![elephant-samurai](https://cdn-uploads.huggingface.co/production/uploads/636b945ef575d3705149e982/pIeboaaroFY5fpomUADrS.gif)

# aanaphi2-v0.1 GGUF

**Original model**: [aanaphi2-v0.1](https://huggingface.co/mobiuslabsgmbh/aanaphi2-v0.1)

**Model creator**: [mobiuslabsgmbh](https://huggingface.co/mobiuslabsgmbh)

This repo contains GGUF format model files for Mobius Labs’ aanaphi2-v0.1.

> aanaphi2-v0.1 is a finetuned (SFT + DPO) chat model based on [Microsoft's Phi-2 base model](https://huggingface.co/microsoft/phi-2) (2.8B parameters).

### What is GGUF?

GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st, 2023. It replaces GGML, which is no longer supported by llama.cpp.

Converted using llama.cpp build 2276 (revision [b11a93d](https://github.com/ggerganov/llama.cpp/commit/b11a93df41921846a10628a7c306d5c82a549939))

### Prompt template

```
### Human: {{prompt}}
### Assistant:
```

---

## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac!

![cnvrs.ai](https://pbs.twimg.com/profile_images/1744049151241797632/0mIP-P9e_400x400.jpg)

[cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device:
- create & save **Characters** with custom system prompts & temperature settings
- download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)!
- make it your own with custom **Theme colors**
- powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming!
- **try it out** yourself today, on [TestFlight](https://testflight.apple.com/join/sFWReS7K)!
- follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date

---

## Original Model Evaluation

| Models              | phi-2 | aanaphi2-v0.1 |
|---------------------|-------|---------------|
| ARC (25-shot)       | 61.09 | 63.74         |
| HellaSwag (10-shot) | 75.11 | 78.30         |
| MMLU (5-shot)       | 58.11 | 57.70         |
| TruthfulQA-MC2      | 44.47 | 51.56         |
| Winogrande (5-shot) | 74.35 | 73.40         |
| GSM8K (5-shot)      | 54.81 | 58.61         |
| **Average**         | 61.33 | 63.89         |
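The prompt template above can be applied programmatically before passing text to a llama.cpp-based runtime. A minimal sketch, assuming a hypothetical helper (`format_prompt` is not part of this repo or of llama.cpp, just an illustration of the template's structure):

```python
def format_prompt(user_message: str) -> str:
    """Wrap a single user message in the aanaphi2-v0.1 chat template:
    '### Human: {{prompt}}' on one line, '### Assistant:' on the next,
    leaving the model to generate the assistant's reply after the colon."""
    return f"### Human: {user_message}\n### Assistant:"


# Example: the string you would send as the raw prompt
prompt = format_prompt("What is GGUF?")
print(prompt)
```

The trailing `### Assistant:` (with no newline after it) cues the model to continue with its answer; the same string can also serve as a stop sequence to end generation before the model invents another `### Human:` turn.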