MaziyarPanahi commited on
Commit f382516 • 1 Parent(s): 80338f8

Update README.md (#1)


- Update README.md (743327f78e57d60ee74d1364a23c4bd318ae9e2d)

Files changed (1):
  1. README.md +77 -3
README.md CHANGED
@@ -1,3 +1,77 @@
- ---
- license: llama3
- ---
 
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- text-generation
- conversational
- function-calling
- text-generation-inference
- region:us
model_name: MaziyarPanahi/firefunction-v2-GGUF
base_model: fireworks-ai/firefunction-v2
inference: false
model_creator: fireworks-ai
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
license: llama3
---
# [MaziyarPanahi/firefunction-v2-GGUF](https://huggingface.co/MaziyarPanahi/firefunction-v2-GGUF)
- Model creator: [fireworks-ai](https://huggingface.co/fireworks-ai)
- Original model: [fireworks-ai/firefunction-v2](https://huggingface.co/fireworks-ai/firefunction-v2)

## Description
[MaziyarPanahi/firefunction-v2-GGUF](https://huggingface.co/MaziyarPanahi/firefunction-v2-GGUF) contains GGUF-format model files for [fireworks-ai/firefunction-v2](https://huggingface.co/fireworks-ai/firefunction-v2).

### About GGUF

GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
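GGUF files are easy to identify on disk: per the published GGUF specification, every file begins with the 4-byte magic `GGUF` followed by a little-endian `uint32` format version. A minimal Python sketch of such a sniff check (the helper name is ours, not part of any library):

```python
import struct

def sniff_gguf(path):
    """Return the GGUF format version if `path` looks like a GGUF file, else None.

    Per the GGUF spec, a file starts with the 4-byte magic b"GGUF"
    followed by a little-endian uint32 format version.
    """
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) < 8 or header[:4] != b"GGUF":
        return None
    (version,) = struct.unpack("<I", header[4:8])
    return version
```

Running this on one of the `.gguf` files in this repository should return the format version (typically 3 for recent llama.cpp conversions) rather than `None`.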

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux is available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible AI server. Note that as of this writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.

Original README
---
# FireFunction V2: Fireworks Function Calling Model

[**Try on Fireworks**](https://fireworks.ai/models/fireworks/firefunction-v2) | [**API Docs**](https://readme.fireworks.ai/docs/function-calling) | [**Demo App**](https://functional-chat.vercel.app/) | [**Discord**](https://discord.gg/mMqQxvFD9A)

<img src="https://cdn-uploads.huggingface.co/production/uploads/64b6f3a72f5a966b9722de88/nJNtxLzWswBDKK1iOZblb.png" alt="firefunction" width="400"/>

FireFunction is a state-of-the-art function-calling model with a commercially viable license. View detailed info in our [announcement blog](https://fireworks.ai/blog/firefunction-v2-launch-post). Key info and highlights:

**Comparison with other models:**
- Competitive with GPT-4o at function calling, scoring 0.81 vs. 0.80 on a medley of public evaluations
- Trained on Llama 3 and retains Llama 3's conversation and instruction-following capabilities, scoring 0.84 vs. Llama 3's 0.89 on MT-Bench
- Significant quality improvements over FireFunction v1 across a broad range of metrics

**General info:**

🐾 Successor to the [FireFunction](https://fireworks.ai/models/fireworks/firefunction-v1) model

🔆 Supports parallel function calling (unlike FireFunction v1) and good instruction following

💡 Hosted on the [Fireworks](https://fireworks.ai/models/fireworks/firefunction-v2) platform at less than 10% of the cost of GPT-4o and with 2x the speed
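
Function-calling models like this one are commonly served behind an OpenAI-compatible chat-completions API, where a function call boils down to attaching JSON-schema tool definitions to the request. The sketch below only builds such a request body; the model id, the `get_weather` tool, and the field layout follow the OpenAI `tools` convention and are illustrative assumptions, not taken from the Fireworks docs:

```python
import json

# Hypothetical tool definition in the OpenAI-style "tools" schema.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical function name
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

payload = {
    "model": "accounts/fireworks/models/firefunction-v2",  # assumed model id
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"},
    ],
    "tools": [get_weather_tool],
    "tool_choice": "auto",  # let the model decide whether to call a tool
}

body = json.dumps(payload)
```

POSTing a body like this to a provider's chat-completions endpoint would, for a function-calling model, typically yield a `tool_calls` entry naming the tool and its JSON arguments; the exact response shape depends on the serving stack.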