solidrust/Memgpt-3x7b-MOE-AWQ
Tags: Text Generation · Transformers · Safetensors · English · mixtral · 4-bit precision · AWQ · Inference Endpoints · Mixture of Experts · frankenmoe · Merge · mergekit · lazymergekit · conversational · text-generation-inference
License: apache-2.0
liminerity/Memgpt-3x7b-MOE AWQ
Model creator: liminerity
Original model: Memgpt-3x7b-MOE
Model Summary
Memgpt-3x7b-MOE is a Mixture of Experts (MoE) model made with the following models using LazyMergekit:
- starsnatched/MemGPT-DPO
- starsnatched/MemGPT-3
- starsnatched/MemGPT
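This AWQ repo loads through the Transformers AWQ integration. The snippet below is a minimal usage sketch, not an official example from this card: it assumes the `autoawq` backend is installed and that the tokenizer ships a chat template; the generation settings are illustrative.

```python
# Minimal usage sketch for the AWQ-quantized model, assuming
# `transformers` plus the `autoawq` backend are installed
# (pip install transformers autoawq). Generation settings are
# illustrative, not values from the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/Memgpt-3x7b-MOE-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # place the 4-bit AWQ weights on available GPUs
)

# The repo is tagged "conversational", so format the prompt with the
# tokenizer's chat template (assumed to be present).
messages = [{"role": "user", "content": "Hello! Who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```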
Model size: 2.7B params (Safetensors)
Tensor types: I32 · FP16
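The I32 tensors hold the packed 4-bit AWQ weights, while FP16 covers scales and unquantized layers. For reference, a quantization like this is typically produced with AutoAWQ along the lines of the sketch below; the config values are common AWQ defaults, not settings confirmed by this card.

```python
# Hedged sketch of producing an AWQ quantization with AutoAWQ
# (pip install autoawq). The quant_config values are common AWQ
# defaults, not settings confirmed by this model card.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

src = "liminerity/Memgpt-3x7b-MOE"  # original FP16 merge
dst = "Memgpt-3x7b-MOE-AWQ"         # output directory
quant_config = {
    "zero_point": True,   # asymmetric quantization
    "q_group_size": 128,  # one scale per 128-weight group
    "w_bit": 4,           # 4-bit weights, packed into I32 tensors
    "version": "GEMM",    # AWQ GEMM kernel layout
}

model = AutoAWQForCausalLM.from_pretrained(src)
tokenizer = AutoTokenizer.from_pretrained(src)

model.quantize(tokenizer, quant_config=quant_config)  # calibrates on a default dataset
model.save_quantized(dst)
tokenizer.save_pretrained(dst)
```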
Inference API (serverless) has been turned off for this model.
Part of the 3x7B AWQ collection (Mixture of experts, 3 x 7B · 5 items · updated Apr 25).