---
base_model: liminerity/Memgpt-3x7b-MOE
inference: false
language:
- en
library_name: transformers
license: apache-2.0
merged_models:
- starsnatched/MemGPT-DPO
- starsnatched/MemGPT-3
- starsnatched/MemGPT
pipeline_tag: text-generation
quantized_by: Suparious
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
- safetensors
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- starsnatched/MemGPT-DPO
- starsnatched/MemGPT-3
- starsnatched/MemGPT
---
# liminerity/Memgpt-3x7b-MOE AWQ

- Model creator: [liminerity](https://huggingface.co/liminerity)
- Original model: [Memgpt-3x7b-MOE](https://huggingface.co/liminerity/Memgpt-3x7b-MOE)

## Model Summary
Memgpt-3x7b-MOE is a Mixture of Experts (MoE) model built from the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [starsnatched/MemGPT-DPO](https://huggingface.co/starsnatched/MemGPT-DPO)
* [starsnatched/MemGPT-3](https://huggingface.co/starsnatched/MemGPT-3)
* [starsnatched/MemGPT](https://huggingface.co/starsnatched/MemGPT)
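
The exact merge configuration is not reproduced here. As a rough illustration of how a frankenmoe like this is assembled, a mergekit-moe config over these three experts might look like the sketch below; the base model, gate mode, dtype, and gating prompts are assumptions for illustration, not the settings actually used:

```yaml
# Hypothetical mergekit-moe config; base_model, gate_mode, dtype, and
# positive_prompts are illustrative placeholders, not the actual merge settings.
base_model: starsnatched/MemGPT
gate_mode: cheap_embed
dtype: bfloat16
experts:
  - source_model: starsnatched/MemGPT-DPO
    positive_prompts:
      - "follow the user's instructions carefully"
  - source_model: starsnatched/MemGPT-3
    positive_prompts:
      - "recall and update stored information"
  - source_model: starsnatched/MemGPT
    positive_prompts:
      - "manage long-term conversational memory"
```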
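
## How to Use

The card's metadata declares a transformers-compatible checkpoint with 4-bit AWQ weights. The snippet below is a minimal loading sketch, assuming the quant is published under a repo id like `solidrust/Memgpt-3x7b-MOE-AWQ` (a placeholder; substitute the actual repository path) and that `autoawq` is installed alongside a recent `transformers`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id for this AWQ quant; replace with the actual repository path.
model_id = "solidrust/Memgpt-3x7b-MOE-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Recent transformers versions load AWQ checkpoints directly when autoawq is installed.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
)

prompt = "Explain what a Mixture of Experts model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```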