TigerBot

A cutting-edge foundation for your very own LLM.

🌐 TigerBot • 🤗 Hugging Face

This is a 4-bit GPTQ version of tigerbot-180b-research.

It was quantized to 4-bit using the GPTQ code at https://github.com/TigerResearch/TigerBot/tree/main/gptq.

For instructions on downloading and using this model, see https://github.com/TigerResearch/TigerBot.

Run the following commands to clone TigerBot and install its dependencies:

conda create --name tigerbot python=3.8
conda activate tigerbot
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
git clone https://github.com/TigerResearch/TigerBot
cd TigerBot
pip install -r requirements.txt

Inference with command line interface

cd TigerBot/gptq
CUDA_VISIBLE_DEVICES=0,1 python tigerbot_infer.py ${MODEL_DIR} --wbits 4 --groupsize 128 --load "${MODEL_DIR}/tigerbot-4bit-128g-*.pt"
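
To illustrate what the `--wbits 4 --groupsize 128` flags mean, here is a minimal NumPy sketch of group-wise 4-bit round-to-nearest quantization. This is a conceptual illustration only: the actual GPTQ algorithm additionally uses second-order (Hessian-based) error compensation when rounding weights, and the function names below are hypothetical, not part of the TigerBot codebase.

```python
import numpy as np

def quantize_group(w, bits=4):
    # Map one group of weights onto the integer grid [0, 2**bits - 1]
    # using a per-group scale and zero point (asymmetric quantization).
    qmax = 2 ** bits - 1
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / qmax if hi > lo else 1.0
    q = np.clip(np.round((w - lo) / scale), 0, qmax).astype(np.uint8)
    return q, scale, lo

def dequantize_group(q, scale, lo):
    # Reconstruct approximate float weights from 4-bit codes.
    return q.astype(np.float32) * scale + lo

# Quantize a 256-element weight row in groups of 128,
# mirroring --wbits 4 --groupsize 128.
rng = np.random.default_rng(0)
row = rng.standard_normal(256).astype(np.float32)
groups = [quantize_group(row[i:i + 128]) for i in range(0, 256, 128)]
recon = np.concatenate([dequantize_group(*g) for g in groups])
err = float(np.abs(row - recon).max())
```

Each group of 128 weights stores only 4-bit codes plus one scale and zero point, which is where the roughly 8x memory reduction over fp32 (4x over fp16) comes from; the reconstruction error per weight is bounded by half a quantization step.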