Possibility of 13b weights?

#4
by andysalerno - opened

Hi, first of all, thank you for the many models you have generated and shared!

Are there any plans to do the same quantization for the 13b LoRA released by chansung? I mean either of these two:

https://huggingface.co/chansung/gpt4-alpaca-lora-13b
https://huggingface.co/chansung/gpt4-alpaca-lora-13b-decapoda-1024

I'm asking because this model (TheBloke/gpt4-alpaca-lora-30B-GPTQ-4bit-128g) is the best one I've used so far, and I've used many :D Unfortunately, it only fits on the A5000 instance I'm renting, whereas a 13B version should fit on my 3080 12GB.

Normally I hate it when internet strangers make requests like "please do this thing for me for free!", but you seem to have the resources, so I figured I'd ask whether this is something you're planning.

Hi there. You're welcome, glad they've been helpful.

I'd be glad to do GPTQs of those. To be honest I hadn't noticed they'd been put up at the same time.

It will be interesting to see how they compare to the current leaders in the 13B field, Vicuna 1.1 and Koala.

I'll try to get them up by later today.
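(Aside: the usual workflow for producing a GPTQ from a LoRA release is to merge the adapter into the base fp16 weights first, then quantize the merged model. Below is a minimal sketch of that merge step, assuming the transformers and peft libraries; the base-model repo and output path are placeholders, not the exact process used here.)

```python
# Rough sketch: merge a LoRA adapter into the base model before GPTQ quantization.
# Repo IDs and paths are illustrative placeholders.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base = LlamaForCausalLM.from_pretrained(
    "huggyllama/llama-13b",           # base 13B weights (placeholder repo)
    torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(
    base,
    "chansung/gpt4-alpaca-lora-13b",  # the LoRA adapter linked above
)
model = model.merge_and_unload()      # fold the LoRA deltas into the base weights

tokenizer = LlamaTokenizer.from_pretrained("huggyllama/llama-13b")
model.save_pretrained("gpt4-alpaca-13b-merged")
tokenizer.save_pretrained("gpt4-alpaca-13b-merged")

# The merged fp16 checkpoint can then be quantized with a GPTQ tool
# (e.g. 4-bit, group size 128) to produce weights like the 30B repo above.
```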

Thanks, it's much appreciated :)

andysalerno changed discussion status to closed
