This is a GPTQ-quantized version of airo-llongma-2-13B-16k.

To run this model, set compress_pos_emb to 4 so the correct RoPE scaling is applied. The maximum context length (max_ctx_len) is 16384 tokens.
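The relationship between compress_pos_emb and the extended context can be sketched as follows. This is a minimal illustration of linear RoPE position compression, assuming the Llama-2 base context of 4096 tokens; it is not the model's actual implementation, just the arithmetic behind the setting.

```python
def scaled_positions(seq_len: int, factor: float) -> list[float]:
    """Compress token positions by `factor` before computing rotary embeddings."""
    return [i / factor for i in range(seq_len)]

base_ctx = 4096                        # Llama-2 trained context window (assumption)
factor = 4.0                           # compress_pos_emb = 4
extended_ctx = int(base_ctx * factor)  # 16384, the model's max context

positions = scaled_positions(extended_ctx, factor)
# Every compressed position stays within the range the model was trained on:
assert max(positions) < base_ctx
```

With factor 4, position 16383 is compressed to 4095.75, so all 16384 positions fall inside the original 4096-token training range.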

Branches:

  • main: 4 bits, groupsize 128, act order false
  • 4bit-32g-actorder: 4 bits, groupsize 32, act order true