
[Request #59]

Request description:
"An experimental model that turned really well. Scores high on Chai leaderboard (slerp8bv2 there). Feel smarter than average L3 merges for RP."

Model page:
R136a1/Bungo-L3-8B

Use with the latest version of KoboldCpp, or this alternative fork if you have issues.
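
If you prefer loading a quant programmatically instead of through KoboldCpp, the sketch below uses the llama-cpp-python bindings. This is an assumption on my part rather than something from the original card, and the filename `Bungo-L3-8B-Q4_K_M-imat.gguf` is hypothetical; substitute whichever quant file you download.

```python
# Minimal sketch: loading one of these GGUF quants with llama-cpp-python.
# The model filename below is hypothetical; replace it with the quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Bungo-L3-8B-Q4_K_M-imat.gguf",  # hypothetical filename
    n_ctx=8192,        # Llama 3 8B supports an 8k context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; use 0 for CPU-only
)

# Generate a short reply using the chat-completion helper.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful roleplay assistant."},
        {"role": "user", "content": "Introduce your character in two sentences."},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Lower-bit quants (3-bit, 4-bit) trade some output quality for a smaller memory footprint; the recommended read below covers that trade-off in more detail.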

[Image: general chart with relative quant performance.]

Recommended read:

"Which GGUF is right for me? (Opinionated)" by Artefact2

[Image: first graph from "Which GGUF is right for me? (Opinionated)" by Artefact2.]

Available GGUF quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.
