arxiv:2406.14051

How Many Parameters Does it Take to Change a Light Bulb? Evaluating Performance in Self-Play of Conversational Games as a Function of Model Characteristics

Published on Jun 20 · Submitted by sherzod-hakimov on Jun 25

Abstract

What makes a good Large Language Model (LLM)? That it performs well on the relevant benchmarks -- which hopefully measure, with some validity, the presence of capabilities that are also challenged in real applications. But what makes the model perform well? What gives a model its abilities? We take a recently introduced type of benchmark that is meant to challenge capabilities in a goal-directed, agentive context through self-play of conversational games, and analyse how performance develops as a function of model characteristics like number of parameters, or type of training. We find that while there is a clear relationship between number of parameters and performance, there is still a wide spread of performance points within a given size bracket, which is to be accounted for by training parameters such as fine-tuning data quality and method. From a more practical angle, we also find a certain degree of unpredictability about performance across access methods, possibly due to unexposed sampling parameters, and a very welcome performance stability against at least moderate weight quantisation during inference.

Community

Paper author · Paper submitter

The paper discusses recipes for achieving high scores on a recently proposed benchmark called clembench, which benchmarks LLMs through self-play of dialogue games, where each game can encapsulate several capabilities to be tested.
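
To make the self-play setup concrete, here is a minimal sketch of how such a dialogue-game evaluation can be wired up. This is not clembench's actual implementation: the Taboo-style game, the `Player` class, the prompts, and the use of an OpenAI-compatible endpoint are illustrative assumptions.

```python
# Minimal sketch of self-play evaluation with a dialogue game, in the spirit
# of clembench. NOT the clembench code: the game, prompts, and Player class
# are illustrative. Assumes an OpenAI-compatible endpoint (set base_url/api_key).
from openai import OpenAI

client = OpenAI()

def chat(model: str, messages: list[dict]) -> str:
    resp = client.chat.completions.create(model=model, messages=messages, temperature=0.0)
    return resp.choices[0].message.content

class Player:
    """One conversational role, keeping its own dialogue history."""
    def __init__(self, model: str, system_prompt: str):
        self.model = model
        self.messages = [{"role": "system", "content": system_prompt}]

    def respond(self, message: str) -> str:
        self.messages.append({"role": "user", "content": message})
        reply = chat(self.model, self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

def play_taboo(model: str, target: str, forbidden: list[str], max_turns: int = 3) -> float:
    """Self-play a Taboo-like game; return 1.0 on success, 0.0 otherwise.

    A real benchmark would also distinguish rule violations ("aborted")
    from honest failures ("played but lost") and average over many episodes.
    """
    describer = Player(model, f"Describe the word '{target}' without using it or: {', '.join(forbidden)}.")
    guesser = Player(model, "You will get clues. Answer with exactly one word: your guess.")

    clue = describer.respond("Give your first clue.")
    for _ in range(max_turns):
        if any(w in clue.lower() for w in [target, *forbidden]):
            return 0.0  # rule violation: abort the game
        guess = guesser.respond(clue)
        if guess.strip().strip(".").lower() == target:
            return 1.0
        clue = describer.respond(f"The guess was '{guess}'. Give another clue.")
    return 0.0

score = play_taboo("meta-llama/Meta-Llama-3-8B-Instruct", "umbrella", ["rain", "cover", "handle"])
print(f"Episode score: {score}")
```

Because the same model plays both roles, an episode probes instruction following, rule adherence, and goal-directed dialogue at once, which is the point of this style of benchmark.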

The paper outlines that models with a larger number of parameters tend to perform better, BUT:

  1. performance also depends heavily on the source of the training data and on the training method (RLHF, DPO, PPO, etc.);
  2. quantized models (8-bit) perform nearly as well as full-precision models (a loading sketch follows this list);
  3. not all platforms serving the same model over an API yield the same results.
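
For point 2, here is a minimal sketch (not from the paper) of how an 8-bit variant of the same checkpoint can be loaded next to the full-precision weights with transformers and bitsandbytes, so both can be run through the same benchmark. The checkpoint name is only an example; the paper's exact inference setup may differ.

```python
# Sketch: load the same checkpoint in bf16 and in 8-bit (bitsandbytes).
# The model id is an example, not necessarily the one used in the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Full-precision (bf16) baseline.
model_fp = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# 8-bit quantized variant.
model_int8 = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# Quick sanity check: generate with both and compare outputs/benchmark scores.
prompt = "Give a one-word clue for 'umbrella' without saying 'rain'."
for name, model in [("bf16", model_fp), ("int8", model_int8)]:
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=16, do_sample=False)
    print(name, "->", tokenizer.decode(out[0], skip_special_tokens=True))
```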

Graphic for point #1:
(image)

Here are tables comparing the same models on different API backends, along with quantized-model results. It can be seen that the same Llama-3 models sometimes get different scores depending on which platform hosts them. The next table shows that the 8-bit quantized model variants are about as good as the full-precision model weights.
(image)
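
To illustrate the "same model, different platform" comparison, here is a minimal sketch of sending one fixed prompt with pinned sampling settings to two OpenAI-compatible backends. The provider names, base URLs, keys, and model ids below are placeholders, not the providers evaluated in the paper.

```python
# Sketch: query what is nominally the same Llama-3 checkpoint through two
# different OpenAI-compatible providers with identical requests. URLs, keys,
# and model names are placeholders; real providers differ in naming and in
# which sampling parameters they actually expose, which is one possible source
# of the score differences observed in the paper.
from openai import OpenAI

BACKENDS = {
    "provider_a": {"base_url": "https://provider-a.example/v1", "model": "meta-llama/Meta-Llama-3-70B-Instruct"},
    "provider_b": {"base_url": "https://provider-b.example/v1", "model": "llama-3-70b-instruct"},
}

prompt = "Describe the word 'umbrella' without using: rain, cover, handle."

for name, cfg in BACKENDS.items():
    client = OpenAI(base_url=cfg["base_url"], api_key="PLACEHOLDER")
    resp = client.chat.completions.create(
        model=cfg["model"],
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,   # pinned, but hidden server-side defaults can still differ
        max_tokens=64,
    )
    print(name, "->", resp.choices[0].message.content)
```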

Here is another table showing the effect of fine-tuning base models.
(image)
