
The goal of the ML.ENERGY Leaderboard is to give people a sense of how much energy LLMs consume when generating responses.

## Columns

- `gpu`: NVIDIA GPU model name. Note that NLP evaluation was run only once, on our A40 GPUs, so changing this column only changes system-level measurements like latency and energy; the NLP metrics stay the same.
- `task`: Name of the task. See Tasks below for details.
- `energy` (J): The average GPU energy consumed by the model to generate a response (see the measurement sketch after this list).
- `throughput` (token/s): The average number of tokens generated per second.
- `latency` (s): The average time it took for the model to generate a response.
- `response_length` (token): The average number of tokens in the model's response.
- `parameters`: The number of parameters the model has, in billions.
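Below is a minimal sketch of how the per-response energy, latency, and throughput numbers could be measured around a single generation call, using NVML's cumulative energy counter via `pynvml`. This is illustrative only: the leaderboard's actual benchmark script may measure these differently (for example, with the ML.ENERGY project's Zeus library), and `generate_response` is a hypothetical placeholder. The column values are averages of such per-response numbers over the benchmark prompts.

```python
import time
import pynvml

pynvml.nvmlInit()
gpu_handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # e.g., the A40 running the benchmark

def measure_one_response(generate_response, prompt):
    """Return (energy in J, latency in s, throughput in token/s) for one generation."""
    # nvmlDeviceGetTotalEnergyConsumption returns a cumulative counter in millijoules.
    energy_before_mj = pynvml.nvmlDeviceGetTotalEnergyConsumption(gpu_handle)
    start = time.monotonic()

    output_tokens = generate_response(prompt)  # hypothetical model call returning token IDs

    latency_s = time.monotonic() - start
    energy_after_mj = pynvml.nvmlDeviceGetTotalEnergyConsumption(gpu_handle)

    energy_j = (energy_after_mj - energy_before_mj) / 1000.0
    throughput = len(output_tokens) / latency_s
    return energy_j, latency_s, throughput
```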

## Tasks

For each task, every model uses the same system prompt. We still account for differences in role names across models, e.g., USER, HUMAN, ASSISTANT, GPT (a prompt-construction sketch follows the table below).

| Name | System prompt |
|------|---------------|
| chat | A chat between a human user (prompter) and an artificial intelligence (AI) assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. |
| chat-concise | A chat between a human user (prompter) and an artificial intelligence (AI) assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. The assistant's answers are very concise. |
| instruct | Below is an instruction that describes a task. Write a response that appropriately completes the request. |
| instruct-concise | Below is an instruction that describes a task. Write a response that appropriately completes the request. The response should be very concise. |
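The following is a minimal sketch of how a task's system prompt might be combined with model-specific role tags before generation. The role names and template below are illustrative assumptions; each model family uses its own conversation format.

```python
CHAT_SYSTEM_PROMPT = (
    "A chat between a human user (prompter) and an artificial intelligence "
    "(AI) assistant. The assistant gives helpful, detailed, and polite "
    "answers to the user's questions."
)

def build_prompt(system_prompt: str, user_message: str,
                 user_role: str = "USER", assistant_role: str = "ASSISTANT") -> str:
    # The same system prompt is used for every model on a given task;
    # only the role keywords (USER/HUMAN, ASSISTANT/GPT) differ per model.
    return f"{system_prompt}\n\n{user_role}: {user_message}\n{assistant_role}:"

print(build_prompt(CHAT_SYSTEM_PROMPT, "What is the capital of France?"))
```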

You can see that response length is shorter on average for the -concise variants of the tasks. This affects the number of decoding iterations the model has to run in order to finish responding, thus affecting latency and energy consumption per prompt.

## Setup

Find our benchmark script for one model here.

### Software

### Hardware

- NVIDIA A40 GPU
- NVIDIA A100 GPU

### Parameters

- Model
  - Batch size 1
  - FP16
- Sampling (decoding) (see the generation sketch after this list)
  - Greedy sampling from multinomial distribution
  - Temperature 0.7
  - Repetition penalty 1.0
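As a rough illustration, the sketch below shows this decoding setup with Hugging Face `transformers`: FP16 weights, batch size 1, and sampling from the temperature-scaled token distribution with temperature 0.7 and repetition penalty 1.0. The model name, prompt, and `max_new_tokens` value are placeholder assumptions, and this is not the leaderboard's actual benchmark script.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "lmsys/vicuna-7b-v1.3"  # placeholder; substitute any leaderboard model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to("cuda")

# Batch size 1: a single prompt per generate() call.
inputs = tokenizer("USER: Explain what a GPU does.\nASSISTANT:", return_tensors="pt").to("cuda")

outputs = model.generate(
    **inputs,
    do_sample=True,          # sample from the (temperature-scaled) token distribution
    temperature=0.7,
    repetition_penalty=1.0,  # 1.0 means no repetition penalty
    max_new_tokens=256,      # illustrative cap; not specified in the list above
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```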

### Data used for benchmarking

We randomly sampled around 3000 prompts from the cleaned ShareGPT dataset. See here for more detail on how we created the benchmark dataset.
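A minimal sketch of such sampling is shown below. The file name, JSON schema, and the choice of using the first human turn as the prompt are assumptions for illustration; refer to the linked writeup for how the actual benchmark dataset was built.

```python
import json
import random

# Hypothetical file name and schema for a cleaned ShareGPT dump.
with open("sharegpt_cleaned.json") as f:
    conversations = json.load(f)

random.seed(0)  # a fixed seed keeps the prompt set identical across models
sampled = random.sample(conversations, k=3000)

# Assumption for illustration: use the first human turn of each conversation as the prompt.
prompts = [conv["conversations"][0]["value"] for conv in sampled]
```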

## NLP evaluation metrics

- `arc`: AI2 Reasoning Challenge's challenge dataset, measuring grade-school-level question answering, 25-shot
- `hellaswag`: HellaSwag dataset, measuring grounded commonsense reasoning, 10-shot
- `truthfulqa`: TruthfulQA dataset, measuring truthfulness against questions that elicit common falsehoods, 0-shot (an example evaluation invocation is sketched after this list)
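It is an assumption that EleutherAI's lm-evaluation-harness was used to compute these metrics; the sketch below shows how the three tasks could be run with it under that assumption. The exact task names, harness version, and API may differ, and the model name is a placeholder.

```python
from lm_eval import evaluator

# Task names and few-shot counts follow the list above (assumed harness task names).
TASKS = [("arc_challenge", 25), ("hellaswag", 10), ("truthfulqa_mc", 0)]

for task, num_fewshot in TASKS:
    results = evaluator.simple_evaluate(
        model="hf-causal",
        model_args="pretrained=lmsys/vicuna-7b-v1.3",  # placeholder model
        tasks=[task],
        num_fewshot=num_fewshot,
        batch_size=1,
        device="cuda:0",
    )
    print(task, results["results"][task])
```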

## Limitations

Currently, inference is run with essentially bare PyTorch at batch size 1, which is unrealistic for a production serving scenario. Hence, the absolute latency, throughput, and energy numbers should not be used to estimate figures in real production settings, while relative comparisons between models remain informative.

## Upcoming

- Within the summer, we'll add an LLM Arena for energy consumption!
- More optimized inference runtimes, like TensorRT.
- Larger models with distributed inference, like Falcon 40B.
- More GPU models, like V100.
- More models, like RWKV.

## License

This leaderboard is a research preview intended for non-commercial use only. Model weights were taken as-is from the Hugging Face Hub when available and are subject to their respective licenses. The use of LLaMA weights is subject to their license. Please direct inquiries and reports of potential violations to Jae-Won Chung.

## Acknowledgements

We thank Chameleon Cloud for the A100 80GB GPU nodes (gpu_a100_pcie) and CloudLab for the V100 GPU nodes (r7525).