---
license: apache-2.0
datasets:
  - databricks/databricks-dolly-15k
language:
  - en
metrics:
  - rouge
base_model:
  - openai-community/gpt2-large
pipeline_tag: text-generation
---

# MiniLLM-gpt2-760M

[paper](https://arxiv.org/abs/2306.08543) | [code](https://github.com/microsoft/LMOps/tree/main/minillm)

MiniLLM-gpt2-760M is a gpt2-large (760M) model distilled from gpt2-xlarge (1.5B) on [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k).
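
For a quick sanity check, the model can be loaded with the standard `transformers` text-generation API. This is a minimal sketch, assuming the weights are hosted under the `MiniLLM/MiniLLM-gpt2-760M` repository id; adjust the id if the checkpoint lives elsewhere.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "MiniLLM/MiniLLM-gpt2-760M"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

prompt = "Explain knowledge distillation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```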

**Note:** MiniLLM requires an SFT model for initialization to perform the PPO optimization.
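
Concretely, this means distillation does not start from the raw gpt2-large weights. The sketch below illustrates the intended setup under assumed, hypothetical checkpoint paths; see the linked code for the actual training scripts.

```python
from transformers import AutoModelForCausalLM

# Hypothetical paths: the student policy is initialized from an SFT
# checkpoint of gpt2-large, and the teacher is the larger model.
student = AutoModelForCausalLM.from_pretrained("path/to/gpt2-large-sft-dolly")
teacher = AutoModelForCausalLM.from_pretrained("path/to/gpt2-xlarge-teacher")

# The MiniLLM objective then updates `student` with a PPO-style policy
# optimization against `teacher`; the real training loop is in the repo.
```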

## Evaluation

We ask GPT-4 to score the responses generated by MiniLLM. The prompts are taken from databricks-dolly-15k (test set), self-instruct, and vicuna.
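
The card also lists ROUGE under its metrics. As an illustration only (not the exact evaluation script), ROUGE-L between generated and reference responses can be computed with the `evaluate` library:

```python
import evaluate  # pip install evaluate rouge_score

rouge = evaluate.load("rouge")

# Toy example: in practice, predictions come from the model and
# references from the databricks-dolly-15k test split.
predictions = ["Distillation transfers knowledge from a large model to a small one."]
references = ["Knowledge distillation trains a small student to mimic a large teacher."]

scores = rouge.compute(predictions=predictions, references=references)
print(scores["rougeL"])
```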

## Baseline Models

## Citation

```bibtex
@inproceedings{minillm,
  title={MiniLLM: Knowledge Distillation of Large Language Models},
  author={Gu, Yuxian and Dong, Li and Wei, Furu and Huang, Minlie},
  booktitle={Proceedings of ICLR},
  year={2024}
}
```