---
license: apache-2.0
datasets:
- databricks/databricks-dolly-15k
language:
- en
metrics:
- rouge
base_model:
- openai-community/gpt2
pipeline_tag: text-generation
---

# SeqKD-gpt2-120M

[paper](https://arxiv.org/abs/2306.08543) | [code](https://github.com/microsoft/LMOps/tree/main/minillm)

**SeqKD-gpt2-120M** is a gpt2-base (120M) model distilled from [gpt2-xlarge (1.5B)](https://huggingface.co/MiniLLM/teacher-gpt2-1.5B) on [databricks-dolly-15k](https://huggingface.co/datasets/aisquared/databricks-dolly-15k) with sequence-level forward KLD (SeqKD), i.e., by fine-tuning the student on responses generated by the teacher.
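
A minimal inference sketch with 🤗 Transformers. The repo id `MiniLLM/SeqKD-gpt2-120M` is inferred from the naming of the other MiniLLM checkpoints, and the Dolly-style prompt template is an assumption, not necessarily the exact format used in training:

```python
# Minimal usage sketch; repo id and prompt template are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MiniLLM/SeqKD-gpt2-120M")
model = AutoModelForCausalLM.from_pretrained("MiniLLM/SeqKD-gpt2-120M")

# Instruction-style prompt (illustrative; the training template may differ).
prompt = "Instruction: Explain what knowledge distillation is.\nResponse:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```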

It is used as a baseline for [MiniLLM](https://huggingface.co/MiniLLM/MiniLLM-gpt2-120M).

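For orientation, the sketch below shows the general shape of sequence-level KD, assuming it follows the standard recipe (Kim & Rush, 2016): the teacher generates responses for the training prompts, and the student is fine-tuned on them with ordinary cross-entropy, a sampled approximation of the sequence-level forward KLD. All hyperparameters and the prompt are illustrative; this is not the MiniLLM training code.

```python
# Schematic SeqKD sketch, not the paper's implementation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
tokenizer.pad_token = tokenizer.eos_token
teacher = AutoModelForCausalLM.from_pretrained("MiniLLM/teacher-gpt2-1.5B").eval()
student = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
optimizer = torch.optim.AdamW(student.parameters(), lr=5e-5)

prompts = ["Instruction: What is distillation?\nResponse:"]  # stand-in for Dolly prompts

for prompt in prompts:
    # 1) Teacher writes the target response for the prompt.
    enc = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        gen = teacher.generate(
            **enc, max_new_tokens=64, do_sample=True, top_p=0.9,
            pad_token_id=tokenizer.eos_token_id,
        )
    text = tokenizer.decode(gen[0], skip_special_tokens=True)

    # 2) Student is fine-tuned on prompt + teacher response with cross-entropy.
    #    (For simplicity the loss also covers prompt tokens; real setups
    #    typically mask them out of the labels.)
    batch = tokenizer(text, return_tensors="pt")
    loss = student(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```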
## Other Baselines

+ [SFT w/o KD](https://huggingface.co/MiniLLM/SFT-gpt2-120M)
+ [KD](https://huggingface.co/MiniLLM/KD-gpt2-120M)

## Citation

```
@inproceedings{minillm,
  title={MiniLLM: Knowledge Distillation of Large Language Models},
  author={Gu, Yuxian and Dong, Li and Wei, Furu and Huang, Minlie},
  booktitle={Proceedings of ICLR},
  year={2024}
}
```