# Model Description

DistilBioBERT is a distilled version of [BioBERT](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2?text=The+goal+of+life+is+%5BMASK%5D.), trained for 100k distillation steps with a total batch size of 192 on the PubMed dataset.

# Distillation Procedure

This model uses a simple distillation technique that aligns the output distribution of the student with the output distribution of the teacher on the MLM objective. In addition, it optionally uses a second alignment loss on the last hidden states of the student and teacher (see the loss sketch at the end of this card).

# Initialisation

Following [DistilBERT](https://huggingface.co/distilbert-base-uncased?text=The+goal+of+life+is+%5BMASK%5D.), the student is initialised efficiently from the teacher: it reuses the teacher's embedding weights and takes the weights of every other teacher layer (see the initialisation sketch at the end of this card).

# Architecture

In this model, the hidden dimension and the embedding size are both 768. The vocabulary size is 28996 for the cased version, which is the one employed in our experiments. The model has 6 transformer layers, and the expansion rate of the feed-forward layer is 4 (an intermediate size of 3072). Overall, this model has around 65 million parameters.

# Citation

```bibtex
@misc{https://doi.org/10.48550/arxiv.2209.03182,
  doi       = {10.48550/ARXIV.2209.03182},
  url       = {https://arxiv.org/abs/2209.03182},
  author    = {Rohanian, Omid and Nouriborji, Mohammadmahdi and Kouchaki, Samaneh and Clifton, David A.},
  keywords  = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, 68T50},
  title     = {On the Effectiveness of Compact Biomedical Transformers},
  publisher = {arXiv},
  year      = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```
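As an illustration of the distillation procedure described above, below is a minimal sketch (not the exact training code) of a combined loss: a KL-divergence term that matches the student's softened MLM distribution to the teacher's, plus an optional cosine alignment of the last hidden states. The temperature, loss weights, and function name are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_hidden, teacher_hidden,
                      attention_mask, temperature=2.0,
                      alpha_mlm=1.0, alpha_align=1.0):
    """Sketch of an MLM distillation loss with optional hidden-state alignment.

    Hyperparameter values here are illustrative, not the ones used to train
    DistilBioBERT.
    """
    # Soften both output distributions and match them with KL divergence.
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # Optional alignment of the last hidden states over non-padding tokens,
    # here expressed as a cosine-embedding loss.
    mask = attention_mask.bool()
    s = student_hidden[mask]   # (num_tokens, hidden_size)
    t = teacher_hidden[mask]
    target = torch.ones(s.size(0), device=s.device)
    align = F.cosine_embedding_loss(s, t, target)

    return alpha_mlm * kl + alpha_align * align
```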
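The initialisation step can be sketched as follows, assuming a 12-layer BioBERT teacher and the 6-layer student configuration given in the Architecture section. The exact teacher layers selected here are an assumption; the card only states that every other layer is taken.

```python
from transformers import BertConfig, BertForMaskedLM

# Teacher: 12-layer BioBERT.
teacher = BertForMaskedLM.from_pretrained("dmis-lab/biobert-base-cased-v1.2")

# Student: 6 layers, same hidden size and cased vocabulary as the teacher.
student_config = BertConfig(
    vocab_size=28996,
    hidden_size=768,
    num_hidden_layers=6,
    num_attention_heads=12,
    intermediate_size=3072,  # feed-forward expansion rate of 4
)
student = BertForMaskedLM(student_config)

# Reuse the teacher's embedding weights.
student.bert.embeddings.load_state_dict(teacher.bert.embeddings.state_dict())

# Initialise each student layer from every other teacher layer
# (the specific indices 0, 2, 4, ... are an assumption).
for student_idx, teacher_idx in enumerate(range(0, 12, 2)):
    student.bert.encoder.layer[student_idx].load_state_dict(
        teacher.bert.encoder.layer[teacher_idx].state_dict()
    )

# The MLM head can also be copied over from the teacher.
student.cls.load_state_dict(teacher.cls.state_dict())
```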
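For inference, the model can be used with the standard `fill-mask` pipeline. The repository id below is an assumption; substitute the id of this model card if it differs.

```python
from transformers import pipeline

# Assumed repository id for this model card.
fill_mask = pipeline("fill-mask", model="nlpie/distil-biobert")

print(fill_mask("The patient was treated with [MASK] for the infection."))
```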