# Model Description

BioMobileBERT is the result of training the [MobileBERT-uncased](https://huggingface.co/google/mobilebert-uncased) model in a continual-learning fashion for 200k training steps, using a total batch size of 192, on the PubMed dataset.
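
A minimal usage sketch with the Hugging Face `transformers` library follows; the repository id `nlpie/bio-mobilebert` used here is an assumption for illustration, so substitute this card's actual model id if it differs.

```python
# Minimal sketch: loading the checkpoint for masked-language-model inference.
# The repository id below is an assumption; replace it with this card's model id.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "nlpie/bio-mobilebert"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = "Aspirin is commonly used to reduce [MASK] and inflammation."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Decode the top prediction for the masked position.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_token_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(top_token_id))
```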

# Architecture and Initialisation

|
MobileBERT uses a 128-dimensional embedding layer followed by 1D convolutions to up-project its output to the hidden dimension expected by the transformer blocks. Within each block, MobileBERT applies a linear down-projection at the block's input and a linear up-projection at its output, followed by a residual connection originating from the block's input before down-projection. These linear projections allow MobileBERT to reduce the hidden size, and hence the computational cost, of the multi-head attention and feed-forward sub-layers. Each block additionally stacks up to four feed-forward modules to enhance its representation-learning capability. Thanks to these strategically placed linear projections, the 24-layer MobileBERT used in this work has only around 25M parameters.
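
The sketch below illustrates this bottleneck structure in plain PyTorch. The layer widths, activation, and normalisation placement are simplifying assumptions for illustration, not the exact MobileBERT implementation.

```python
# Illustrative bottleneck transformer block: attention and feed-forward layers
# run at a reduced "intra-block" width, with a residual taken from the input
# *before* the down-projection. This is a sketch, not the MobileBERT code.
import torch
import torch.nn as nn

class BottleneckBlock(nn.Module):
    def __init__(self, hidden_size=512, intra_size=128, num_heads=4, num_ffn=4):
        super().__init__()
        self.down = nn.Linear(hidden_size, intra_size)   # linear down-projection
        self.attn = nn.MultiheadAttention(intra_size, num_heads, batch_first=True)
        # Several stacked feed-forward modules at the reduced width.
        self.ffns = nn.ModuleList(
            nn.Sequential(
                nn.Linear(intra_size, 4 * intra_size),
                nn.ReLU(),
                nn.Linear(4 * intra_size, intra_size),
            )
            for _ in range(num_ffn)
        )
        self.up = nn.Linear(intra_size, hidden_size)     # linear up-projection
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, x):
        h = self.down(x)                                 # narrow, cheaper representation
        h = h + self.attn(h, h, h, need_weights=False)[0]
        for ffn in self.ffns:
            h = h + ffn(h)
        # Residual connection from the block input taken before down-projection.
        return self.norm(x + self.up(h))

block = BottleneckBlock()
tokens = torch.randn(2, 16, 512)   # (batch, sequence, hidden)
print(block(tokens).shape)         # torch.Size([2, 16, 512])
```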

# Citation

If you use this model, please consider citing the following paper:

```bibtex
@misc{rohanian2022compact,
  doi       = {10.48550/ARXIV.2209.03182},
  url       = {https://arxiv.org/abs/2209.03182},
  author    = {Rohanian, Omid and Nouriborji, Mohammadmahdi and Kouchaki, Samaneh and Clifton, David A.},
  keywords  = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, 68T50},
  title     = {On the Effectiveness of Compact Biomedical Transformers},
  publisher = {arXiv},
  year      = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```