omidrohanian committed
Commit 2898f9f
1 Parent(s): ace5e33

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -1,8 +1,8 @@
  # Model Description
- BioMobileBERT is the result of training the [MobileBERT-uncased](https://huggingface.co/google/mobilebert-uncased) model in a continual learning fashion for 200k training steps using a total batch size of 192 on the PubMed dataset.
+ BioMobileBERT is the result of training the [MobileBERT-uncased](https://huggingface.co/google/mobilebert-uncased) model in a continual learning scenario for 200k training steps using a total batch size of 192 on the PubMed dataset.

  # Initialisation
- We initialise our model with the pre-trained checkpoints of the [MobileBERT-uncased](https://huggingface.co/google/mobilebert-uncased) model available on the Huggingface.
+ We initialise our model with the pre-trained checkpoints of the [MobileBERT-uncased](https://huggingface.co/google/mobilebert-uncased) model available on Huggingface.

  # Architecture
  MobileBERT uses a 128-dimensional embedding layer followed by 1D convolutions to up-project its output to the desired hidden dimension expected by the transformer blocks. For each of these blocks, MobileBERT uses linear down-projection at the beginning of the transformer block and up-projection at its end, followed by a residual connection originating from the input of the block before down-projection. Because of these linear projections, MobileBERT can reduce the hidden size and hence the computational cost of multi-head attention and feed-forward blocks. This model additionally incorporates up to four feed-forward blocks in order to enhance its representation learning capabilities. Thanks to the strategically placed linear projections, a 24-layer MobileBERT (which is used in this work) has around 25M parameters.
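
As context for the Initialisation section in the diff above, here is a minimal sketch of loading the pre-trained checkpoint that BioMobileBERT starts from, using the `transformers` library. The checkpoint id is the one linked in the README; the example sentence and the masked-LM sanity check are illustrative additions, and the actual PubMed continual pre-training loop (200k steps, total batch size 192) is not shown here.

```python
# A minimal sketch (not the authors' training code): load the
# google/mobilebert-uncased checkpoint that BioMobileBERT is initialised
# from, and run a quick masked-LM sanity check. The example sentence is
# illustrative; continual pre-training on PubMed would resume from here.
import torch
from transformers import AutoTokenizer, MobileBertForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
model = MobileBertForMaskedLM.from_pretrained("google/mobilebert-uncased")

inputs = tokenizer("Aspirin inhibits platelet [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Decode the highest-scoring token at the [MASK] position.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
print(tokenizer.decode([logits[0, mask_pos].argmax().item()]))
```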
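
To make the bottleneck design in the Architecture section concrete, below is a simplified PyTorch sketch of one such block. It is not the actual MobileBERT implementation: the specific widths (inter-block hidden size 512, 128-dimensional bottleneck, 4 attention heads) and the feed-forward expansion factor are assumptions chosen for illustration, while the down-projection/up-projection structure, the four stacked feed-forward sub-blocks, and the residual taken from the input before down-projection follow the description above.

```python
# A simplified sketch of one MobileBERT-style bottleneck block, not the
# actual implementation. The widths (hidden=512, bottleneck=128, 4 heads)
# and the FFN expansion factor are illustrative assumptions; the structure
# (down-project -> attention -> stacked FFNs -> up-project, with a residual
# from the input taken before down-projection) follows the README text.
import torch
import torch.nn as nn

class BottleneckBlock(nn.Module):
    def __init__(self, hidden=512, bottleneck=128, heads=4, ffn_stacks=4):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)  # down-projection at block start
        self.attn = nn.MultiheadAttention(bottleneck, heads, batch_first=True)
        self.ffns = nn.ModuleList(
            [nn.Sequential(
                nn.Linear(bottleneck, 4 * bottleneck),
                nn.GELU(),
                nn.Linear(4 * bottleneck, bottleneck),
            ) for _ in range(ffn_stacks)]          # up to four stacked FFN sub-blocks
        )
        self.up = nn.Linear(bottleneck, hidden)    # up-projection at block end
        self.norm = nn.LayerNorm(hidden)

    def forward(self, x):
        h = self.down(x)                  # attention and FFNs run at the cheap
        h = h + self.attn(h, h, h)[0]     # 128-dimensional bottleneck width
        for ffn in self.ffns:
            h = h + ffn(h)
        return self.norm(x + self.up(h))  # residual from pre-down-projection input

# Example: a batch of 2 sequences of length 16 at the inter-block width.
x = torch.randn(2, 16, 512)
print(BottleneckBlock()(x).shape)  # torch.Size([2, 16, 512])
```

Running attention and the feed-forward stack at the narrow bottleneck width rather than the full hidden width is where the compute savings come from, and stacking several feed-forward sub-blocks compensates for the reduced width, matching the paragraph's point about representation learning capability.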