jinaai/jina-bert-flash-implementation
Tags: Transformers · bert · custom_code · Inference Endpoints · Region: EU
Revision: refs/pr/17
6 contributors · History: 107 commits
Latest commit: michael-guenther, "fix: glu for non-flash-attn" (c768124, 6 months ago)
File                    Size      Last commit message                                                            Updated
README.md               1.89 kB   feat: added README                                                             6 months ago
bert_padding.py         9.78 kB   reference the flash attention GitHub                                           6 months ago
block.py                17.4 kB   reference the flash attention GitHub                                           6 months ago
configuration_bert.py   5.77 kB   Porting v2 models to flash attention (#15)                                     6 months ago
convert_v2_weights.py   6.1 kB    feat: for converting v2, added lines to save model weights and print config   6 months ago
embedding.py            2.26 kB   clean up embeddings.py (#7)                                                    6 months ago
mha.py                  35.3 kB   reference the flash attention GitHub                                           6 months ago
mlp.py                  8.05 kB   fix: glu for non-flash-attn                                                    6 months ago
modeling_bert.py        33.4 kB   fix: glu for non-flash-attn                                                    6 months ago
modeling_for_glue.py    10.7 kB   feat: assert return_dict                                                       6 months ago
modeling_lora.py        12.3 kB   fix: use staticmethod instead of classmethod                                   6 months ago