Issue while fine-tuning embedding model because of use_reentrant = True
2 comments · #10 opened 5 months ago by DamianS89
Optimize inference speed
5 comments · #9 opened 5 months ago by CoolWP
OOM occurs when converting the model to TorchScript; I have a question about this issue.
1 comment · #8 opened 5 months ago by LeeJungHoon
Add benchmark to MTEB
5 comments · #7 opened 5 months ago by sam-gab
Base model
16 comments · #6 opened 5 months ago by ambivalent02
It is now working in Colab
3 comments · #5 opened 5 months ago by LeeJungHoon
How does Chinese dense retrieval performance compare to BGE V1.5?
3 comments · #3 opened 5 months ago by TianyuLLM
OOMs on an 8 GB GPU — is this normal?
3 comments · #2 opened 5 months ago by tanimazsin130