request error: error sending request for url (https://huggingface.co/BAAI/bge-reranker-v2-m3/resolve/main/config.json):
#24 opened 1 day ago by qinrong
SageMaker deployment to GPU
#23 opened 1 day ago by chaitanya87
Bad performance of bge-reranker-v2-gemma compared with bge-reranker-v2-m3 (2 replies)
#22 opened 3 days ago by shaunxu
How to deploy BAAI/bge-reranker-v2-m3 on TEI?
#21 opened 3 days ago by qinrong
Fine-tuning with evaluator
#20 opened 11 days ago by praveensonu
GPU recommendation for QPS above 100
#18 opened about 1 month ago by duzhihua
Missing fine-tuning instructions for bge-reranker-v2-m3? (1 reply)
#17 opened about 1 month ago by jackkwok
Multi-GPU at FP16? Examples. Large memory allocations. (1 reply)
#16 opened about 2 months ago by flash9001
How to make it run on GPU? (1 reply)
#15 opened about 2 months ago by HarshalPa
Add Sentence Transformers config
#14 opened 2 months ago by peakji
ONNX version
#13 opened 3 months ago by Malithius
Any way to 'drop' the model to save GPU RAM? (1 reply)
#12 opened 3 months ago by rag-perplexity
Cutoff score to consider for the LLM call (4 replies)
#11 opened 3 months ago by karthikfds
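A common approach to the cutoff question above is to pass the reranker's raw logit through a sigmoid to obtain a 0-1 relevance score and apply a threshold before forwarding passages to the LLM. A minimal plain-Python sketch; the 0.5 threshold and the example logits are illustrative assumptions, not values taken from this discussion:

```python
import math

def sigmoid(x: float) -> float:
    """Map a raw reranker logit to a 0-1 relevance score."""
    return 1.0 / (1.0 + math.exp(-x))

def filter_passages(scored, threshold=0.5):
    """Keep passages whose normalized score clears the cutoff.

    `scored` is a list of (passage, raw_logit) pairs; the default
    threshold is an illustrative assumption, not a recommendation.
    """
    return [p for p, logit in scored if sigmoid(logit) >= threshold]

# Hypothetical raw logits such as a cross-encoder reranker might emit
candidates = [("passage A", 3.2), ("passage B", -1.5), ("passage C", 0.1)]
kept = filter_passages(candidates)
```

The right threshold is corpus- and query-dependent, which is presumably why the thread drew several replies; calibrating it on a small labeled sample is safer than reusing someone else's value.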
bf16 vs fp16 (1 reply)
#10 opened 3 months ago by Totole
Document length for v2-m3? (3 replies)
#9 opened 3 months ago by rag-perplexity
What is the maximum number of tokens v2-m3 supports? (1 reply)
#8 opened 3 months ago by devillaws
Are there any ways to speed it up? (2 replies)
#7 opened 3 months ago by hanswang1973
Cross-lingual reranking (2 replies)
#6 opened 3 months ago by victorkeke
Can it be used within the LangChain framework? (2 replies)
#4 opened 3 months ago by Nicole828
Missing pytorch_model.bin file? (1 reply)
#3 opened 3 months ago by baobo5625
Need ONNX model
#1 opened 4 months ago by LowPower