prithivida committed
Commit 714dfcf
1 parent: 9a079cf

Update README.md

Files changed (1)
  1. README.md +8 -9

README.md CHANGED
@@ -44,10 +44,10 @@ pipeline_tag: sentence-similarity
   - [With Sentence Transformers:](#with-sentence-transformers)
   - [With Huggingface Transformers:](#with-huggingface-transformers)
   - [FAQs](#faqs)
-  - [How can we run these models with out heavy torch dependency?](#how-can-we-run-these-models-with-out-heavy-torch-dependency)
-  - [How do I optimise vector index cost?](#how-do-i-optimise-vector-index-cost)
-  - [How do I offer hybrid search to address Vocabulary Mismatch Problem?](#how-do-i-offer)
-  - [Why not run cMTEB?](#why-not-run-cmteb)
+  - [How can I reduce overall inference cost ?](#how-can-i-reduce-overall-inference-cost)
+  - [How do I reduce vector storage cost?](#how-do-i-reduce-vector-storage-cost)
+  - [How do I offer hybrid search to improve accuracy?](#how-do-i-offer-hybrid-search-to-improve-accuracy)
+  - [Why not run MTEB?](#why-not-run-mteb)
   - [Roadmap](#roadmap)
   - [Notes on Reproducing:](#notes-on-reproducing)
   - [Reference:](#reference)
@@ -144,14 +144,13 @@ for query, query_embedding in zip(queries, query_embeddings):
 
 # FAQs:
 
-#### How can we run these models with out heavy torch dependency?
+#### How can I reduce overall inference cost ?
+- You can host these models without heavy torch dependency using the ONNX flavours of these models via [FlashRetrieve](https://github.com/PrithivirajDamodaran/FlashRetrieve) library.
 
-- You can use ONNX flavours of these models via [FlashRetrieve](https://github.com/PrithivirajDamodaran/FlashRetrieve) library.
-
-#### How do I optimise vector index cost ?
+#### How do I reduce vector storage cost ?
 [Use Binary and Scalar Quantisation](https://huggingface.co/blog/embedding-quantization)
 
-#### How do I offer hybrid search to address Vocabulary Mismatch Problem?
+#### How do I offer hybrid search to improve accuracy ?
 MIRACL paper shows simply combining BM25 is a good starting point for a Hybrid option:
 The below numbers are with mDPR model, but miniMiracle_zh_v1 should give a even better hybrid performance.
 
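
The binary quantisation the updated FAQ links to can be sketched in plain NumPy. This is a minimal illustration, not the implementation from the linked blog post: it thresholds each embedding dimension at zero and packs the resulting bits, so a float32 vector shrinks to 1/32 of its storage, with Hamming distance standing in for similarity. All function names here are hypothetical.

```python
import numpy as np

def binary_quantize(embeddings: np.ndarray) -> np.ndarray:
    """Threshold each float dimension at 0 and pack 8 dims per byte.

    A 384-dim float32 vector (1536 bytes) becomes a 48-byte code.
    """
    bits = (embeddings > 0).astype(np.uint8)
    return np.packbits(bits, axis=-1)

def hamming_distances(query_code: np.ndarray, doc_codes: np.ndarray) -> np.ndarray:
    """Count differing bits per document; lower distance ~ more similar."""
    xor = np.bitwise_xor(query_code, doc_codes)
    return np.unpackbits(xor, axis=-1).sum(axis=-1)

# Toy usage with random stand-in embeddings (illustrative dimensions only).
rng = np.random.default_rng(0)
docs = rng.standard_normal((5, 384)).astype(np.float32)
codes = binary_quantize(docs)            # shape (5, 48), uint8
dists = hamming_distances(codes[0], codes)
```

A vector always has Hamming distance 0 to itself, so `dists[0]` is 0; ranking candidates by ascending distance gives a cheap first-pass retrieval that can be rescored with the original float vectors.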
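
The hybrid option the FAQ describes — combining BM25 with a dense retriever — is commonly done by linear interpolation of per-retriever scores after normalisation. The sketch below assumes you already have score dictionaries from each retriever; the function names, weights, and doc IDs are illustrative, not from the MIRACL paper or this model's tooling.

```python
def minmax(scores: dict[str, float]) -> dict[str, float]:
    """Rescale scores to [0, 1] so sparse and dense scales are comparable."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0  # avoid division by zero when all scores equal
    return {doc: (s - lo) / span for doc, s in scores.items()}

def hybrid_rank(bm25: dict[str, float], dense: dict[str, float],
                alpha: float = 0.5) -> list[str]:
    """Interpolate normalised BM25 and dense scores with weight alpha.

    Docs missing from one retriever simply contribute 0 on that side.
    """
    b, d = minmax(bm25), minmax(dense)
    fused = {doc: alpha * b.get(doc, 0.0) + (1 - alpha) * d.get(doc, 0.0)
             for doc in set(b) | set(d)}
    return sorted(fused, key=fused.get, reverse=True)

# Toy usage: doc2 scores well in both retrievers, so it fuses to the top.
bm25_scores = {"doc1": 12.0, "doc2": 7.5, "doc3": 0.3}
dense_scores = {"doc2": 0.82, "doc3": 0.80, "doc4": 0.10}
ranking = hybrid_rank(bm25_scores, dense_scores)
```

Tuning `alpha` on a held-out set is the usual next step; reciprocal rank fusion is a common alternative when raw scores are hard to normalise.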