prithivida committed
Commit 2f80623
1 parent: c1fd189

Update README.md

Files changed (1): README.md (+9 −2)
README.md CHANGED
@@ -45,7 +45,7 @@ pipeline_tag: sentence-similarity
 
  - [License and Terms:](#license-and-terms)
  - [Detailed comparison & Our Contribution:](#detailed-comparison--our-contribution)
- - [ONNX & GGUF Variants:](#detailed-comparison--our-contribution)
  - [Usage:](#usage)
  - [With Sentence Transformers:](#with-sentence-transformers)
  - [With Huggingface Transformers:](#with-huggingface-transformers)
@@ -91,6 +91,13 @@ Full set of evaluation numbers for our model
 
  <br/>
 
  # Usage:
 
  #### With Sentence Transformers:
@@ -167,7 +174,7 @@ The below numbers are with the mDPR model, but miniMiracle_zh_v1 should give an even
  *Note: The MIRACL paper shows a different (higher) value for BM25 Chinese, so we take that value from the BGE-M3 paper; all the rest are from the MIRACL paper.*
 
  #### cMTEB numbers:
- CMTEB is a general purpose embedding evaluation bechmark covering wide range of tasks, but like BGE-M3, miniMiracle models are predominantly tuned for retireval tasks aimed at search & IR based usecases.
  We ran the retrieval slice of the cMTEB and added the scores here.
 
  We compared the performance of a few top general-purpose embedding models on the C-MTEB benchmark; please refer to the C-MTEB leaderboard.
 
 
  - [License and Terms:](#license-and-terms)
  - [Detailed comparison & Our Contribution:](#detailed-comparison--our-contribution)
+ - [ONNX & GGUF Status:](#onnx-gguf-status)
  - [Usage:](#usage)
  - [With Sentence Transformers:](#with-sentence-transformers)
  - [With Huggingface Transformers:](#with-huggingface-transformers)
 
 
  <br/>
 
+ # ONNX & GGUF Status:
+
+ | Variant | Status |
+ |:---:|:---:|
+ | FP16 ONNX | ✅ |
+ | GGUF | WIP |
+
  # Usage:
 
  #### With Sentence Transformers:
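The "With Sentence Transformers" usage section the TOC points at is not shown in this diff. As a minimal sketch of what sentence-similarity scoring looks like downstream of any embedding model like this one, the score is the cosine of the angle between two embedding vectors; the model id and encode call in the comment are assumptions inferred from the repo name, not verified against this README:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# In the actual Usage section the vectors would come from the model,
# e.g. (hypothetical, not taken from this diff):
#   from sentence_transformers import SentenceTransformer
#   model = SentenceTransformer("prithivida/miniMiracle_zh_v1")
#   a, b = model.encode(["如何学习编程", "怎样学写代码"])
a = [0.1, 0.3, 0.5]
b = [0.1, 0.3, 0.5]
print(cosine_similarity(a, b))  # ≈ 1.0 for identical vectors
```

Sentence Transformers ships its own `util.cos_sim` helper; the hand-rolled version above is only to make the scoring explicit.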
 
  *Note: The MIRACL paper shows a different (higher) value for BM25 Chinese, so we take that value from the BGE-M3 paper; all the rest are from the MIRACL paper.*
 
  #### cMTEB numbers:
+ CMTEB is a general-purpose embedding evaluation benchmark covering a wide range of tasks, but like BGE-M3, the miniMiracle models are predominantly tuned for retrieval tasks aimed at search & IR use cases.
  We ran the retrieval slice of the cMTEB and added the scores here.
 
  We compared the performance of a few top general-purpose embedding models on the C-MTEB benchmark; please refer to the C-MTEB leaderboard.
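The MIRACL and cMTEB retrieval comparisons discussed in this hunk are rank-quality scores; retrieval slices of these benchmarks are commonly reported as nDCG@10. As a hedged sketch of that metric (the linear-gain variant, not the benchmarks' own evaluation code):

```python
import math

def dcg_at_k(relevances, k):
    # Discounted cumulative gain: each result's relevance, discounted by
    # log2 of its 1-based rank + 1, summed over the top-k results.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    # Normalize by the DCG of the ideal (descending-relevance) ordering,
    # so a perfect ranking scores 1.0.
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# relevances are listed in the order the retriever ranked the documents
print(ndcg_at_k([3, 2, 1, 0], k=10))  # perfectly ordered ranking → 1.0
```

Some evaluators use an exponential gain (2^rel − 1) instead of the linear gain above; for the binary relevance labels typical of MIRACL-style retrieval the two coincide.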