prithivida committed on
Commit
74dfcda
1 Parent(s): ea2ea72

Update README.md

Files changed (1): README.md +135 -3
README.md CHANGED
---
license: cc-by-nc-nd-4.0
language:
- zh
datasets:
- MIRACL
tags:
- miniMiracle
- passage-retrieval
- knowledge-distillation
- middle-training
pretty_name: >-
  miniMiracle is a family of high-quality, lightweight, easy-to-deploy
  multilingual embedders / retrievers, primarily focused on Indo-Aryan and
  Indo-Dravidian languages.
library_name: transformers
pipeline_tag: sentence-similarity
---

<center>
<img src="./logo.png" width=150/>
<img src="./zh_intro.png" width=120%/>
</center>

<center>
<img src="./zh_metrics_1.png" width=90%/>
<b><p>Table 1: Chinese retrieval performance on the MIRACL dev set (measured by nDCG@10)</p></b>
</center>

<br/>

<center>
<h1> Table Of Contents </h1>
</center>

- [License and Terms:](#license-and-terms)
- [Detailed comparison & Our Contribution:](#detailed-comparison--our-contribution)
- [ONNX & GGUF Variants:](#detailed-comparison--our-contribution)
- [Usage:](#usage)
  - [With Sentence Transformers:](#with-sentence-transformers)
  - [With Huggingface Transformers:](#with-huggingface-transformers)
  - [How do I optimise vector index cost?](#how-do-i-optimise-vector-index-cost)
  - [How do I offer hybrid search to address Vocabulary Mismatch Problem?](#how-do-i-offer)
- [Notes on Reproducing:](#notes-on-reproducing)
- [Reference:](#reference)
- [Note on model bias](#note-on-model-bias)

## License and Terms:

<center>
<img src="./terms.png" width=200%/>
</center>

## Detailed comparison & Our Contribution:

English famously has the **all-minilm** series of models, which are great for quick experimentation and for certain production workloads. The idea is to offer the same for other popular languages, starting with Indo-Aryan and Indo-Dravidian languages. Our contribution is bringing high-quality models that are easy to serve and whose embeddings are cheap to store, without ANY pretraining or expensive finetuning. For instance, the **all-minilm** models are finetuned on 1 billion pairs; we offer a very lean model, but with a huge vocabulary of around 250K tokens.
We will add more details here.

<center>
<img src="./zh_metrics_2.png" width=120%/>
<b><p>Table 2: Detailed Chinese retrieval performance on the MIRACL dev set (measured by nDCG@10)</p></b>
</center>

Full set of evaluation numbers for our model:

```python
{'NDCG@1': 0.43511, 'NDCG@3': 0.42434, 'NDCG@5': 0.45298, 'NDCG@10': 0.50914, 'NDCG@100': 0.5815, 'NDCG@1000': 0.59392}
{'MAP@1': 0.21342, 'MAP@3': 0.32967, 'MAP@5': 0.36798, 'MAP@10': 0.39908, 'MAP@100': 0.42592, 'MAP@1000': 0.42686}
{'Recall@10': 0.63258, 'Recall@50': 0.85, 'Recall@100': 0.91595, 'Recall@200': 0.942, 'Recall@500': 0.96924, 'Recall@1000': 0.9857}
{'P@1': 0.43511, 'P@3': 0.29177, 'P@5': 0.22545, 'P@10': 0.14758, 'P@100': 0.02252, 'P@1000': 0.00249}
{'MRR@10': 0.55448, 'MRR@100': 0.56288, 'MRR@1000': 0.56294}
```

<br/>

## Usage:

#### With Sentence Transformers:

The snippet below is a minimal sketch; the model id `prithivida/miniMiracle_zh_v1` is assumed from this repo.

```python
from sentence_transformers import SentenceTransformer, util

# Assumed model id; adjust if the repo name differs.
model = SentenceTransformer("prithivida/miniMiracle_zh_v1")

query_embedding = model.encode("如何制作蛋糕?")
passage_embeddings = model.encode(["制作蛋糕的步骤如下。", "今天天气很好。"])

# Score with inner product (see Notes on reproducing below).
print(util.dot_score(query_embedding, passage_embeddings))
```

#### With Huggingface Transformers:
- T.B.A
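In the meantime, here is a minimal sketch, assuming the model id `prithivida/miniMiracle_zh_v1` and using CLS pooling with inner product, per the reproducing notes below:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "prithivida/miniMiracle_zh_v1"  # assumed model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["如何制作蛋糕?", "制作蛋糕的步骤如下。", "今天天气很好。"]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# CLS pooling: take the hidden state of the first ([CLS]) token.
embeddings = outputs.last_hidden_state[:, 0]

# Pairwise inner-product scores.
print(embeddings @ embeddings.T)
```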

#### How do I optimise vector index cost?
[Use Binary and Scalar Quantisation](https://huggingface.co/blog/embedding-quantization)
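
For instance, a sketch of binary quantisation with sentence-transformers (assumes a recent version that ships `quantize_embeddings`, plus the assumed model id from above):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.quantization import quantize_embeddings

model = SentenceTransformer("prithivida/miniMiracle_zh_v1")  # assumed model id
embeddings = model.encode(["样例文档一", "样例文档二"])

# float32 -> 1 bit per dimension: roughly a 32x smaller index.
binary_embeddings = quantize_embeddings(embeddings, precision="binary")
print(embeddings.nbytes, binary_embeddings.nbytes)
```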

<h4>How do I offer hybrid search to address Vocabulary Mismatch Problem?</h4>

The MIRACL paper shows that simply combining with BM25 is a good starting point for a hybrid option.
The numbers below are with the mDPR model, but miniMiracle_zh_v1 should give even better hybrid performance.

| Language | ISO | nDCG@10 BM25 | nDCG@10 mDPR | nDCG@10 Hybrid |
|-----------|-----|--------------|--------------|----------------|
| **Chinese** | **zh** | **0.175** | **0.512** | **0.526** |

*Note: the MIRACL paper shows a different (higher) value for BM25 on Chinese, so we take that value from the BGE-M3 paper; the rest are from the MIRACL paper.*
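
A sketch of one simple fusion scheme, a weighted sum of min-max-normalised BM25 and dense scores; the `rank_bm25` package, the `jieba` tokeniser, the weight `alpha`, and the model id are illustrative assumptions, not part of this repo:

```python
import jieba
import numpy as np
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer

corpus = ["制作蛋糕的步骤如下。", "今天天气很好。", "蛋糕需要面粉和鸡蛋。"]
query = "如何制作蛋糕?"

# Sparse scores: BM25 over jieba-tokenised text.
bm25 = BM25Okapi([list(jieba.cut(doc)) for doc in corpus])
sparse = np.array(bm25.get_scores(list(jieba.cut(query))))

# Dense scores: inner product between query and document embeddings.
model = SentenceTransformer("prithivida/miniMiracle_zh_v1")  # assumed model id
dense = model.encode(corpus) @ model.encode(query)

def minmax(x):
    return (x - x.min()) / (x.max() - x.min() + 1e-9)

alpha = 0.7  # dense weight; a tunable assumption
hybrid = alpha * minmax(dense) + (1 - alpha) * minmax(sparse)
print(hybrid.argsort()[::-1])  # document indices, best first
```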

## Notes on reproducing:

We welcome anyone to reproduce our results. Here are some tips and observations:

- Use CLS pooling and inner product.
- There *may be* minor differences in the numbers when reproducing; for instance, BGE-M3 reports an nDCG@10 of 59.3 for MIRACL Hindi while we observed only 58.9. A scoring sketch follows this list.
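
To score a run, any TREC-style evaluator works; a toy sketch with `pytrec_eval` (an assumption about tooling, with placeholder qrels and scores):

```python
import pytrec_eval

qrels = {"q1": {"doc1": 1, "doc2": 0}}    # query -> doc -> relevance label
run = {"q1": {"doc1": 0.9, "doc2": 0.4}}  # query -> doc -> model score

evaluator = pytrec_eval.RelevanceEvaluator(qrels, {"ndcg_cut.10"})
print(evaluator.evaluate(run))  # {'q1': {'ndcg_cut_10': ...}}
```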
Here are our numbers for the full Hindi run with BGE-M3:

```python
{'NDCG@1': 0.49714, 'NDCG@3': 0.5115, 'NDCG@5': 0.53908, 'NDCG@10': 0.58936, 'NDCG@100': 0.6457, 'NDCG@1000': 0.65336}
{'MAP@1': 0.28845, 'MAP@3': 0.42424, 'MAP@5': 0.46455, 'MAP@10': 0.49955, 'MAP@100': 0.51886, 'MAP@1000': 0.51933}
{'Recall@10': 0.73032, 'Recall@50': 0.8987, 'Recall@100': 0.93974, 'Recall@200': 0.95763, 'Recall@500': 0.97813, 'Recall@1000': 0.9902}
{'P@1': 0.49714, 'P@3': 0.33048, 'P@5': 0.24629, 'P@10': 0.15543, 'P@100': 0.0202, 'P@1000': 0.00212}
{'MRR@10': 0.60893, 'MRR@100': 0.615, 'MRR@1000': 0.6151}
```

Fair warning: BGE-M3 is expensive ($) to evaluate, which is probably why it is not part of any of the MTEB benchmarks.

## Reference:
- [All Cohere numbers are copied from here](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12)

## Note on model bias:
- Like any model, this model might carry inherent biases from the base models and the datasets it was pretrained and finetuned on. Please use responsibly.