kr-manish committed on
Commit ed4dc2e
1 Parent(s): 87534ae

Add new SentenceTransformer model.
1_Pooling/config.json ADDED
```json
{
    "word_embedding_dimension": 768,
    "pooling_mode_cls_token": true,
    "pooling_mode_mean_tokens": false,
    "pooling_mode_max_tokens": false,
    "pooling_mode_mean_sqrt_len_tokens": false,
    "pooling_mode_weightedmean_tokens": false,
    "pooling_mode_lasttoken": false,
    "include_prompt": true
}
```
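This config enables CLS-token pooling only (all other pooling modes are off). A minimal NumPy sketch of what that pooling step computes, as a stand-in for the real `sentence_transformers.models.Pooling` module (the random token embeddings here are illustrative, not real model output):

```python
import numpy as np

# Token embeddings for one sentence: (num_tokens, word_embedding_dimension).
# In the real model these come from BertModel; here we use random data.
rng = np.random.default_rng(0)
token_embeddings = rng.normal(size=(12, 768))

# pooling_mode_cls_token=True: the sentence embedding is simply the
# embedding of the first ([CLS]) token; no averaging over tokens.
sentence_embedding = token_embeddings[0]

# The model then L2-normalizes (its final Normalize() module), so cosine
# similarity between embeddings reduces to a plain dot product.
sentence_embedding = sentence_embedding / np.linalg.norm(sentence_embedding)
print(sentence_embedding.shape)  # (768,)
```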
README.md ADDED
---
base_model: BAAI/bge-base-en-v1.5
datasets: []
language: []
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:160
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Priya Softweb emphasizes the importance of maintaining a clean
    and organized workspace. The company's HR policies clearly state that employees
    are responsible for keeping their assigned workspaces clean, orderly, and free
    from unnecessary items. Spitting tobacco, gum, or other substances in the washrooms
    is strictly prohibited. The company believes that a clean and organized work environment
    contributes to a more efficient and professional work experience for everyone.
    This emphasis on cleanliness reflects the company's commitment to creating a pleasant
    and hygienic workspace for its employees.
  sentences:
  - What is Priya Softweb's policy on the use of mobile phones during work hours?
  - What steps does Priya Softweb take to ensure that the workspace is clean and organized?
  - What are the repercussions for employees who violate the Non-Disclosure Agreement
    at Priya Softweb?
- source_sentence: Priya Softweb provides allocated basement parking facilities for
    employees to park their two-wheelers and four-wheelers. However, parking on the
    ground floor, around the lawn or main premises, is strictly prohibited as this
    space is reserved for Directors. Employees should use the parking under wings
    5 and 6, while other parking spaces are allocated to different wings. Parking
    two-wheelers in the car parking zone is not permitted, even if space is available.
    Two-wheelers should be parked in the designated basement space on the main stand,
    not on the side stand. Employees are encouraged to park in common spaces on a
    first-come, first-served basis. The company clarifies that it is not responsible
    for providing parking and that employees park their vehicles at their own risk.
    This comprehensive parking policy ensures organized parking arrangements and clarifies
    the company's liability regarding vehicle safety.
  sentences:
  - What is the application process for planned leaves at Priya Softweb?
  - What are the parking arrangements at Priya Softweb?
  - What is the process for reporting a security breach at Priya Softweb?
- source_sentence: The Diwali bonus at Priya Softweb is a discretionary benefit linked
    to the company's business performance. Distributed during the festive season of
    Diwali, it serves as a gesture of appreciation for employees' contributions throughout
    the year. However, it's important to note that employees currently under the notice
    period are not eligible for this bonus. This distinction highlights that the bonus
    is intended to reward ongoing commitment and contribution to the company's success.
  sentences:
  - What steps does Priya Softweb take to promote responsible use of company resources?
  - How does Priya Softweb demonstrate its commitment to Diversity, Equity, and Inclusion
    (DEI)?
  - What is the significance of the company's Diwali bonus at Priya Softweb?
- source_sentence: Priya Softweb's HR Manual paints a picture of a company that values
    its employees while upholding a strong sense of professionalism and ethical conduct.
    The company emphasizes a structured and transparent approach to its HR processes,
    ensuring clarity and fairness in areas like recruitment, performance appraisals,
    compensation, leave management, work-from-home arrangements, and incident reporting.
    The manual highlights the importance of compliance with company policies, promotes
    diversity and inclusion, and encourages a culture of continuous learning and development.
    Overall, the message conveyed is one of creating a supportive, respectful, and
    growth-oriented work environment for all employees.
  sentences:
  - What is the overall message conveyed by Priya Softweb's HR Manual?
  - What is the process for reporting employee misconduct at Priya Softweb?
  - What is Priya Softweb's policy on salary disbursement and payslips?
- source_sentence: No, work-from-home arrangements do not affect an employee's employment
    terms, compensation, and benefits at Priya Softweb. This clarifies that work-from-home
    is a flexible work arrangement and does not impact the employee's overall employment
    status or benefits.
  sentences:
  - Do work-from-home arrangements affect compensation and benefits at Priya Softweb?
  - What is the objective of the Work From Home Policy at Priya Softweb?
  - What is the procedure for a new employee joining Priya Softweb?
model-index:
- name: SentenceTransformer based on BAAI/bge-base-en-v1.5
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 768
      type: dim_768
    metrics:
    - type: cosine_accuracy@1
      value: 0.8333333333333334
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 1.0
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 1.0
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 1.0
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.8333333333333334
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.33333333333333326
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.20000000000000004
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.10000000000000002
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.8333333333333334
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 1.0
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 1.0
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 1.0
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.923940541865081
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.898148148148148
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.898148148148148
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 512
      type: dim_512
    metrics:
    - type: cosine_accuracy@1
      value: 0.8333333333333334
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 1.0
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 1.0
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 1.0
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.8333333333333334
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.33333333333333326
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.20000000000000004
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.10000000000000002
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.8333333333333334
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 1.0
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 1.0
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 1.0
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.923940541865081
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.898148148148148
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.898148148148148
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 256
      type: dim_256
    metrics:
    - type: cosine_accuracy@1
      value: 0.8333333333333334
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 1.0
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 1.0
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 1.0
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.8333333333333334
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.33333333333333326
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.20000000000000004
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.10000000000000002
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.8333333333333334
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 1.0
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 1.0
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 1.0
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.9312144170634953
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.9074074074074076
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.9074074074074073
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 128
      type: dim_128
    metrics:
    - type: cosine_accuracy@1
      value: 0.7777777777777778
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 1.0
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 1.0
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 1.0
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.7777777777777778
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.33333333333333326
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.20000000000000004
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.10000000000000002
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.7777777777777778
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 1.0
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 1.0
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 1.0
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.9107105144841319
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.8796296296296297
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.8796296296296295
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 64
      type: dim_64
    metrics:
    - type: cosine_accuracy@1
      value: 0.6111111111111112
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.9444444444444444
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.9444444444444444
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 1.0
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.6111111111111112
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.31481481481481477
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.1888888888888889
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.10000000000000002
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.6111111111111112
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.9444444444444444
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.9444444444444444
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 1.0
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.826662566744103
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.7685185185185186
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.7685185185185185
      name: Cosine Map@100
---

# SentenceTransformer based on BAAI/bge-base-en-v1.5

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference:
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("kr-manish/fine-tune-embedding-bge-base-HrPolicy")
# Run inference
sentences = [
    "No, work-from-home arrangements do not affect an employee's employment terms, compensation, and benefits at Priya Softweb. This clarifies that work-from-home is a flexible work arrangement and does not impact the employee's overall employment status or benefits.",
    'Do work-from-home arrangements affect compensation and benefits at Priya Softweb?',
    'What is the objective of the Work From Home Policy at Priya Softweb?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.8333     |
| cosine_accuracy@3   | 1.0        |
| cosine_accuracy@5   | 1.0        |
| cosine_accuracy@10  | 1.0        |
| cosine_precision@1  | 0.8333     |
| cosine_precision@3  | 0.3333     |
| cosine_precision@5  | 0.2        |
| cosine_precision@10 | 0.1        |
| cosine_recall@1     | 0.8333     |
| cosine_recall@3     | 1.0        |
| cosine_recall@5     | 1.0        |
| cosine_recall@10    | 1.0        |
| cosine_ndcg@10      | 0.9239     |
| cosine_mrr@10       | 0.8981     |
| **cosine_map@100**  | **0.8981** |

#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.8333     |
| cosine_accuracy@3   | 1.0        |
| cosine_accuracy@5   | 1.0        |
| cosine_accuracy@10  | 1.0        |
| cosine_precision@1  | 0.8333     |
| cosine_precision@3  | 0.3333     |
| cosine_precision@5  | 0.2        |
| cosine_precision@10 | 0.1        |
| cosine_recall@1     | 0.8333     |
| cosine_recall@3     | 1.0        |
| cosine_recall@5     | 1.0        |
| cosine_recall@10    | 1.0        |
| cosine_ndcg@10      | 0.9239     |
| cosine_mrr@10       | 0.8981     |
| **cosine_map@100**  | **0.8981** |

#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.8333     |
| cosine_accuracy@3   | 1.0        |
| cosine_accuracy@5   | 1.0        |
| cosine_accuracy@10  | 1.0        |
| cosine_precision@1  | 0.8333     |
| cosine_precision@3  | 0.3333     |
| cosine_precision@5  | 0.2        |
| cosine_precision@10 | 0.1        |
| cosine_recall@1     | 0.8333     |
| cosine_recall@3     | 1.0        |
| cosine_recall@5     | 1.0        |
| cosine_recall@10    | 1.0        |
| cosine_ndcg@10      | 0.9312     |
| cosine_mrr@10       | 0.9074     |
| **cosine_map@100**  | **0.9074** |

#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.7778     |
| cosine_accuracy@3   | 1.0        |
| cosine_accuracy@5   | 1.0        |
| cosine_accuracy@10  | 1.0        |
| cosine_precision@1  | 0.7778     |
| cosine_precision@3  | 0.3333     |
| cosine_precision@5  | 0.2        |
| cosine_precision@10 | 0.1        |
| cosine_recall@1     | 0.7778     |
| cosine_recall@3     | 1.0        |
| cosine_recall@5     | 1.0        |
| cosine_recall@10    | 1.0        |
| cosine_ndcg@10      | 0.9107     |
| cosine_mrr@10       | 0.8796     |
| **cosine_map@100**  | **0.8796** |

#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.6111     |
| cosine_accuracy@3   | 0.9444     |
| cosine_accuracy@5   | 0.9444     |
| cosine_accuracy@10  | 1.0        |
| cosine_precision@1  | 0.6111     |
| cosine_precision@3  | 0.3148     |
| cosine_precision@5  | 0.1889     |
| cosine_precision@10 | 0.1        |
| cosine_recall@1     | 0.6111     |
| cosine_recall@3     | 0.9444     |
| cosine_recall@5     | 0.9444     |
| cosine_recall@10    | 1.0        |
| cosine_ndcg@10      | 0.8267     |
| cosine_mrr@10       | 0.7685     |
| **cosine_map@100**  | **0.7685** |
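Because each query in this evaluation has exactly one relevant document (the 1:1 positive/anchor pairing), the metrics are tightly related: recall@k equals accuracy@k, and precision@k is accuracy@k divided by k (e.g. precision@5 = 1.0 / 5 = 0.2). A small sketch computing accuracy@k and MRR@k from per-query ranks of the relevant document; the ranks below are illustrative, not the actual evaluation data:

```python
# 1-based rank of the single relevant document for each of 6 example queries
ranks = [1, 1, 1, 1, 1, 2]

def accuracy_at_k(ranks, k):
    # fraction of queries whose relevant doc appears in the top k
    return sum(r <= k for r in ranks) / len(ranks)

def mrr_at_k(ranks, k):
    # mean reciprocal rank, counting 0 when the doc falls outside the top k
    return sum((1 / r if r <= k else 0.0) for r in ranks) / len(ranks)

acc1 = accuracy_at_k(ranks, 1)   # 5/6 ≈ 0.8333, same value as cosine_accuracy@1 above
mrr10 = mrr_at_k(ranks, 10)      # (5 * 1 + 1/2) / 6 ≈ 0.9167
print(acc1, mrr10)
```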

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 160 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
  |         | positive                                                                            | anchor                                                                             |
  |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
  | type    | string                                                                              | string                                                                             |
  | details | <ul><li>min: 18 tokens</li><li>mean: 93.95 tokens</li><li>max: 381 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 20.32 tokens</li><li>max: 34 tokens</li></ul> |
* Samples:
  | positive | anchor |
  |:---------|:-------|
  | <code>Priya Softweb's HR Manual provides valuable insights into the company's culture and values. Key takeaways include: * **Structure and Transparency:** The company emphasizes a structured and transparent approach to its HR processes. This is evident in its clear policies for recruitment, performance appraisals, compensation, leave management, work-from-home arrangements, and incident reporting. * **Professionalism and Ethics:** Priya Softweb places a high value on professionalism and ethical conduct. Its dress code, guidelines for mobile phone usage, and strict policies against tobacco use within the office all point toward a commitment to maintaining a professional and respectful work environment. * **Employee Well-being:** The company demonstrates a genuine concern for the well-being of its employees. This is reflected in its comprehensive leave policies, flexible work-from-home arrangements, and efforts to promote a healthy and clean workspace. * **Diversity and Inclusion:** Priya Softweb is committed to fostering a diverse and inclusive workplace, where employees from all backgrounds feel valued and respected. Its DEI policy outlines the company's commitment to equal opportunities, diverse hiring practices, and inclusive benefits and policies. * **Continuous Learning and Development:** The company encourages a culture of continuous learning and development, providing opportunities for employees to expand their skillsets and stay current with industry advancements. This is evident in its policies for Ethics & Compliance training and its encouragement of utilizing idle time for self-learning and exploring new technologies. Overall, Priya Softweb's HR Manual reveals a company culture that prioritizes structure, transparency, professionalism, employee well-being, diversity, and a commitment to continuous improvement. The company strives to create a supportive and growth-oriented work environment where employees feel valued and empowered to succeed.</code> | <code>What are the key takeaways from Priya Softweb's HR Manual regarding the company's culture and values?</code> |
  | <code>Priya Softweb provides allocated basement parking facilities for employees to park their two-wheelers and four-wheelers. However, parking on the ground floor, around the lawn or main premises, is strictly prohibited as this space is reserved for Directors. Employees should use the parking under wings 5 and 6, while other parking spaces are allocated to different wings. Parking two-wheelers in the car parking zone is not permitted, even if space is available. Two-wheelers should be parked in the designated basement space on the main stand, not on the side stand. Employees are encouraged to park in common spaces on a first-come, first-served basis. The company clarifies that it is not responsible for providing parking and that employees park their vehicles at their own risk. This comprehensive parking policy ensures organized parking arrangements and clarifies the company's liability regarding vehicle safety.</code> | <code>What are the parking arrangements at Priya Softweb?</code> |
  | <code>Investments and declarations must be submitted on or before the 25th of each month through OMS at Priya Softweb.</code> | <code>What is the deadline for submitting investments and declarations at Priya Softweb?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [768, 512, 256, 128, 64],
      "matryoshka_weights": [1, 1, 1, 1, 1],
      "n_dims_per_step": -1
  }
  ```
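MatryoshkaLoss trains the leading components of the embedding to be independently usable, so at inference time vectors can be truncated to 512/256/128/64 dimensions and re-normalized for cheaper storage and search (the evaluation tables above report quality at each width). A NumPy sketch of that truncation step on illustrative data; with sentence-transformers, the `truncate_dim` argument of `SentenceTransformer` gives the same effect:

```python
import numpy as np

# Stand-in for model output: 3 unit-norm embeddings of width 768
rng = np.random.default_rng(0)
full = rng.normal(size=(3, 768))
full /= np.linalg.norm(full, axis=1, keepdims=True)

def truncate(embeddings, dim):
    # Keep the leading `dim` components, then re-normalize so cosine
    # similarity is again a plain dot product in the smaller space.
    cut = embeddings[:, :dim]
    return cut / np.linalg.norm(cut, axis=1, keepdims=True)

small = truncate(full, 256)
print(small.shape)           # (3, 256)
similarities = small @ small.T  # cosine similarities at the reduced width
```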
613
+
614
+ ### Training Hyperparameters
615
+ #### Non-Default Hyperparameters
616
+
617
+ - `eval_strategy`: epoch
618
+ - `per_device_train_batch_size`: 32
619
+ - `per_device_eval_batch_size`: 16
620
+ - `gradient_accumulation_steps`: 16
621
+ - `learning_rate`: 2e-05
622
+ - `num_train_epochs`: 10
623
+ - `lr_scheduler_type`: cosine
624
+ - `warmup_ratio`: 0.1
625
+ - `load_best_model_at_end`: True
626
+ - `optim`: adamw_torch_fused
627
+
628
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: epoch
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 32
+ - `per_device_eval_batch_size`: 16
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 16
+ - `eval_accumulation_steps`: None
+ - `learning_rate`: 2e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 10
+ - `max_steps`: -1
+ - `lr_scheduler_type`: cosine
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: True
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch_fused
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: False
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: proportional
+
+ </details>
+
+ ### Training Logs
+ | Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
+ |:-------:|:-----:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
+ | 0 | 0 | - | 0.5729 | 0.5863 | 0.6595 | 0.5079 | 0.6896 |
+ | 1.0 | 1 | - | 0.6636 | 0.6914 | 0.8213 | 0.6036 | 0.8472 |
+ | 2.0 | 2 | - | 0.7833 | 0.8148 | 0.9352 | 0.7171 | 0.8796 |
+ | 3.0 | 3 | - | 0.8213 | 0.8519 | 0.8981 | 0.7333 | 0.8981 |
+ | 4.0 | 5 | - | 0.8426 | 0.9074 | 0.8981 | 0.75 | 0.8981 |
+ | 5.0 | 6 | - | 0.8426 | 0.9074 | 0.8981 | 0.7685 | 0.8981 |
+ | **6.0** | **7** | **-** | **0.8796** | **0.9074** | **0.8981** | **0.7685** | **0.8981** |
+ | 7.0 | 9 | - | 0.8796 | 0.9074 | 0.8981 | 0.7685 | 0.8981 |
+ | 8.0 | 10 | 0.5275 | 0.8796 | 0.9074 | 0.8981 | 0.7685 | 0.8981 |
+
+ * The bold row denotes the saved checkpoint.
+
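The reported metric is cosine MAP@100, computed by an information-retrieval evaluator at each truncation dimension. As a rough illustration of how average precision at k is computed for one query (a simplified sketch, not the evaluator's exact code):

```python
def average_precision_at_k(ranked_ids, relevant_ids, k=100):
    """AP@k for one query: average of precision@i over the ranks i
    where a relevant document appears, normalized by min(|relevant|, k).
    MAP@k is the mean of this value over all queries."""
    hits = 0
    score = 0.0
    for i, doc_id in enumerate(ranked_ids[:k], start=1):
        if doc_id in relevant_ids:
            hits += 1
            score += hits / i
    denom = min(len(relevant_ids), k)
    return score / denom if denom else 0.0

# Toy example: the single relevant doc is ranked 2nd -> AP@100 = 1/2
print(average_precision_at_k(["b", "a", "c"], {"a"}))  # 0.5
```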
+ ### Framework Versions
+ - Python: 3.10.12
+ - Sentence Transformers: 3.0.1
+ - Transformers: 4.41.2
+ - PyTorch: 2.1.2+cu121
+ - Accelerate: 0.31.0
+ - Datasets: 2.19.1
+ - Tokenizers: 0.19.1
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+ title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+ author = "Reimers, Nils and Gurevych, Iryna",
+ booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+ month = "11",
+ year = "2019",
+ publisher = "Association for Computational Linguistics",
+ url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MatryoshkaLoss
+ ```bibtex
+ @misc{kusupati2024matryoshka,
+ title={Matryoshka Representation Learning},
+ author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
+ year={2024},
+ eprint={2205.13147},
+ archivePrefix={arXiv},
+ primaryClass={cs.LG}
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+ title={Efficient Natural Language Response Suggestion for Smart Reply},
+ author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+ year={2017},
+ eprint={1705.00652},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,32 @@
+ {
+ "_name_or_path": "BAAI/bge-base-en-v1.5",
+ "architectures": [
+ "BertModel"
+ ],
+ "attention_probs_dropout_prob": 0.1,
+ "classifier_dropout": null,
+ "gradient_checkpointing": false,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 768,
+ "id2label": {
+ "0": "LABEL_0"
+ },
+ "initializer_range": 0.02,
+ "intermediate_size": 3072,
+ "label2id": {
+ "LABEL_0": 0
+ },
+ "layer_norm_eps": 1e-12,
+ "max_position_embeddings": 512,
+ "model_type": "bert",
+ "num_attention_heads": 12,
+ "num_hidden_layers": 12,
+ "pad_token_id": 0,
+ "position_embedding_type": "absolute",
+ "torch_dtype": "float32",
+ "transformers_version": "4.41.2",
+ "type_vocab_size": 2,
+ "use_cache": true,
+ "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "__version__": {
+ "sentence_transformers": "3.0.1",
+ "transformers": "4.41.2",
+ "pytorch": "2.1.2+cu121"
+ },
+ "prompts": {},
+ "default_prompt_name": null,
+ "similarity_fn_name": null
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:da33d6e2ceeaaf94662759c7e8158ac1a18cbba4140f4c8ec7abc23ece99897f
+ size 437951328
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+ {
+ "idx": 0,
+ "name": "0",
+ "path": "",
+ "type": "sentence_transformers.models.Transformer"
+ },
+ {
+ "idx": 1,
+ "name": "1",
+ "path": "1_Pooling",
+ "type": "sentence_transformers.models.Pooling"
+ },
+ {
+ "idx": 2,
+ "name": "2",
+ "path": "2_Normalize",
+ "type": "sentence_transformers.models.Normalize"
+ }
+ ]
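These three modules run in sequence: the Transformer yields per-token hidden states, Pooling keeps only the `[CLS]` token (per `pooling_mode_cls_token: true` in `1_Pooling/config.json`), and Normalize L2-normalizes the result. A minimal numpy sketch of the last two stages (the token states are random stand-ins for real Transformer output):

```python
import numpy as np

def cls_pool_and_normalize(token_embeddings: np.ndarray) -> np.ndarray:
    """token_embeddings: (seq_len, hidden) per-token states from the
    Transformer module. CLS pooling keeps position 0; Normalize then
    L2-normalizes, so dot product equals cosine similarity."""
    cls = token_embeddings[0]          # 1_Pooling: pooling_mode_cls_token
    return cls / np.linalg.norm(cls)   # 2_Normalize

rng = np.random.default_rng(1)
states = rng.standard_normal((12, 768))  # seq_len=12, hidden_size=768
emb = cls_pool_and_normalize(states)
print(emb.shape)  # (768,)
```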
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+ "max_seq_length": 512,
+ "do_lower_case": true
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+ "cls_token": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "mask_token": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "sep_token": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,57 @@
+ {
+ "added_tokens_decoder": {
+ "0": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "100": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "101": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "102": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "103": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "clean_up_tokenization_spaces": true,
+ "cls_token": "[CLS]",
+ "do_basic_tokenize": true,
+ "do_lower_case": true,
+ "mask_token": "[MASK]",
+ "model_max_length": 512,
+ "never_split": null,
+ "pad_token": "[PAD]",
+ "sep_token": "[SEP]",
+ "strip_accents": null,
+ "tokenize_chinese_chars": true,
+ "tokenizer_class": "BertTokenizer",
+ "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff