SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
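
The Pooling and Normalize modules above mean that each embedding is a mean over the token embeddings of the underlying BertModel, followed by L2 normalization. For illustration only, the same computation can be written against the plain transformers API (this sketch assumes the repository ships its tokenizer alongside the weights):

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Load the underlying BertModel and its tokenizer directly.
model_id = "sartifyllc/swahili-all-MiniLM-L6-v2-nli-matryoshka"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["Mwanamume amelala uso chini kwenye benchi ya bustani."]
encoded = tokenizer(sentences, padding=True, truncation=True, max_length=256, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # (batch, seq_len, 384)

# Mean pooling over non-padding tokens, mirroring the Pooling module.
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# L2 normalization, mirroring the Normalize module.
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 384])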

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sartifyllc/swahili-all-MiniLM-L6-v2-nli-matryoshka")
# Run inference
sentences = [
    # "A man wearing a blue windbreaker is lying face down on a park bench,
    # with a bottle of liquor tied to one of the bench's legs."
    'Mwanamume aliyevalia koti la bluu la kuzuia upepo, amelala uso chini kwenye benchi ya bustani, akiwa na chupa ya pombe iliyofungwa kwenye mojawapo ya miguu ya benchi.',
    # "A man is lying face down on a park bench."
    'Mwanamume amelala uso chini kwenye benchi ya bustani.',
    # "Some man is dancing at that club while opening a bottle."
    'Mwanamume fulani anacheza dansi kwenye klabu hiyo akifungua chupa.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
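
The same embeddings can also drive semantic search over a small corpus. Below is a minimal sketch using util.semantic_search; the corpus and query are made-up examples, not data from the training set:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sartifyllc/swahili-all-MiniLM-L6-v2-nli-matryoshka")

# Illustrative corpus and query (any Swahili sentences work).
corpus = [
    "Mwanamume amelala uso chini kwenye benchi ya bustani.",
    "Mwanamume fulani anacheza dansi kwenye klabu hiyo akifungua chupa.",
]
query = "Mtu amelala kwenye benchi."  # "A person is lying on a bench."

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Each query gets a ranked list of {"corpus_id", "score"} hits.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], round(hit["score"], 4))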

Evaluation

Metrics

Semantic Similarity (sts-test-256)

Metric Value
pearson_cosine 0.6943
spearman_cosine 0.6856
pearson_manhattan 0.6885
spearman_manhattan 0.6872
pearson_euclidean 0.6915
spearman_euclidean 0.6906
pearson_dot 0.6799
spearman_dot 0.6677
pearson_max 0.6943
spearman_max 0.6906

Semantic Similarity (sts-test-128)

Metric Value
pearson_cosine 0.6892
spearman_cosine 0.6814
pearson_manhattan 0.6968
spearman_manhattan 0.6920
pearson_euclidean 0.7001
spearman_euclidean 0.6960
pearson_dot 0.6365
spearman_dot 0.6190
pearson_max 0.7001
spearman_max 0.6960

Semantic Similarity (sts-test-64)

Metric Value
pearson_cosine 0.6782
spearman_cosine 0.6755
pearson_manhattan 0.6962
spearman_manhattan 0.6891
pearson_euclidean 0.6996
spearman_euclidean 0.6938
pearson_dot 0.5812
spearman_dot 0.5607
pearson_max 0.6996
spearman_max 0.6938

Training Details

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • fp16: True
  • batch_sampler: no_duplicates
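
For reference, a training run with these non-default settings and the losses cited at the end of this card (MatryoshkaLoss wrapping MultipleNegativesRankingLoss) could be sketched as below. The dataset path and the exact Matryoshka dimensions are assumptions for illustration, not values taken from this card:

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

# Base model listed under Model Description.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Hypothetical NLI-style (anchor, positive, negative) triplet dataset.
train_dataset = load_dataset("your-namespace/swahili-nli-triplets", split="train")

# Losses from the Citation section; 256/128/64 match the evaluated sizes.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="swahili-all-MiniLM-L6-v2-nli-matryoshka",
    num_train_epochs=1,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate in-batch negatives
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()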

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • prediction_loss_only: True
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss sts-test-128_spearman_cosine sts-test-256_spearman_cosine sts-test-64_spearman_cosine
0.0229 100 12.9498 - - -
0.0459 200 9.9003 - - -
0.0688 300 8.6333 - - -
0.0918 400 8.0124 - - -
0.1147 500 7.2322 - - -
0.1376 600 6.936 - - -
0.1606 700 7.2855 - - -
0.1835 800 6.5985 - - -
0.2065 900 6.4369 - - -
0.2294 1000 6.2767 - - -
0.2524 1100 6.4011 - - -
0.2753 1200 6.1288 - - -
0.2982 1300 6.1466 - - -
0.3212 1400 5.9279 - - -
0.3441 1500 5.8959 - - -
0.3671 1600 5.5911 - - -
0.3900 1700 5.5258 - - -
0.4129 1800 5.5835 - - -
0.4359 1900 5.4701 - - -
0.4588 2000 5.3888 - - -
0.4818 2100 5.4474 - - -
0.5047 2200 5.1465 - - -
0.5276 2300 5.28 - - -
0.5506 2400 5.4184 - - -
0.5735 2500 5.3811 - - -
0.5965 2600 5.2171 - - -
0.6194 2700 5.3212 - - -
0.6423 2800 5.2493 - - -
0.6653 2900 5.459 - - -
0.6882 3000 5.068 - - -
0.7112 3100 5.1415 - - -
0.7341 3200 5.0764 - - -
0.7571 3300 6.1606 - - -
0.7800 3400 6.1028 - - -
0.8029 3500 5.7441 - - -
0.8259 3600 5.7148 - - -
0.8488 3700 5.4799 - - -
0.8718 3800 5.4396 - - -
0.8947 3900 5.3519 - - -
0.9176 4000 5.2394 - - -
0.9406 4100 5.2311 - - -
0.9635 4200 5.3486 - - -
0.9865 4300 5.215 - - -
1.0 4359 - 0.6814 0.6856 0.6755

Framework Versions

  • Python: 3.11.9
  • Sentence Transformers: 3.0.1
  • Transformers: 4.40.1
  • PyTorch: 2.3.0+cu121
  • Accelerate: 0.29.3
  • Datasets: 2.19.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}