
SentenceTransformer based on BAAI/bge-small-en

This is a sentence-transformers model finetuned from BAAI/bge-small-en. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-small-en
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: https://www.sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers
  • Hugging Face: https://huggingface.co/models?library=sentence-transformers

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
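
The pipeline ends in a Normalize() module, so every embedding is scaled to unit length; dot product and cosine similarity therefore yield identical scores, which is why the cosine_* and dot_* metrics in the Evaluation section below match exactly. A minimal sketch to confirm this:

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Areeb-02/bge-small-en-MultiplrRankingLoss-30-Rag-paper-dataset")

# The trailing Normalize() module makes every embedding unit-length.
emb = model.encode(["a short test sentence", "another sentence"])
print(np.linalg.norm(emb, axis=1))  # ~[1.0, 1.0]

# For unit vectors, cosine similarity reduces to a plain dot product.
print(emb[0] @ emb[1])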

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Areeb-02/bge-small-en-MultiplrRankingLoss-30-Rag-paper-dataset")
# Run inference
sentences = [
    'Compare the top-5 retrieval accuracy of BM25 + MQ and SERM + BF for the NQ Dataset and HotpotQA.',
    "For the NQ Dataset, SERM + BF has a top-5 retrieval accuracy of 88.22, which is significantly higher than BM25 + MQ's accuracy of 25.19. For HotpotQA, SERM + BF was not tested, but BM25 + MQ has a top-5 retrieval accuracy of 49.52.",
    'The proof for Equation 5 progresses from Equation 20 to Equation 22 by applying the transformation motivated by Xie et al. [2021] and introducing the term p(R, x1:iβˆ’1|z) to the equation.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
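
Beyond pairwise similarity, a common pattern is semantic search over a pre-encoded corpus. A minimal sketch (the corpus and query strings here are made up for illustration):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Areeb-02/bge-small-en-MultiplrRankingLoss-30-Rag-paper-dataset")

# Encode the corpus once, then score queries against it.
corpus = [
    "Retrieval-Augmented Generation combines a retriever with a generator.",
    "BM25 is a classic sparse retrieval baseline.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query_embedding = model.encode("What is RAG?", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(round(hit["score"], 4), corpus[hit["corpus_id"]])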

Evaluation

Metrics

Information Retrieval

Note that because the model L2-normalizes its embeddings (the Normalize() module in the architecture above), the dot-product metrics in each table below are identical to the corresponding cosine metrics.

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.0178 |
| cosine_accuracy@3   | 0.0436 |
| cosine_accuracy@5   | 0.0653 |
| cosine_accuracy@10  | 0.1248 |
| cosine_precision@1  | 0.0178 |
| cosine_precision@3  | 0.0158 |
| cosine_precision@5  | 0.016  |
| cosine_precision@10 | 0.0158 |
| cosine_recall@1     | 0.0    |
| cosine_recall@3     | 0.0    |
| cosine_recall@5     | 0.0001 |
| cosine_recall@10    | 0.0002 |
| cosine_ndcg@10      | 0.0163 |
| cosine_mrr@10       | 0.0423 |
| cosine_map@100      | 0.0019 |
| dot_accuracy@1      | 0.0178 |
| dot_accuracy@3      | 0.0436 |
| dot_accuracy@5      | 0.0653 |
| dot_accuracy@10     | 0.1248 |
| dot_precision@1     | 0.0178 |
| dot_precision@3     | 0.0158 |
| dot_precision@5     | 0.016  |
| dot_precision@10    | 0.0158 |
| dot_recall@1        | 0.0    |
| dot_recall@3        | 0.0    |
| dot_recall@5        | 0.0001 |
| dot_recall@10       | 0.0002 |
| dot_ndcg@10         | 0.0163 |
| dot_mrr@10          | 0.0423 |
| dot_map@100         | 0.0019 |

Information Retrieval

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.0198 |
| cosine_accuracy@3   | 0.0406 |
| cosine_accuracy@5   | 0.0653 |
| cosine_accuracy@10  | 0.1267 |
| cosine_precision@1  | 0.0198 |
| cosine_precision@3  | 0.0149 |
| cosine_precision@5  | 0.0149 |
| cosine_precision@10 | 0.0168 |
| cosine_recall@1     | 0.0    |
| cosine_recall@3     | 0.0    |
| cosine_recall@5     | 0.0001 |
| cosine_recall@10    | 0.0002 |
| cosine_ndcg@10      | 0.0168 |
| cosine_mrr@10       | 0.0425 |
| cosine_map@100      | 0.0021 |
| dot_accuracy@1      | 0.0198 |
| dot_accuracy@3      | 0.0406 |
| dot_accuracy@5      | 0.0653 |
| dot_accuracy@10     | 0.1267 |
| dot_precision@1     | 0.0198 |
| dot_precision@3     | 0.0149 |
| dot_precision@5     | 0.0149 |
| dot_precision@10    | 0.0168 |
| dot_recall@1        | 0.0    |
| dot_recall@3        | 0.0    |
| dot_recall@5        | 0.0001 |
| dot_recall@10       | 0.0002 |
| dot_ndcg@10         | 0.0168 |
| dot_mrr@10          | 0.0425 |
| dot_map@100         | 0.0021 |

Information Retrieval

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.0188 |
| cosine_accuracy@3   | 0.0376 |
| cosine_accuracy@5   | 0.0644 |
| cosine_accuracy@10  | 0.1307 |
| cosine_precision@1  | 0.0188 |
| cosine_precision@3  | 0.0139 |
| cosine_precision@5  | 0.0158 |
| cosine_precision@10 | 0.0172 |
| cosine_recall@1     | 0.0    |
| cosine_recall@3     | 0.0    |
| cosine_recall@5     | 0.0001 |
| cosine_recall@10    | 0.0002 |
| cosine_ndcg@10      | 0.017  |
| cosine_mrr@10       | 0.0419 |
| cosine_map@100      | 0.0023 |
| dot_accuracy@1      | 0.0188 |
| dot_accuracy@3      | 0.0376 |
| dot_accuracy@5      | 0.0644 |
| dot_accuracy@10     | 0.1307 |
| dot_precision@1     | 0.0188 |
| dot_precision@3     | 0.0139 |
| dot_precision@5     | 0.0158 |
| dot_precision@10    | 0.0172 |
| dot_recall@1        | 0.0    |
| dot_recall@3        | 0.0    |
| dot_recall@5        | 0.0001 |
| dot_recall@10       | 0.0002 |
| dot_ndcg@10         | 0.017  |
| dot_mrr@10          | 0.0419 |
| dot_map@100         | 0.0023 |

Information Retrieval

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.0188 |
| cosine_accuracy@3   | 0.0366 |
| cosine_accuracy@5   | 0.0644 |
| cosine_accuracy@10  | 0.1307 |
| cosine_precision@1  | 0.0188 |
| cosine_precision@3  | 0.0135 |
| cosine_precision@5  | 0.0156 |
| cosine_precision@10 | 0.0172 |
| cosine_recall@1     | 0.0    |
| cosine_recall@3     | 0.0    |
| cosine_recall@5     | 0.0001 |
| cosine_recall@10    | 0.0002 |
| cosine_ndcg@10      | 0.017  |
| cosine_mrr@10       | 0.0418 |
| cosine_map@100      | 0.0022 |
| dot_accuracy@1      | 0.0188 |
| dot_accuracy@3      | 0.0366 |
| dot_accuracy@5      | 0.0644 |
| dot_accuracy@10     | 0.1307 |
| dot_precision@1     | 0.0188 |
| dot_precision@3     | 0.0135 |
| dot_precision@5     | 0.0156 |
| dot_precision@10    | 0.0172 |
| dot_recall@1        | 0.0    |
| dot_recall@3        | 0.0    |
| dot_recall@5        | 0.0001 |
| dot_recall@10       | 0.0002 |
| dot_ndcg@10         | 0.017  |
| dot_mrr@10          | 0.0418 |
| dot_map@100         | 0.0022 |
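
Metrics in this format are the standard output of the library's InformationRetrievalEvaluator, which scores with both cos_sim and dot_score by default. A minimal sketch of producing such a report (the toy queries, corpus, and relevance labels below are illustrative, not the actual evaluation data):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Areeb-02/bge-small-en-MultiplrRankingLoss-30-Rag-paper-dataset")

# Toy evaluation data: query id -> text, doc id -> text, query id -> relevant doc ids.
queries = {"q1": "What does the MultiHop-RAG dataset consist of?"}
corpus = {"d1": "The MultiHop-RAG dataset consists of a knowledge base, multi-hop queries, ground-truth answers, and supporting evidence."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
results = evaluator(model)
print(results)  # accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100 for cosine and dot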

Training Details

Training Dataset

Unnamed Dataset

  • Size: 1,010 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    |         | anchor                                            | positive                                           |
    |:--------|:--------------------------------------------------|:---------------------------------------------------|
    | type    | string                                            | string                                             |
    | details | min: 2 tokens, mean: 21.28 tokens, max: 59 tokens | min: 2 tokens, mean: 40.15 tokens, max: 129 tokens |
  • Samples:
    | anchor | positive |
    |:-------|:---------|
    | What is the purpose of the MultiHop-RAG dataset and what does it consist of? | The MultiHop-RAG dataset is developed to benchmark Retrieval-Augmented Generation (RAG) for multi-hop queries. It consists of a knowledge base, a large collection of multi-hop queries, their ground-truth answers, and the associated supporting evidence. The dataset is built using an English news article dataset as the underlying RAG knowledge base. |
    | Among Google, Apple, and Nvidia, which company reported the largest profit margins in their third-quarter reports for the fiscal year 2023? | Apple reported the largest profit margins in their third-quarter reports for the fiscal year 2023. |
    | Under what circumstances should the LLM answer the questions? | The LLM should answer the questions based solely on the information provided in the paragraphs, and it should not use any other information. |
  • Loss: MultipleNegativesRankingLoss (see the sketch after this list) with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
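
As referenced above, a minimal sketch of constructing this loss with the listed parameters; with in-batch negatives, each anchor is trained to rank its paired positive above every other positive in the batch:

from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("BAAI/bge-small-en")

# Matches the parameters above: similarities are computed with cos_sim and
# scaled by 20 before the softmax cross-entropy over in-batch negatives.
loss = losses.MultipleNegativesRankingLoss(
    model,
    scale=20.0,
    similarity_fct=util.cos_sim,
)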
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 10
  • warmup_ratio: 0.1
  • fp16: True
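
A minimal sketch of how these settings plug into the Sentence Transformers v3 trainer API; the inline one-row dataset and the output path are placeholders, not the original training setup:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("BAAI/bge-small-en")

# Tiny inline (anchor, positive) dataset for illustration; the real run used
# the 1,010-pair dataset described above.
train_dataset = Dataset.from_dict({
    "anchor": ["What does RAG stand for?"],
    "positive": ["Retrieval-Augmented Generation."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="bge-small-en-finetuned",  # hypothetical output path
    num_train_epochs=10,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    warmup_ratio=0.1,
    fp16=True,              # requires a CUDA device
    eval_strategy="steps",  # needs an eval dataset or evaluator, as below
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # placeholder; use a held-out split in practice
    loss=losses.MultipleNegativesRankingLoss(model),
)
trainer.train()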

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

| Epoch  | Step | Training Loss | cosine_map@100 |
|:-------|:-----|:--------------|:---------------|
| 0      | 0    | -             | 0.0018         |
| 1.5625 | 100  | -             | 0.0019         |
| 3.0    | 192  | -             | 0.0020         |
| 1.5625 | 100  | -             | 0.0021         |
| 3.125  | 200  | -             | 0.0020         |
| 4.6875 | 300  | -             | 0.0021         |
| 5.0    | 320  | -             | 0.0020         |
| 1.5625 | 100  | -             | 0.0020         |
| 3.125  | 200  | -             | 0.0021         |
| 4.6875 | 300  | -             | 0.0022         |
| 1.5625 | 100  | -             | 0.0021         |
| 3.125  | 200  | -             | 0.0019         |
| 4.6875 | 300  | -             | 0.0022         |
| 6.25   | 400  | -             | 0.0022         |
| 7.8125 | 500  | 0.0021        | 0.0022         |
| 9.375  | 600  | -             | 0.0023         |
| 10.0   | 640  | -             | 0.0022         |

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.42.3
  • PyTorch: 2.3.0+cu121
  • Accelerate: 0.32.1
  • Datasets: 2.20.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}