---
language:
  - en
license: apache-2.0
library_name: sentence-transformers
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - dataset_size:1K<n<10K
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-base-en-v1.5
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
widget:
  - source_sentence: What is the title of Item 6 in the text?
    sentences:
      - What is the title of Item 8 in the document?
      - What was the total premiums revenue for the Insurance segment in 2023?
      - >-
        How much were the net cash flows from investing activities in 2023 and
        2022?
  - source_sentence: Which markets does Garmin primarily serve?
    sentences:
      - What types of products are offered in Garmin's Fitness segment?
      - In 2023, AbbVie's net revenue in the United States was $41,883 million.
      - >-
        As of December 31, 2023, the total deferred income tax asset was
        $1,157,486.
  - source_sentence: What was the effective tax rate in 2023?
    sentences:
      - What was the effective tax rate for 2023 and how did it compare to 2022?
      - What are the various diversity, equity, and inclusion councils at AMC?
      - >-
        What is the title of the section that discusses legal issues in the
        document?
  - source_sentence: What begins on page 105 of this report?
    sentences:
      - >-
        Where can one find the details pertaining to Legal Proceedings in the
        report?
      - What are the technological features of the GeForce RTX 40 Series GPUs?
      - >-
        Changes in foreign exchange rates reduced cost of sales by $254 million
        in 2023.
  - source_sentence: How is Costco's fiscal year structured?
    sentences:
      - How many weeks did the fiscal years 2023 and 2022 include?
      - What is the process for using reinsurers not on the authorized list?
      - >-
        What contributed to the increase in Google Services' operating income in
        2023?
pipeline_tag: sentence-similarity
model-index:
  - name: BGE base Financial Matryoshka
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 768
          type: dim_768
        metrics:
          - type: cosine_accuracy@1
            value: 0.6814285714285714
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.8128571428571428
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.85
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9028571428571428
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.6814285714285714
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.27095238095238094
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.16999999999999996
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09028571428571427
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.6814285714285714
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.8128571428571428
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.85
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9028571428571428
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.7916721734405803
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.7562692743764173
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7609992859917654
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 512
          type: dim_512
        metrics:
          - type: cosine_accuracy@1
            value: 0.6842857142857143
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.8114285714285714
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8528571428571429
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.8985714285714286
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.6842857142857143
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.2704761904761905
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.17057142857142854
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.08985714285714284
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.6842857142857143
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.8114285714285714
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8528571428571429
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.8985714285714286
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.7909210075399126
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.756487528344671
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.761586340523296
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 256
          type: dim_256
        metrics:
          - type: cosine_accuracy@1
            value: 0.6785714285714286
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.8085714285714286
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8428571428571429
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.8942857142857142
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.6785714285714286
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.2695238095238095
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.16857142857142854
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.08942857142857143
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.6785714285714286
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.8085714285714286
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8428571428571429
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.8942857142857142
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.7866298497982406
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.752303287981859
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7571741668436585
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 128
          type: dim_128
        metrics:
          - type: cosine_accuracy@1
            value: 0.6714285714285714
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.7857142857142857
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8257142857142857
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.8814285714285715
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.6714285714285714
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.2619047619047619
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.16514285714285715
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.08814285714285713
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.6714285714285714
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.7857142857142857
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8257142857142857
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.8814285714285715
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.7742856481999635
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.740471655328798
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.745692801681558
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 64
          type: dim_64
        metrics:
          - type: cosine_accuracy@1
            value: 0.6371428571428571
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.7685714285714286
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8071428571428572
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.8614285714285714
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.6371428571428571
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.2561904761904762
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.16142857142857142
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.08614285714285713
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.6371428571428571
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.7685714285714286
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8071428571428572
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.8614285714285714
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.7500703607138253
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.7145918367346937
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7198995734568113
            name: Cosine Map@100
---

BGE base Financial Matryoshka

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0

Model Sources
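
  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)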

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
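
The same three-stage pipeline can be approximated with transformers directly; a minimal, purely illustrative sketch (SentenceTransformer runs these modules for you):

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-base-en-v1.5")
encoder = AutoModel.from_pretrained("BAAI/bge-base-en-v1.5")

# (0) Transformer: tokenize (lowercasing is handled by the uncased tokenizer)
batch = tokenizer(
    ["How is Costco's fiscal year structured?"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    output = encoder(**batch)

# (1) Pooling: keep only the [CLS] token embedding
cls_embedding = output.last_hidden_state[:, 0]
# (2) Normalize: scale to unit length, so dot product equals cosine similarity
embedding = torch.nn.functional.normalize(cls_embedding, p=2, dim=1)
print(embedding.shape)
# torch.Size([1, 768])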

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("gK29382231121/bge-base-financial-matryoshka")
# Run inference
sentences = [
    "How is Costco's fiscal year structured?",
    'How many weeks did the fiscal years 2023 and 2022 include?',
    'What is the process for using reinsurers not on the authorized list?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
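
Because the model was trained with MatryoshkaLoss, its embeddings can also be truncated to a smaller dimensionality with only a modest drop in retrieval quality (see the per-dimension metrics below). A sketch using the truncate_dim argument; 256 is one of the trained dimensions:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer(
    "gK29382231121/bge-base-financial-matryoshka",
    truncate_dim=256,  # truncate embeddings to one of the Matryoshka dims
)
embeddings = model.encode([
    "How is Costco's fiscal year structured?",
    "How many weeks did the fiscal years 2023 and 2022 include?",
])
print(embeddings.shape)
# (2, 256)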

Evaluation

Metrics

The five tables below report retrieval metrics at each Matryoshka truncation dimension, from 768 down to 64.

Information Retrieval (dim 768)

Metric Value
cosine_accuracy@1 0.6814
cosine_accuracy@3 0.8129
cosine_accuracy@5 0.85
cosine_accuracy@10 0.9029
cosine_precision@1 0.6814
cosine_precision@3 0.271
cosine_precision@5 0.17
cosine_precision@10 0.0903
cosine_recall@1 0.6814
cosine_recall@3 0.8129
cosine_recall@5 0.85
cosine_recall@10 0.9029
cosine_ndcg@10 0.7917
cosine_mrr@10 0.7563
cosine_map@100 0.761

Information Retrieval (dim 512)

Metric Value
cosine_accuracy@1 0.6843
cosine_accuracy@3 0.8114
cosine_accuracy@5 0.8529
cosine_accuracy@10 0.8986
cosine_precision@1 0.6843
cosine_precision@3 0.2705
cosine_precision@5 0.1706
cosine_precision@10 0.0899
cosine_recall@1 0.6843
cosine_recall@3 0.8114
cosine_recall@5 0.8529
cosine_recall@10 0.8986
cosine_ndcg@10 0.7909
cosine_mrr@10 0.7565
cosine_map@100 0.7616

Information Retrieval (dim 256)

Metric Value
cosine_accuracy@1 0.6786
cosine_accuracy@3 0.8086
cosine_accuracy@5 0.8429
cosine_accuracy@10 0.8943
cosine_precision@1 0.6786
cosine_precision@3 0.2695
cosine_precision@5 0.1686
cosine_precision@10 0.0894
cosine_recall@1 0.6786
cosine_recall@3 0.8086
cosine_recall@5 0.8429
cosine_recall@10 0.8943
cosine_ndcg@10 0.7866
cosine_mrr@10 0.7523
cosine_map@100 0.7572

Information Retrieval (dim 128)

Metric Value
cosine_accuracy@1 0.6714
cosine_accuracy@3 0.7857
cosine_accuracy@5 0.8257
cosine_accuracy@10 0.8814
cosine_precision@1 0.6714
cosine_precision@3 0.2619
cosine_precision@5 0.1651
cosine_precision@10 0.0881
cosine_recall@1 0.6714
cosine_recall@3 0.7857
cosine_recall@5 0.8257
cosine_recall@10 0.8814
cosine_ndcg@10 0.7743
cosine_mrr@10 0.7405
cosine_map@100 0.7457

Information Retrieval (dim 64)

Metric Value
cosine_accuracy@1 0.6371
cosine_accuracy@3 0.7686
cosine_accuracy@5 0.8071
cosine_accuracy@10 0.8614
cosine_precision@1 0.6371
cosine_precision@3 0.2562
cosine_precision@5 0.1614
cosine_precision@10 0.0861
cosine_recall@1 0.6371
cosine_recall@3 0.7686
cosine_recall@5 0.8071
cosine_recall@10 0.8614
cosine_ndcg@10 0.7501
cosine_mrr@10 0.7146
cosine_map@100 0.7199
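
These tables come from the sentence-transformers InformationRetrievalEvaluator, run once per truncation dimension. A minimal sketch of re-running one of them, with toy stand-ins for the queries, corpus, and relevance judgments (the real evaluation presumably used a held-out split of the financial Q&A data):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("gK29382231121/bge-base-financial-matryoshka")

# Toy stand-ins: ids mapped to query text, passage text, and relevant passage ids
queries = {"q1": "How is Costco's fiscal year structured?"}
corpus = {"d1": "Costco operates on a 52/53-week fiscal year."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_256",
    truncate_dim=256,  # evaluate at one Matryoshka truncation dimension
)
results = evaluator(model)
print(results["dim_256_cosine_ndcg@10"])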

Training Details

Training Dataset

Unnamed Dataset

  • Size: 6,300 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    Column    Type    Tokens (min / mean / max)
    positive  string  8 / 45.34 / 439
    anchor    string  2 / 20.47 / 51
  • Samples:
    positive: The HP GreenValley edge-to-cloud platform is used for software-defined disaggregated storage services that include HPE GreenLake for Block Storage and HPE GreenLake for File Storage, and it provides unified cloud-based management to simplify how customers manage storage.
    anchor: What are the focus areas for the HP GreenLake platform?

    positive: Net income $

    positive: Deferred tax assets and deferred tax liabilities included in the Consolidated Balance Sheets as follows: As of October 31, 2023: Deferred tax assets were $3,155 million and Deferred tax liabilities were $44 million. As of October 31, 2022: Deferred tax assets were $2,167 million and Deferred tax liabilities were $121 million. The total net deferred tax assets were $3,111 million in 2023 and $2,046 million in 2022.
    anchor: What was the change in HP's net deferred tax assets from 2022 to 2023?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
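
In code, this configuration corresponds to wrapping MultipleNegativesRankingLoss (in-batch negatives over the anchor/positive pairs) in MatryoshkaLoss, roughly as follows:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Rank each paired positive above the other in-batch passages
inner_loss = MultipleNegativesRankingLoss(model)

# Apply the same loss at each truncated embedding size, equally weighted
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)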
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
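
Mapped onto the sentence-transformers v3 training API, these settings look roughly like this (output_dir is illustrative, not from the source):

from sentence_transformers.training_args import (
    BatchSamplers,
    SentenceTransformerTrainingArguments,
)

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # assumption: any local path works
    num_train_epochs=4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    optim="adamw_torch_fused",
    eval_strategy="epoch",
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # no duplicate texts within a batch
)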

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
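
Putting the pieces together with the v3 trainer; a sketch in which the one-row dataset stands in for the 6,300-pair training set described above, and the loss and arguments are condensed versions of the earlier sketches:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

# Toy stand-in for the real dataset ("positive" and "anchor" columns)
train_dataset = Dataset.from_dict({
    "positive": ["Costco's fiscal year consists of 52 or 53 weeks."],
    "anchor": ["How is Costco's fiscal year structured?"],
})

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # illustrative path
    num_train_epochs=4,
)  # plus the remaining non-default hyperparameters listed earlier

trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss
)
trainer.train()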

Training Logs

Each dim_* column reports cosine_map@100 on the evaluation split at that truncation dimension.

Epoch    Step   Training Loss   dim_128   dim_256   dim_512   dim_64    dim_768
0.8122   10     1.5361          -         -         -         -         -
0.9746   12     -               0.7280    0.7414    0.7494    0.6896    0.7470
1.6244   20     0.6833          -         -         -         -         -
1.9492   24     -               0.7426    0.7487    0.7573    0.7138    0.7592
2.4365   30     0.4674          -         -         -         -         -
2.9239   36     -               0.7452    0.7558    0.7624    0.7190    0.7623
3.2487   40     0.4038          -         -         -         -         -
3.8985   48     -               0.7457    0.7572    0.7616    0.7199    0.7610

Framework Versions

  • Python: 3.10.14
  • Sentence Transformers: 3.0.0
  • Transformers: 4.41.2
  • PyTorch: 2.1.2+cu121
  • Accelerate: 0.30.1
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}