bge-base-en-v1.5-klej-dyk

This is a sentence-transformers model fine-tuned from BAAI/bge-base-en-v1.5 on Polish question-passage pairs from the KLEJ DYK ("Czy wiesz?") task. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0
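
The properties above can be checked programmatically once the model is loaded; a minimal sketch (the repository ID below is a placeholder):

from sentence_transformers import SentenceTransformer

# Placeholder ID; replace with this model's repository on the Hub.
model = SentenceTransformer("sentence_transformers_model_id")

print(model.get_max_seq_length())                # 512
print(model.get_sentence_embedding_dimension())  # 768
print(model.similarity_fn_name)                  # "cosine"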

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
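
The three modules correspond to the BERT encoder, CLS-token pooling, and L2 normalization. For illustration only, a rough equivalent using 🤗 Transformers directly (shown with the base model ID; the fine-tuned weights live in this repository):

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Base model used here purely to illustrate the pipeline.
tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-base-en-v1.5")
model = AutoModel.from_pretrained("BAAI/bge-base-en-v1.5")

batch = tokenizer(["example sentence"], padding=True, truncation=True,
                  max_length=512, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (0) Transformer: contextual token embeddings
cls = hidden[:, 0]                             # (1) Pooling: keep only the [CLS] token
embedding = F.normalize(cls, p=2, dim=1)       # (2) Normalize: unit-length 768-dim vectors
print(embedding.shape)                         # torch.Size([1, 768])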

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub (replace the placeholder below with this model's repository ID)
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    'ile katod ma duodioda?',
    'kto nosi mantyle?',
    'w jakim celu nowożeńcom w Korei wręcza się injeolmi?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
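
Since the model is evaluated on retrieval below, a common follow-up is to embed a query and a corpus separately and rank the corpus by cosine similarity. A sketch using the library's semantic_search helper (repository ID and texts are placeholders):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder ID

corpus = [
    "Najgłębsza znana studnia krasowa to jaskinia Vrtoglavica w Słowenii.",
    "Chleb razowy zawiera większą ilość błonnika niż chleb biały.",
]
query = "które składniki razowego chleba odpowiadają za jego walory zdrowotne?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# One list of top-k hits per query, sorted by cosine similarity.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], hit["score"])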

Evaluation

Metrics

The five blocks below report the same retrieval metrics with embeddings truncated to each Matryoshka dimension: 768, 512, 256, 128, and 64, in that order.

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.2043
cosine_accuracy@3 0.5024
cosine_accuracy@5 0.6803
cosine_accuracy@10 0.7548
cosine_precision@1 0.2043
cosine_precision@3 0.1675
cosine_precision@5 0.1361
cosine_precision@10 0.0755
cosine_recall@1 0.2043
cosine_recall@3 0.5024
cosine_recall@5 0.6803
cosine_recall@10 0.7548
cosine_ndcg@10 0.4742
cosine_mrr@10 0.3839
cosine_map@100 0.391

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.1947
cosine_accuracy@3 0.4928
cosine_accuracy@5 0.6635
cosine_accuracy@10 0.7548
cosine_precision@1 0.1947
cosine_precision@3 0.1643
cosine_precision@5 0.1327
cosine_precision@10 0.0755
cosine_recall@1 0.1947
cosine_recall@3 0.4928
cosine_recall@5 0.6635
cosine_recall@10 0.7548
cosine_ndcg@10 0.4648
cosine_mrr@10 0.3723
cosine_map@100 0.3783

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.1899
cosine_accuracy@3 0.4543
cosine_accuracy@5 0.6058
cosine_accuracy@10 0.7067
cosine_precision@1 0.1899
cosine_precision@3 0.1514
cosine_precision@5 0.1212
cosine_precision@10 0.0707
cosine_recall@1 0.1899
cosine_recall@3 0.4543
cosine_recall@5 0.6058
cosine_recall@10 0.7067
cosine_ndcg@10 0.4377
cosine_mrr@10 0.3523
cosine_map@100 0.359

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.1851
cosine_accuracy@3 0.4375
cosine_accuracy@5 0.5481
cosine_accuracy@10 0.6442
cosine_precision@1 0.1851
cosine_precision@3 0.1458
cosine_precision@5 0.1096
cosine_precision@10 0.0644
cosine_recall@1 0.1851
cosine_recall@3 0.4375
cosine_recall@5 0.5481
cosine_recall@10 0.6442
cosine_ndcg@10 0.4084
cosine_mrr@10 0.3332
cosine_map@100 0.3393

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.1731
cosine_accuracy@3 0.3389
cosine_accuracy@5 0.4255
cosine_accuracy@10 0.5144
cosine_precision@1 0.1731
cosine_precision@3 0.113
cosine_precision@5 0.0851
cosine_precision@10 0.0514
cosine_recall@1 0.1731
cosine_recall@3 0.3389
cosine_recall@5 0.4255
cosine_recall@10 0.5144
cosine_ndcg@10 0.3337
cosine_mrr@10 0.2769
cosine_map@100 0.2853
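
These numbers come from standard retrieval evaluation at each truncation dimension. A sketch of how comparable metrics can be computed with the library's InformationRetrievalEvaluator (repository ID and evaluation data are placeholders):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Placeholder ID; truncate_dim selects the Matryoshka dimension being evaluated.
model = SentenceTransformer("sentence_transformers_model_id", truncate_dim=256)

# Placeholder data: query ID -> question, doc ID -> passage, query ID -> set of relevant doc IDs.
queries = {"q1": "kto nosi mantyle?"}
corpus = {"d1": "placeholder passage 1", "d2": "placeholder passage 2"}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_256")
results = evaluator(model)
print(results)  # accuracy@k, precision@k, recall@k, NDCG@10, MRR@10, MAP@100, ...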

Training Details

Training Dataset

Unnamed Dataset

  • Size: 3,738 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string; min: 6 tokens, mean: 89.95 tokens, max: 512 tokens
    • anchor: string; min: 9 tokens, mean: 30.73 tokens, max: 76 tokens
  • Samples:
    positive: Rynek Kolumna Matki Boskiej, tzw. Kolumna Maryjna wykonana w latach 1725-1727 przez Johanna Melchiora Österreicha.
    anchor: kto jest autorem kolumny maryjnej na raciborskim rynku?

    positive: Chleb razowy jest ciemniejszy i zawiera większą ilość błonnika oraz składników mineralnych niż chleb biały (pytlowy, czyli wypiekany z mąki przesiewanej przez pytel), bowiem jest w nim większy udział drobin pochodzących z łupin ziarna, gdzie gromadzą się te składniki.
    anchor: które składniki razowego chleba odpowiadają za jego walory zdrowotne?

    positive: Najgłębsza znana studnia krasowa to jaskinia Vrtoglavica w Słowenii o głębokości ponad 600 metrów.
    anchor: ile metrów głębokości mierzy studnia na podwórzu klasztoru w Czernej?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
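
Because MatryoshkaLoss trains the leading dimensions to remain useful on their own, embeddings can be truncated to any of the listed sizes at inference time, trading accuracy for memory as shown in the Evaluation section. A minimal sketch (placeholder repository ID):

from sentence_transformers import SentenceTransformer

# truncate_dim makes encode() return embeddings cut to the first 256 dimensions.
model = SentenceTransformer("sentence_transformers_model_id", truncate_dim=256)
embeddings = model.encode(["ile katod ma duodioda?"])
print(embeddings.shape)  # (1, 256)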
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
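
For reference, a comparable run could be configured with the Sentence Transformers 3.0 trainer roughly as sketched below; the dataset, eval split, and output directory are placeholders or assumptions, and bf16/tf32 assume an Ampere-or-newer GPU:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Placeholder data; the real run used 3,738 (positive, anchor) pairs.
train_dataset = Dataset.from_dict({
    "positive": ["placeholder passage"],
    "anchor": ["placeholder question"],
})
eval_dataset = train_dataset  # placeholder eval split

loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-en-v1.5-klej-dyk",  # assumed output directory
    num_train_epochs=4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed, so load_best_model_at_end can pick a checkpoint
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()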

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_128_cosine_map@100 dim_256_cosine_map@100 dim_512_cosine_map@100 dim_64_cosine_map@100 dim_768_cosine_map@100
0.6838 10 6.5594 - - - - -
0.9573 14 - 0.3319 0.3751 0.3955 0.2618 0.4033
1.3675 20 4.2206 - - - - -
1.9829 29 - 0.3324 0.3591 0.3807 0.2833 0.3946
2.0513 30 3.3414 - - - - -
2.7350 40 2.9757 - - - - -
2.9402 43 - 0.3375 0.3570 0.3805 0.2840 0.3905
3.4188 50 2.8884 - - - - -
3.8291 56 - 0.3393 0.359 0.3783 0.2853 0.391
  • The final row (epoch 3.8291, step 56) is the saved checkpoint; its values match the metrics reported in the Evaluation section.

Framework Versions

  • Python: 3.12.2
  • Sentence Transformers: 3.0.0
  • Transformers: 4.41.2
  • PyTorch: 2.3.1
  • Accelerate: 0.27.2
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
