Sentence Transformers (all-MiniLM-L6-v2) embeddings for all long LLaVA summaries in the coyo-hd-11m-llavanext dataset (07-03-2024 version)

Model: Sentence Transformers (all-MiniLM-L6-v2)

Source dataset: coyo-hd-11m-llavanext

Instructions

PLEASE NOTE: You will need a GPU with at least 40 GB of VRAM to load and search the embeddings on device
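A quick pre-flight check along these lines (a sketch, assuming a single CUDA device at index 0) can confirm the requirement before anything large is loaded:

import torch

# Report total memory of the first CUDA device, if any
if torch.cuda.is_available():
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f'GPU 0 total memory: {total_gb:.1f} GB')
    if total_gb < 40:
        print('Warning: under 40 GB of VRAM; see the chunked CPU search sketch below.')
else:
    print('No CUDA device available')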


Dependencies

!pip install huggingface_hub -U
!pip install datasets -U
!pip install sentence-transformers -U

Imports

from huggingface_hub import hf_hub_download

from datasets import load_dataset

from sentence_transformers import SentenceTransformer
from sentence_transformers import util

import torch
import numpy as np

import tqdm

Coyo dataset and embeddings download

coyo_dataset = load_dataset("CaptionEmporium/coyo-hd-11m-llavanext")

# hf_hub_download returns the local path to the downloaded file,
# which the loading code below reuses
embeddings_path = hf_hub_download(repo_id="asigalov61/coyo-hd-11m-llavanext-all-MiniLM-L6-v2", 
                                  repo_type='dataset', 
                                  filename="coyo_hd_11m_llavanext_all_MiniLM_L6_v2_llava_captions_embeddings_07_03_24.npz",
                                  local_dir='.'
                                  )

Loading code

# Load the precomputed caption embeddings (stored under the 'data' key of the .npz)
coyo_embeddings = np.load(embeddings_path)['data']

# Move the embeddings to the GPU and L2-normalize them so that
# dot-product scores are equivalent to cosine similarity
coyo_embeddings = torch.from_numpy(coyo_embeddings).cuda()
coyo_embeddings = util.normalize_embeddings(coyo_embeddings)

model = SentenceTransformer('all-MiniLM-L6-v2', device='cuda')
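If a 40 GB GPU is not available, one possible workaround (an untested sketch, not from the original card) is to keep the corpus embeddings on CPU and let util.semantic_search scan them in chunks via its corpus_chunk_size parameter, trading speed for memory:

# Sketch: CPU-resident corpus, searched in chunks
# (slower, but avoids the 40 GB VRAM requirement)
coyo_embeddings_on_cpu = util.normalize_embeddings(torch.from_numpy(np.load(embeddings_path)['data']))

query_emb = model.encode(['example query'], convert_to_tensor=True).cpu()  # match the corpus device
query_emb = util.normalize_embeddings(query_emb)

results_cpu = util.semantic_search(query_emb,
                                   coyo_embeddings_on_cpu,
                                   corpus_chunk_size=100000,
                                   score_function=util.dot_score)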

Inference code

torch.cuda.empty_cache()

queries_corpus = ['Capital of France',
                  'Love, peace and happiness',
                  'Cute cats in tacky suits :)'
                  ]

# Encode and L2-normalize the queries so dot-product scores equal cosine similarity
queries_embeddings = model.encode(queries_corpus, device='cuda', show_progress_bar=True, convert_to_tensor=True)
queries_embeddings = util.normalize_embeddings(queries_embeddings)

# semantic_search returns, for each query, a ranked list of {'corpus_id', 'score'} dicts
results = util.semantic_search(queries_embeddings, coyo_embeddings, score_function=util.dot_score)

# Index of the top match for the first query
closest_index = results[0][0]['corpus_id']

print('=' * 70)
print('Best match index:', closest_index)
print('=' * 70)
print('Best match corpus entry:', coyo_dataset['train'][closest_index])
print('=' * 70)
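To inspect more than the single best hit, a small loop like the one below (a sketch using the variables above; semantic_search returns up to top_k=10 hits per query by default) prints the top three matches for every query:

# Print the top matches for each query in queries_corpus
for query, hits in zip(queries_corpus, results):
    print('=' * 70)
    print('Query:', query)
    for hit in hits[:3]:
        print('Score:', round(hit['score'], 4), '| Index:', hit['corpus_id'])
        print('Corpus entry:', coyo_dataset['train'][hit['corpus_id']])
print('=' * 70)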

Project Los Angeles

Tegridy Code 2024
