---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: url
      dtype: string
    - name: title
      dtype: string
    - name: text
      dtype: string
    - name: embedding
      sequence: float32
  splits:
    - name: train
      num_bytes: 73850973
      num_examples: 3001
  download_size: 49787145
  dataset_size: 73850973
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: gfdl
task_categories:
  - text-generation
  - fill-mask
language:
  - en
size_categories:
  - 1K<n<10K
---

This dataset is a 3001-article subset of the wikimedia/wikipedia dataset (the `20231101.en` snapshot), with each article embedded using `mixedbread-ai/mxbai-embed-large-v1`. The code used to create it is shown below, followed by short sketches for loading it back and running a semantic search over the embeddings:

```python
from datasets import load_dataset, Dataset
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")

# load the source dataset in streaming mode (avoids downloading the full dump)
dataset = load_dataset(
    "wikimedia/wikipedia", "20231101.en", split="train", streaming=True
)

# collect the first 3001 samples (indices 0 through 3000)
from tqdm import tqdm

data = Dataset.from_dict({})
for i, entry in enumerate(tqdm(dataset)):
    # each entry has the following columns:
    # ['id', 'url', 'title', 'text']
    data = data.add_item(entry)
    if i == 3000:
        break
# free memory
del dataset

# embed the dataset
def embed(row):
    return {"embedding": model.encode(row["text"])}

data = data.map(embed)

# push to hub
data.push_to_hub("not-lain/wikipedia-small-3000-embedded")
```
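
Once published, the dataset can be loaded back from the Hub like any other `datasets` repository (the repository name matches the `push_to_hub` call above):

```python
from datasets import load_dataset

ds = load_dataset("not-lain/wikipedia-small-3000-embedded", split="train")
print(ds)  # 3001 rows with columns ['id', 'url', 'title', 'text', 'embedding']
```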
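
Since every row stores the `mxbai-embed-large-v1` embedding of its article, the dataset can be used directly for semantic search. Below is a minimal sketch using cosine similarity; the query string is only an illustration:

```python
import numpy as np
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")
ds = load_dataset("not-lain/wikipedia-small-3000-embedded", split="train")

# embed the query with the same model that produced the corpus embeddings
query = model.encode("history of the internet")  # example query

# cosine similarity between the query and every precomputed article embedding
corpus = np.array(ds["embedding"], dtype=np.float32)
scores = corpus @ query / (np.linalg.norm(corpus, axis=1) * np.linalg.norm(query))

# print the 5 most similar article titles
for idx in np.argsort(-scores)[:5]:
    print(ds[int(idx)]["title"], float(scores[idx]))
```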