---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  - name: embedding
    sequence: float32
  splits:
  - name: train
    num_bytes: 73850973
    num_examples: 3001
  download_size: 49787145
  dataset_size: 73850973
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: gfdl
task_categories:
- text-generation
- fill-mask
language:
- en
size_categories:
- 1K<n<10K
---

This is a 3,001-example subset of the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset (`20231101.en` config), with an added `embedding` column containing [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) embeddings of the `text` field.
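
It can be loaded directly from the Hub; a minimal sketch using the standard `datasets` API (the printed values come from the split metadata above):

```python
from datasets import load_dataset

# download the train split of this dataset
ds = load_dataset("not-lain/wikipedia-small-3000-embedded", split="train")
print(ds.column_names)  # ['id', 'url', 'title', 'text', 'embedding']
print(ds.num_rows)      # 3001
```
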
The code used to create this dataset:

```python
from datasets import load_dataset, Dataset
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")

# load the source dataset in streaming mode (avoids downloading the full dump)
dataset = load_dataset(
    "wikimedia/wikipedia", "20231101.en", split="train", streaming=True
)

# take the first 3001 articles (indices 0 to 3000 inclusive)
data = Dataset.from_dict({})
for i, entry in enumerate(dataset):
    # each entry has the following columns
    # ['id', 'url', 'title', 'text']
    data = data.add_item(entry)
    if i == 3000:
        break
# free memory
del dataset

# embed the dataset
def embed(row):
    return {"embedding": model.encode(row["text"])}
data = data.map(embed)

# push to hub
data.push_to_hub("not-lain/wikipedia-small-3000-embedded")
```
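
Because the stored vectors were produced by `mxbai-embed-large-v1`, the dataset can be queried by embedding new text with the same model. A minimal semantic-search sketch (the query string and the top-5 cutoff are only illustrative, and cosine similarity is one choice of scoring function):

```python
import numpy as np
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")
ds = load_dataset("not-lain/wikipedia-small-3000-embedded", split="train")

# stack the precomputed article embeddings into a (num_examples, dim) matrix
corpus = np.asarray(ds["embedding"], dtype=np.float32)

# embed the query with the same model that produced the corpus embeddings
query_vec = model.encode("history of the internet")

# rank articles by cosine similarity to the query and show the top 5 titles
scores = corpus @ query_vec / (np.linalg.norm(corpus, axis=1) * np.linalg.norm(query_vec))
for idx in np.argsort(-scores)[:5]:
    print(f"{scores[idx]:.3f}  {ds[int(idx)]['title']}")
```

For larger corpora, the built-in FAISS support in `datasets` (`Dataset.add_faiss_index`) would be a more scalable alternative to the brute-force matrix product above.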