---
annotations_creators:
- found
language_creators:
- found
languages:
- en
licenses:
- mit
multilinguality:
- monolingual
pretty_name: CLIP-Kinetics700
size_categories:
- 100K<n<1M
task_categories:
- feature-extraction
- zero-shot-classification
---

# Dataset Card for CLIP-Kinetics700

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Dataset Structure](#dataset-structure)
- [Data Format](#data-format)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Simple Experiments](#simple-experiments)
- [Zero-shot Evaluation](#zero-shot-evaluation)
- [Linear-probe Evaluation](#linear-probe-evaluation)

## Dataset Description
### Dataset Summary

CLIP-Kinetics700 is a compressed version of the Kinetics700 dataset using OpenAI's CLIP model.

The original dataset is ~700 GB, making it difficult to use and hold in memory on one machine. By downsampling each video to 1 FPS and encoding the frames using CLIP, we were able to compress the dataset to ~8 GB, making it very memory-friendly and easy to use.

### Dataset Preprocessing

[clip-video-encode](https://github.com/iejMac/clip-video-encode) is a tool you can use to easily and efficiently compute CLIP embeddings from video frames. We used it to generate the embeddings for this dataset.
+
48
+ ## Dataset Structure
49
+
50
+ ### Data Format
51
+
52
+ We formatted this as a [WebDataset](https://github.com/webdataset/webdataset) for better data-loading performance when training the models.
53
+ Each split contains a list of tar files each with 10000 data samples. This format can be read and used easily using the EmbeddingWebDatasetReader from [clip-video-encode](https://github.com/iejMac/clip-video-encode).
54
+
55
+ ```
56
+ CLIP-Kinetics700
57
+ β”œβ”€β”€ splits.csv
58
+ β”œβ”€β”€ ds_00000.tar
59
+ | β”œβ”€β”€ vid_00000.npy
60
+ | β”œβ”€β”€ vid_00000.txt
61
+ | β”œβ”€β”€ vid_00000.json
62
+ | β”œβ”€β”€ vid_00001.npy
63
+ | β”œβ”€β”€ vid_00001.txt
64
+ | β”œβ”€β”€ vid_00001.json
65
+ | └── ...
66
+ | β”œβ”€β”€ vid_10000.npy
67
+ | β”œβ”€β”€ vid_10000.txt
68
+ | β”œβ”€β”€ vid_10000.json
69
+ β”œβ”€β”€ ds_00001.tar
70
+ | β”œβ”€β”€ vid_10001.npy
71
+ | β”œβ”€β”€ vid_10001.txt
72
+ | β”œβ”€β”€ vid_10001.json
73
+ β”‚ ...
74
+ ...
75
+ ```
76
+

### Data Fields
* vid.npy: a NumPy array with the per-frame CLIP embeddings. Shape -> (n_frames, 512)
* vid.txt: the "caption" of the video. In this case it is the Kinetics700 label.
* vid.json: additional metadata - the YouTube video ID, start time, and end time.

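Since each vid.npy holds per-frame embeddings of shape (n_frames, 512), a common way to get a single video-level embedding is to mean-pool over the frame axis and re-normalize. A minimal sketch, with random data standing in for a real `np.load("vid_00000.npy")`:

```python
import numpy as np

# Stand-in for np.load("vid_00000.npy"): 12 frames of 512-d CLIP embeddings.
frame_embeddings = np.random.rand(12, 512).astype(np.float32)

# Mean-pool across frames, then L2-normalize so the result is
# comparable to CLIP text embeddings via cosine similarity.
video_embedding = frame_embeddings.mean(axis=0)
video_embedding /= np.linalg.norm(video_embedding)

print(video_embedding.shape)  # (512,)
```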
### Data Splits
* Train - 536489 samples | 54 tar files
* Validation - 33966 samples | 4 tar files
* Test - 64532 samples | 7 tar files

## Dataset Creation

### Source Data

Data was sourced from DeepMind's [Kinetics700](https://www.deepmind.com/open-source/kinetics) dataset and downloaded using [this convenient repository](https://github.com/cvdfoundation/kinetics-dataset).

## Simple Experiments

Using [this repository](https://github.com/LAION-AI/temporal-embedding-aggregation) we evaluate CLIP-Kinetics700 with the following simple methods:

### [Zero-shot Evaluation](https://github.com/LAION-AI/temporal-embedding-aggregation/blob/master/src/evaluation/zero_shot.py)

| | Accuracy |
| ---------------- | -------- |
| Top-1 | 0.22 |
| Top-5 | 0.43 |
| mean(Top1, Top5) | 0.33 |

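With precomputed embeddings, zero-shot classification reduces to cosine similarity between a pooled video embedding and one CLIP text embedding per Kinetics label, predicting the best-scoring label. A minimal NumPy sketch, with random unit vectors standing in for real CLIP embeddings (not the linked evaluation script):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for CLIP outputs: 700 L2-normalized label (text) embeddings
# and one pooled, normalized video embedding built near label 42.
label_embeddings = rng.normal(size=(700, 512))
label_embeddings /= np.linalg.norm(label_embeddings, axis=1, keepdims=True)

video_embedding = label_embeddings[42] + 0.01 * rng.normal(size=512)
video_embedding /= np.linalg.norm(video_embedding)

# Cosine similarity against every label; top-5 classes by score.
scores = label_embeddings @ video_embedding
top5 = np.argsort(scores)[::-1][:5]
print(top5[0])  # 42: the label the video embedding was built from
```

Top-1 and top-5 accuracy then just count how often the true label lands at rank 1 or within the first 5 ranks.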
### [Linear-probe Evaluation](https://github.com/LAION-AI/temporal-embedding-aggregation/blob/master/src/evaluation/linear_probe.py)

| | Accuracy |
| ---------------- | -------- |
| Top-1 | 0.41 |
| Top-5 | 0.65 |
| mean(Top1, Top5) | 0.53 |
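
A linear probe trains a single logistic-regression classifier on the frozen embeddings. A minimal sketch with scikit-learn, using synthetic separable features in place of the real pooled video embeddings (this is not the linked evaluation script):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 videos, 512-d pooled embeddings, 4 classes,
# each class shifted by its own random center so the data is separable.
n, dim, n_classes = 200, 512, 4
labels = rng.integers(0, n_classes, size=n)
class_centers = rng.normal(size=(n_classes, dim))
features = class_centers[labels] + 0.5 * rng.normal(size=(n, dim))

# The probe itself: one logistic regression fit on frozen features.
probe = LogisticRegression(max_iter=1000)
probe.fit(features[:150], labels[:150])
accuracy = probe.score(features[150:], labels[150:])
print(accuracy > 0.9)  # True on this easy synthetic split
```

The gap between the zero-shot and linear-probe numbers above (0.22 vs 0.41 top-1) shows how much headroom a small supervised head recovers on top of the frozen CLIP features.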