---
license: cc-by-4.0
---

# Dataset Card for VIMA-Data

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://vimalabs.github.io/
- **Repository:** https://github.com/vimalabs/VimaBench
- **Paper:** https://arxiv.org/abs/2210.03094

### Dataset Summary

This is the official dataset used to train general robot manipulation agents with multimodal prompts, as presented in the [paper](https://arxiv.org/abs/2210.03094). It contains 650K trajectories for 13 tasks in [VIMA-Bench](https://github.com/vimalabs/VimaBench). All demonstrations are generated by scripted oracles.
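
As a minimal sketch of fetching the data locally (the repo id below is an assumption inferred from this card's location on the Hub; replace it if the dataset lives under a different name):

```python
from huggingface_hub import snapshot_download

# Download the full dataset snapshot from the Hugging Face Hub.
# Assumed repo id -- adjust if needed.
local_dir = snapshot_download(
    repo_id="VIMA/VIMA-Data",
    repo_type="dataset",
)
print(local_dir)  # local path of the downloaded snapshot
```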

## Dataset Structure

Data are grouped by task. Within each trajectory's folder there are two subfolders, `rgb_front` and `rgb_top`, and three files: `obs.pkl`, `action.pkl`, and `trajectory.pkl`. RGB frames from each camera view are stored per frame in the corresponding subfolder. `obs.pkl` includes segmentation masks and the state of the end effector, `action.pkl` contains the oracle actions, and `trajectory.pkl` holds meta information such as elapsed steps, task information, and object information. Users can build a custom data pipeline starting from here; more details and examples can be found [here](https://github.com/vimalabs/VimaBench#training-data).
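
As a rough illustration of the layout described above (the trajectory path is hypothetical; substitute a real task/trajectory folder from the downloaded data), a single trajectory can be inspected with plain `pickle`:

```python
import os
import pickle

# Hypothetical path to one trajectory folder, e.g. <data_root>/<task>/<traj_id>.
traj_dir = "path/to/task/000000"

# Load the three per-trajectory pickle files described above.
with open(os.path.join(traj_dir, "obs.pkl"), "rb") as f:
    obs = pickle.load(f)  # segmentation and end-effector state
with open(os.path.join(traj_dir, "action.pkl"), "rb") as f:
    actions = pickle.load(f)  # oracle actions
with open(os.path.join(traj_dir, "trajectory.pkl"), "rb") as f:
    meta = pickle.load(f)  # elapsed steps, task info, object info

# RGB frames for each camera view are stored frame-by-frame in these subfolders.
front_frames = sorted(os.listdir(os.path.join(traj_dir, "rgb_front")))
top_frames = sorted(os.listdir(os.path.join(traj_dir, "rgb_top")))
print(len(front_frames), len(top_frames))
```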

## Dataset Creation

All demonstrations are generated by scripted oracles.

## Additional Information

### Licensing Information

This dataset is released under the [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/legalcode) license.

### Citation Information

If you find our work useful, please consider citing us!

```bibtex
@article{jiang2022vima,
  title   = {VIMA: General Robot Manipulation with Multimodal Prompts},
  author  = {Yunfan Jiang and Agrim Gupta and Zichen Zhang and Guanzhi Wang and Yongqiang Dou and Yanjun Chen and Li Fei-Fei and Anima Anandkumar and Yuke Zhu and Linxi Fan},
  year    = {2022},
  journal = {arXiv preprint arXiv:2210.03094}
}
```