Commit e5822cc by skaramcheti (1 parent: f7ad868)

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -19,7 +19,7 @@ The model takes language instructions and camera images as input and generates r
 
 All OpenVLA checkpoints, as well as our [training codebase](https://github.com/openvla/openvla) are released under an MIT License.
 
- For full details, please read [our paper](https://openvla.github.io/) and see [our project page](https://openvla.github.io/).
+ For full details, please read [our paper](https://arxiv.org/abs/2406.09246) and see [our project page](https://openvla.github.io/).
 
 ## Model Summary
 
@@ -32,7 +32,7 @@ For full details, please read [our paper](https://openvla.github.io/) and see [o
   + **Language Model**: Llama-2
 - **Pretraining Dataset:** [Open X-Embodiment](https://robotics-transformer-x.github.io/) -- specific component datasets can be found [here](https://github.com/openvla/openvla).
 - **Repository:** [https://github.com/openvla/openvla](https://github.com/openvla/openvla)
- - **Paper:** [OpenVLA: An Open-Source Vision-Language-Action Model](https://openvla.github.io/)
+ - **Paper:** [OpenVLA: An Open-Source Vision-Language-Action Model](https://arxiv.org/abs/2406.09246)
 - **Project Page & Videos:** [https://openvla.github.io/](https://openvla.github.io/)
 
 ## Uses
@@ -93,7 +93,7 @@ For more examples, including scripts for fine-tuning OpenVLA models on your own
 @article{kim24openvla,
 title={OpenVLA: An Open-Source Vision-Language-Action Model},
 author={{Moo Jin} Kim and Karl Pertsch and Siddharth Karamcheti and Ted Xiao and Ashwin Balakrishna and Suraj Nair and Rafael Rafailov and Ethan Foster and Grace Lam and Pannag Sanketi and Quan Vuong and Thomas Kollar and Benjamin Burchfiel and Russ Tedrake and Dorsa Sadigh and Sergey Levine and Percy Liang and Chelsea Finn},
- journal = {arXiv preprint},
+ journal = {arXiv preprint arXiv:2406.09246},
 year={2024}
 }
 ```
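The README context excerpted above describes the model as taking a language instruction and a camera image as input and generating robot actions. A minimal usage sketch of that interface follows; it is not part of this commit, and it assumes the `openvla/openvla-7b` checkpoint ID, the Hugging Face `transformers` AutoClasses with `trust_remote_code=True`, and the repository's remote-code `predict_action` helper with a BridgeData-style un-normalization key — consult the linked repository README for the authoritative example.

```python
# Minimal sketch (assumption-laden, not from this commit): query an OpenVLA
# checkpoint for a single robot action from one camera frame and one instruction.
from PIL import Image
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor

# Assumed checkpoint ID; the commit above does not name the model repo.
MODEL_ID = "openvla/openvla-7b"

processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).to("cuda:0")

image = Image.open("frame.png")  # hypothetical camera observation of the workspace
prompt = "In: What action should the robot take to pick up the cup?\nOut:"

# `predict_action` and the `bridge_orig` un-normalization key come from the
# repository's remote code; swap in the key matching your robot/dataset.
inputs = processor(prompt, image).to("cuda:0", dtype=torch.bfloat16)
action = vla.predict_action(**inputs, unnorm_key="bridge_orig", do_sample=False)
print(action)  # expected: a 7-DoF end-effector delta + gripper command
```

The prompt template and un-normalization key above follow the pattern documented in the linked repository; substitute your own instruction and dataset key as appropriate.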