julien-c (HF staff) committed
Commit b3e47e2
1 Parent(s): 5b6ce85

metadata: tag repo as robotics?

Files changed (1):
  1. README.md +2 -1
README.md CHANGED
@@ -2,6 +2,7 @@
 license: apache-2.0
 datasets:
 - lerobot/aloha_sim_transfer_cube_human
+pipeline_tag: robotics
 ---
 # Model Card for ACT/AlohaTransferCube

@@ -36,4 +37,4 @@ Success rate for 500 episodes (%) | 87.6 | 68.0
 Beta distribution lower/mean/upper (%) | 86.0 / 87.5 / 88.9 | 65.8 / 67.9 / 70.0


-The original code was heavily refactored, and some bugs were spotted along the way. The differences in code may account for the difference in success rate. Another possibility is that our simulation environment may use slightly different heuristics to evaluate success (we've observed that success is registered as soon as the second arm's gripper makes antipodal contact with the cube). Finally, one should observe that the in-training evaluation jumps up towards the end of training. This may need further investigation (Is it statistically significant? If so, what is the cause?).
+The original code was heavily refactored, and some bugs were spotted along the way. The differences in code may account for the difference in success rate. Another possibility is that our simulation environment may use slightly different heuristics to evaluate success (we've observed that success is registered as soon as the second arm's gripper makes antipodal contact with the cube). Finally, one should observe that the in-training evaluation jumps up towards the end of training. This may need further investigation (Is it statistically significant? If so, what is the cause?).
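The change itself is a one-line addition to the README's YAML front matter. As a minimal illustration (not part of the commit, and independent of any Hugging Face library), the same edit can be sketched in plain Python by inserting a key just before the closing `---` of the front matter — `add_front_matter_line` is a hypothetical helper name:

```python
def add_front_matter_line(readme: str, new_line: str) -> str:
    """Insert `new_line` just before the closing '---' of the YAML front matter.

    Assumes the README starts with a '---' line and the front matter is closed
    by the next '---' line, as in this model card.
    """
    lines = readme.splitlines()
    assert lines[0] == "---", "expected YAML front matter at the top"
    close = lines.index("---", 1)  # position of the closing delimiter
    return "\n".join(lines[:close] + [new_line] + lines[close:]) + "\n"


# The "before" side of the diff above:
readme = """---
license: apache-2.0
datasets:
- lerobot/aloha_sim_transfer_cube_human
---
# Model Card for ACT/AlohaTransferCube
"""

print(add_front_matter_line(readme, "pipeline_tag: robotics"))
```

Running this reproduces the "after" side of the metadata hunk, with `pipeline_tag: robotics` added as the last key before `---`.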