
Screen Point-and-Read Benchmark

This dataset is used to evaluate our ToL agent on the Screen Point-and-Read (ScreenPR) task, and covers 650 screenshots from different GUI domains. Mobile and web screenshots are sampled from ScreenSpot, and operating-system screenshots are captured from OSWorld. Human annotations are not included in this initial release.

Citation

If you find this dataset useful for your research or applications, please cite it using the following BibTeX, along with the original data sources:

@misc{fan2024readpointedlayoutawaregui,
      title={Read Anywhere Pointed: Layout-aware GUI Screen Reading with Tree-of-Lens Grounding}, 
      author={Yue Fan and Lei Ding and Ching-Chen Kuo and Shan Jiang and Yang Zhao and Xinze Guan and Jie Yang and Yi Zhang and Xin Eric Wang},
      year={2024},
      eprint={2406.19263},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2406.19263}, 
}

@article{cheng2024seeclick,
  title={{SeeClick}: Harnessing {GUI} Grounding for Advanced Visual {GUI} Agents},
  author={Cheng, Kanzhi and Sun, Qiushi and Chu, Yougang and Xu, Fangzhi and Li, Yantao and Zhang, Jianbing and Wu, Zhiyong},
  journal={arXiv preprint arXiv:2401.10935},
  year={2024}
}

@article{xie2024osworld,
  title={{OSWorld}: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments},
  author={Xie, Tianbao and Zhang, Danyang and Chen, Jixuan and Li, Xiaochuan and Zhao, Siheng and Cao, Ruisheng and Hua, Toh Jing and Cheng, Zhoujun and Shin, Dongchan and Lei, Fangyu and others},
  journal={arXiv preprint arXiv:2404.07972},
  year={2024}
}