---
license: apache-2.0
task_categories:
- image-to-image
language:
- en
pretty_name: OABench
size_categories:
- 10K<n<100K
---
# Diffree's dataset: OABench (Object Addition Benchmark)
<p align="center">
<a href="https://opengvlab.github.io/Diffree/"><u>[🌐Project Page]</u></a>
&nbsp;&nbsp;&nbsp;
<a href="https://drive.google.com/file/d/1AdIPA5TK5LB1tnqqZuZ9GsJ6Zzqo2ua6/view"><u>[🎥 Video]</u></a>
&nbsp;&nbsp;&nbsp;
<a href="https://github.com/OpenGVLab/Diffree"><u>[🔍 Code]</u></a>
&nbsp;&nbsp;&nbsp;
<a href="https://arxiv.org/pdf/2407.16982"><u>[📜 Arxiv]</u></a>
&nbsp;&nbsp;&nbsp;
<a href="https://huggingface.co/spaces/LiruiZhao/Diffree"><u>[🤗 Hugging Face Demo]</u></a>
</p>
[Diffree](https://arxiv.org/pdf/2407.16982) is a diffusion model that enables the addition of new objects to images using only text descriptions, seamlessly integrating them with consistent background and spatial context.
In this repo, we provide OABench, the training dataset of Diffree. You can use it to train your own object-addition model, or try Diffree directly via the [🤗 Hugging Face demo](https://huggingface.co/spaces/LiruiZhao/Diffree).
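As a minimal sketch, one way to fetch the dataset files for local training is with the `huggingface_hub` client. The repo id `LiruiZhao/OABench` below is an assumption based on this page, not an official instruction; see the [Diffree code repo](https://github.com/OpenGVLab/Diffree) for the actual training pipeline.

```python
# Minimal sketch (not an official loader): download the OABench files locally
# using the huggingface_hub client. The repo id "LiruiZhao/OABench" is an
# assumption; adjust it if the dataset lives under a different namespace.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="LiruiZhao/OABench",
    repo_type="dataset",
)
print(f"OABench files downloaded to: {local_dir}")
```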
## Citation
If you find this work useful, please consider citing:
```
@article{zhao2024diffree,
title={Diffree: Text-Guided Shape Free Object Inpainting with Diffusion Model},
author={Zhao, Lirui and Yang, Tianshuo and Shao, Wenqi and Zhang, Yuxin and Qiao, Yu and Luo, Ping and Zhang, Kaipeng and Ji, Rongrong},
journal={arXiv preprint arXiv:2407.16982},
year={2024}
}
```