Color changing issue around mask edges

#5
by imperator - opened

Hello,
this looks like a bug to me...

[Attached: original Hergé-style illustration of an astronaut (top body) and the inpainted result]

Hi, long story short: I know. I'm not happy with how this inpainting works either, because I knew it isn't perfect; in extreme cases like yours (with pure painted colors), or where the background has too many intricate details, it will fail. In this case it can probably be fixed by switching to a model that's good at illustrations/drawings (the space loads a realistic one), so if you know a good one that has a lightning version, please let me know. It also probably wouldn't be noticeable if we didn't paste the original image back over the generated one, which isn't really needed for anime/drawings.
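
(For context, the paste-back step I keep mentioning is just compositing the untouched original pixels over the generation outside the mask. A minimal sketch with PIL, using placeholder file names, would look like this:)

```python
from PIL import Image

# Placeholder file names, just to illustrate the paste-back step.
original = Image.open("original.png").convert("RGB")
generated = Image.open("generated.png").convert("RGB")
mask = Image.open("mask.png").convert("L")  # white = inpainted region

# Keep the generation inside the mask and the untouched original outside it.
# Any color mismatch between the two then shows up right at the mask edge.
result = Image.composite(generated, original, mask)
result.save("pasted_back.png")
```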

As for why I did this: a couple of days ago I published a guide and a proof of concept on "image fill", but people liked the demo a lot and asked me to allow changing the prompt, so I published another space that takes a prompt as input, which led us to this space.

Soon I'll publish more guides covering real inpainting/outpainting, with PoCs, but as always, I write guides to teach people how to use diffusers; I don't publish these spaces as full production apps or claim they're bug-free.

Thanks for the feedback! This was my first attempt with that demo, and it's a common challenge in inpainting. There are typically two main issues: 1) color shifts around the mask, and 2) color changes across the entire image. Getting both aspects right is difficult, but it's something that any good inpainting implementation should handle.
As far as I know, there are only a few solid implementations for Stable Diffusion 1.5 that handle this well. For example, the older diffusers-based implementation (https://github.com/stablecabal/gyre/tree/main) works excellently for both inpainting and outpainting with SD 1.5. However, I haven't seen any particularly strong solutions for SDXL yet, which is why I'm curious to explore more. For instance, almost all inpainting nodes for ComfyUI still face these color issues, making them unusable. So I hope the upcoming Flux inpainting model will finally solve this.

No problem, but it's not that hard to fix. If you look closely, it's not that the inpainting is wrong, just that it doesn't match the colors perfectly. As I said, this won't happen if you don't paste back the original image, which is what normal inpainting does, but since the original then goes through the VAE it loses quality. I'll add the option to this space for you to test. There's another option, which is to match the histograms; I did that in a diffusers post too.
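
(To sketch the histogram-matching idea, not necessarily how this space implements it, scikit-image has a ready-made function:)

```python
import numpy as np
from PIL import Image
from skimage.exposure import match_histograms

# Placeholder file names for the generated image and the source image.
generated = np.array(Image.open("generated.png").convert("RGB"))
original = np.array(Image.open("original.png").convert("RGB"))

# Shift the generated image's per-channel color distribution toward the
# original's, so the pasted-back original blends in better at the mask edge.
matched = match_histograms(generated, original, channel_axis=-1)
Image.fromarray(matched.astype(np.uint8)).save("matched.png")
```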

I'll do a guide for real, professional-quality inpainting, but it won't be as fast as this space; good inpainting needs a lot more steps when preparing the image and at inference.
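
(One example of the kind of image preparation I mean, assuming a typical pipeline: dilating and feathering the mask so the composite blends over a soft band instead of switching abruptly at the boundary.)

```python
from PIL import Image, ImageFilter

mask = Image.open("mask.png").convert("L")  # placeholder file name

# Grow the mask slightly, then blur it, so the paste-back transitions
# gradually between generated and original pixels.
feathered = mask.filter(ImageFilter.MaxFilter(9)).filter(
    ImageFilter.GaussianBlur(radius=8)
)
feathered.save("mask_feathered.png")
```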

I'm kind of curious about the "upcoming Flux inpainting model". Where did you hear this? Is someone training one?

Sure... I would love to test such an option. Thank you.
Yes, they are working on it, but there's no ETA.

I was going to work on this, but to have a before and after I first tried the same thing with your image, and I got it right on the first try without doing anything special.

[Attached: result image]

After some tests: if I add "illustration" to the prompt, I get the same effect. Nevertheless, I added the paste back checkbox. If you uncheck it, you won't get the color difference between the generation and the rest of the image, but the drawback is that with your image everything gets brighter, and in photorealistic images you lose details.

[Attached: comparison images, paste back vs. no paste back]

Now that I see the results, maybe this can also be fixed by loading the noise offset LoRA; if I have time, I'll test it. Also, I couldn't find a good illustration lightning model on the Hub. I think that could also fix it, because the model works as long as you don't try to generate "illustrations" with it.
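
(If anyone wants to try the noise offset idea themselves, loading a LoRA on top of a diffusers pipeline looks roughly like this; the LoRA repo id below is a placeholder, not a recommendation, and the base checkpoint is just the stock SDXL inpainting one:)

```python
import torch
from diffusers import AutoPipelineForInpainting

pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical noise-offset LoRA repo; swap in whichever one you test.
pipe.load_lora_weights("some-user/noise-offset-lora")
```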
