On a more serious note: Where did you get the prompt? Did you come up with it entirely from scratch or did you start with the interrogate clip function from the original image?
This isn't a self upload. You can find the source link below the tags on the left and ask the artist on reddit.
Judging by the desaturated colors, you don't have a VAE loaded. You might want to download a VAE and tell Stable Diffusion to use it; it will make the colors a lot more vibrant.
If you choose to use the VAE from the Anything model, you'll probably have to launch the WebUI with the --no-half-vae parameter, otherwise it will occasionally produce black images. Took me a while to figure that one out.
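For anyone unsure where that parameter goes: with the AUTOMATIC1111 WebUI it's a launch flag, typically added to the launcher script. A rough sketch (file names are the WebUI defaults; adjust for your install):

```shell
# Windows: edit webui-user.bat and add the flag to COMMANDLINE_ARGS
set COMMANDLINE_ARGS=--no-half-vae

# Linux/macOS: pass it directly when launching
./webui.sh --no-half-vae
```

The flag keeps the VAE in full precision, which avoids the occasional black-image output some VAEs produce at half precision.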
When you mention denoising strength, is that referring to the upscale setting or something else?
It's the one in highres fix. I'm over 500 commits behind on the webui so it might work differently now. (New versions changed the api which broke my autoprompt scripts and I'm too lazy to fix it.)
Ah, thanks - I'm not even too sure what that setting does and haven't used it
It works almost exactly the same as in the img2img tab. I lowered it from the default (0.7 I think) to 0.6 to reduce the number of mouths and nipples popping up in random places (might be placebo though).
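For anyone setting this from a script rather than the UI: older versions of the WebUI's txt2img API expose the same highres-fix denoising strength as a request field. A minimal sketch, assuming the pre-breakage API field names mentioned above (they may have changed in newer versions):

```python
# Sketch of a txt2img request payload using highres fix.
# Field names match older AUTOMATIC1111 API versions and may differ now.
payload = {
    "prompt": "1girl, masterpiece",  # illustrative prompt, not the original
    "enable_hr": True,               # turn on highres fix
    "hr_scale": 2,                   # upscale factor
    "denoising_strength": 0.6,       # lowered from the ~0.7 default, as discussed
    "steps": 20,
}

# Sending it would look roughly like this (local default URL):
# import requests
# r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
```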
Good to know, thanks. Seems to just be built into the generation I suppose
bro used 80 different models to generate this masterpiece
Was experimenting with a mass 'anime styled' model mix. My prompt at the time put a lot of weight on specific things, which shifted how the model responded and made me notice how it handles certain subjects (architecture/food etc.). I've done a few tests since with the random merge and this is one of the results
@mellohi I fixed your tags, it seems like you've pasted the prompt into the tag field by accident. Please add self_upload to the tags if you generated this image. Also, you'll get better colors if you add a custom .vae (looks like you're missing one, but I might be wrong).