In this tutorial, we delve into the exciting realm of stable diffusion and its remarkable image-to-image (img2img) function. Discover the art of transforming ordinary images into extraordinary masterpieces using Stable Diffusion techniques. Join us as we unravel the secrets behind this powerful tool, guiding you through the process of creating stunning AI art. Get ready to elevate your creative potential and witness the magic of stable diffusion in action.
To transform your images into AI masterpieces, you need to have a few things ready before you can start. Here's a list of what's needed:
Once you have installed Stable Diffusion, we can start the process of transforming your images into amazing AI art!
When you have successfully launched Stable Diffusion, head to the img2img tab. Drag or upload your starting image into the image box. I'm using an image of a bird I took with my phone yesterday. Once you've uploaded your image to the img2img tab, we need to select a checkpoint and make a few changes to the settings.
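Everything in this guide can be done through the web interface, but the webui also exposes an HTTP API (if you launch it with the `--api` flag) that expects the starting image as a base64 string. As a small illustrative sketch, here is how you could prepare an image file for that use:

```python
import base64
from pathlib import Path

def encode_init_image(path):
    """Read an image file from disk and return its contents as a
    base64 string, the format the webui API expects for init images."""
    return base64.b64encode(Path(path).read_bytes()).decode("utf-8")
```

This is optional; if you are working purely in the browser, simply dragging the image into the img2img tab is all you need.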
First, select your Stable Diffusion checkpoint, also known as a model. Here I will be using the revAnimated model, which is good for creating fantasy, anime, and semi-realistic images. You can download this checkpoint here.
Now write your prompt. Describe your image as closely as possible. Alternatively, you can press the "Interrogate CLIP" or "Interrogate DeepBooru" buttons on the right, although these do not always give an accurate description of your image and may take a while to load. I used a simple description for this image and added some keywords to improve the picture quality. You may notice that at the end of my prompt I wrote "lora:LowRA:0.7". This is a LoRA; you can learn more about how to use it in our LoRA Guide.
For the negative prompt I used a default prompt; you can copy the one I used or leave this empty: (worst quality:1.2), (low quality:1.2), (lowres:1.1), (monochrome:1.1), (greyscale), multiple views, comic, sketch, (((bad anatomy))), (((deformed))), (((disfigured))), watermark, (blurry)
Once you have written up your prompts it is time to play with the settings. Here is what you need to know:
Sampling Method: The method Stable Diffusion uses to generate your image; this has a high impact on the outcome. I used DPM++ 2M SDE Karras. With the Karras samplers, the step sizes Stable Diffusion uses to generate an image get smaller near the end, which improves image quality. You can use any sampler, but for the revAnimated model I recommend DPM++ 2M SDE Karras or DPM++ 2M Karras.
Sampling Steps: The number of sampling steps in Stable Diffusion determines the iterations required to transform random noise into a recognizable image based on the text prompt. Generally, higher sampling steps enhance image detail at the expense of longer processing time. Remember, it's a trade-off between detail and processing time.
Resize to: Set the width and height of the generated image here. Most models are trained on a 512x512 or 512x768 resolution, and using those resolutions results in a better image. However, you are free to change this; I recommend a maximum resolution of 768x768 for a square image. You can always upscale your image afterwards (follow the Ultimate SD Upscale Guide for the best way to upscale your images).
CFG Scale: How strongly the image should conform to the prompt; lower values produce more creative results. I recommend a balanced setting of around 7.5.
Denoising strength: Determines how little respect the Stable Diffusion algorithm should have for the image's content. At 0, nothing will change; at 1, you'll get an unrelated image. With values below 1.0, processing takes fewer steps than the Sampling Steps slider specifies. I used a value of 0.65 for this example.
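The interaction between sampling steps and denoising strength can be sketched numerically: roughly speaking, the webui skips the early part of the noise schedule and runs approximately steps × strength iterations. This is a simplified approximation of the actual behavior, not the exact internal formula:

```python
def effective_steps(sampling_steps, denoising_strength):
    """Approximate number of denoising iterations actually run in
    img2img: the early part of the schedule is skipped, so roughly
    sampling_steps * denoising_strength steps are executed."""
    return max(1, int(sampling_steps * min(denoising_strength, 0.999)))

# For example, with 30 sampling steps and a denoising strength of
# 0.65, only about 19 steps are actually executed.
```

This is why lowering the denoising strength makes img2img generations noticeably faster as well as closer to the original image.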
Seed: A value that determines the output of a random number generator - if you create an image with the same parameters and seed as another image, you'll get the same result.
Batch count: How many images to create. Batches are generated one after another, so this has no impact on generation performance or VRAM usage, though total generation time grows with the count. I used a batch count of 4 to generate 4 different images.
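If you prefer to drive these settings programmatically, the same parameters map onto the JSON payload of the webui's /sdapi/v1/img2img endpoint (available when the webui is launched with the --api flag). The sketch below assembles a payload mirroring the settings used in this guide; the 30-step value is an assumed example, and exact sampler naming can vary between webui versions:

```python
def build_img2img_payload(init_image_b64, prompt, negative_prompt=""):
    """Assemble a JSON payload for the webui's img2img API endpoint,
    mirroring the settings described in this guide."""
    return {
        "init_images": [init_image_b64],      # base64-encoded starting image
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "sampler_name": "DPM++ 2M SDE Karras",
        "steps": 30,                          # sampling steps (assumed example)
        "width": 512,
        "height": 768,
        "cfg_scale": 7.5,
        "denoising_strength": 0.65,
        "seed": -1,                           # -1 = random seed
        "n_iter": 4,                          # batch count of 4
    }
```

You would then POST this payload to http://127.0.0.1:7860/sdapi/v1/img2img (the default local address) with any HTTP client. Again, the browser UI is all you need for this tutorial; the API is just another way to reach the same settings.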
Now you are all set to create amazing AI art. Press "Generate" to start the process!
Here is my before and after with the settings I used in Stable Diffusion with the img2img function.
By default, outputs are saved in the outputs folder, organized by generation type and the date the image was created. In this example, the output folder can be found at: Stable-diffusion-webui\outputs\img2img-images\2023-07-03
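That default layout (outputs, then generation type, then date) means today's folder is easy to locate from a script. A minimal sketch, assuming the default output settings have not been changed:

```python
from datetime import date
from pathlib import Path

def img2img_output_dir(webui_root):
    """Return today's img2img output folder, following the webui's
    default outputs/<generation type>/<YYYY-MM-DD> layout."""
    return Path(webui_root) / "outputs" / "img2img-images" / date.today().isoformat()
```
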
In summary, Stable Diffusion opens up a world of creativity, allowing us to transform images into astonishing art. By following our step-by-step guide and exploring various prompts and settings, we can generate visually striking artwork. With the right parameters and adjustments, your images can become captivating visual masterpieces. Embrace the potential of Stable Diffusion and img2img to create unforgettable experiences in visual communication. Unleash your imagination and let your creativity shine.