Creating Sketch Art from Images: Stable Diffusion & ControlNet

Hello everybody! Did you know that you can easily convert an image into sketch/line art using Stable Diffusion? In this guide, we will walk you through the process step by step. So let's get started!

1. Introduction

Welcome to our guide on transforming images into stunning sketch art! In this step-by-step tutorial, we will walk you through the process of converting your images into captivating sketch art using stable diffusion techniques. Whether you're an aspiring artist, a digital enthusiast, or simply curious about the art of line drawing, this guide will equip you with the knowledge and techniques to create visually striking line art masterpieces. So grab your favorite images and let's dive into the exciting world of transforming images into line art!

2. Requirements & Downloads (ControlNet & Model)

To transform your images into sketches/line art, make sure you have the following installed before you proceed to generate amazing art:

  • ControlNet Extension
  • ControlNet Model: control_canny_fp16

Once you have installed ControlNet and the right model, we can start the process of transforming your images into amazing AI art!

For those who haven't installed ControlNet yet, a detailed guide can be found below.

How to Install ControlNet Extension in Stable Diffusion (A1111)


3. Step 1: Image Preparation

Before we begin, we need to prepare our base image. Generate a random image or use one of your choice as the starting point for the sketch art conversion process. We created the following image using txt2img: transforming-images-into-line-art-stable-diffusion-controlnet.png

  • Positive Prompt: photo shot on Nikon D850, 16k, (upper body portrait:1.4), portrait upper body, sharp focus, masterpiece, Hyper Detailed, breathtaking, atmospheric perspective, diffusion, pore correlation, skin imperfections, DSLR, 80mm Sigma f2, depth of field, film grain, intricate natural lighting, gorgeous 1 girl, adult (elven:0.7) woman, freckles, red eyes, light blonde layered hair, character focus, portrait, solo, upper body, looking at viewer, detailed background, detailed face, (post-apocalyptic dark dystopian theme:1.1), smile, smooth background
  • Negative Prompt: (unrealistic, render, 3d,cgi,cg,2.5d), (bad-hands-5:1.05), easynegative, [( NG_DeepNegative_V1_64T :0.9) :0.1], ng_deepnegative_v1_75t, worst quality, low quality, normal quality, child, (painting, drawing, sketch, cartoon, anime, render, 3d), blurry, deformed, disfigured, morbid, mutated, bad anatomy, bad art, (bad teeth, weird teeth, broken teeth), (weird nipples, twisted nipples, deformed nipples, flat nipples), (worst quality, low quality, logo, text, watermark, username), incomplete,
  • Model/Checkpoint: reV Animated
  • Sampling Method: DPM++ 2M SDE Karras
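If you prefer to script this step instead of using the web UI, the same generation can be requested through A1111's REST API. The sketch below only builds the request body; it assumes the web UI was launched with the --api flag, and the step count and image size are illustrative assumptions (the guide does not specify them). The prompts are truncated here for readability.

```python
import json

# Sketch of the txt2img generation as an A1111 API request body.
# Field names follow the /sdapi/v1/txt2img schema, which can differ
# slightly between webui versions.
payload = {
    "prompt": "photo shot on Nikon D850, 16k, (upper body portrait:1.4), ...",
    "negative_prompt": "(unrealistic, render, 3d, cgi, cg, 2.5d), ...",
    "sampler_name": "DPM++ 2M SDE Karras",
    "steps": 25,      # assumption -- the guide does not list a step count
    "width": 512,     # assumption -- use whatever size you render at
    "height": 768,
}
print(json.dumps(payload, indent=2))
# To send it: requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
```

The response contains base64-encoded images, which you can decode and save as your base image for the steps that follow.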

4. Step 2: Img2img Settings & ControlNet Settings

After generating our base image using txt2img, we can now move on to the next phase. During this stage, we will utilize the img2img tab and go through the subsequent procedures as outlined below:

img2img Settings

  • Navigate to the "img2img" tab.
  • Replace the image with a white background image and adjust the dimensions to match your original image.
  • Make sure the Denoising Strength is between 0.9 and 0.95.

transforming-images-into-sketch-art-stable-diffusion-controlnet-nextdiffusion_7.png

Also make sure the width and height of the image in the img2img tab exactly match the width and height used for the base image.
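Generating a matching white background image only takes a few lines with Pillow. This is a minimal sketch: the 512x768 size is an assumption, so swap in the dimensions of your own base image.

```python
from PIL import Image

# Create a plain white canvas matching the base image's dimensions.
# 512x768 is an assumption -- use the size of your own txt2img render.
width, height = 512, 768
canvas = Image.new("RGB", (width, height), color=(255, 255, 255))
canvas.save("white_background.png")
```

Drag the saved white_background.png into the img2img image box in place of the original image.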

ControlNet Settings

  • Scroll down and Open ControlNet.
  • Tick the boxes "Enable" & "Pixel Perfect" (Additionally you can tick the box "Low VRAM").
  • Drag your created base image into the ControlNet image box.
  • Select "Canny" in the control type section.
  • The canny preprocessor and the control_canny_xxxx model should be active.

The result should look something like this: transforming-images-into-line-art-stable-diffusion-controlnet_1_nextdiffusion.png
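For scripting, the same settings map onto an img2img request with the ControlNet extension's "alwayson_scripts" section. This is only an illustration: field names follow the sd-webui-controlnet API schema and can vary between versions, and the base64 strings below are placeholders for your encoded white background and base image.

```python
import json

# Sketch of the img2img + ControlNet request body. The white canvas is
# the init image; the base image feeds the ControlNet Canny unit.
white_bg_b64 = "<base64 of white_background.png>"   # placeholder
base_img_b64 = "<base64 of your base image>"        # placeholder

payload = {
    "init_images": [white_bg_b64],
    "denoising_strength": 0.95,
    "width": 512,                    # assumption -- match your base image
    "height": 768,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "enabled": True,
                "image": base_img_b64,
                "module": "canny",
                "model": "control_canny_fp16",
                "pixel_perfect": True,
            }]
        }
    },
}
print(json.dumps(payload)[:40])
```

Sending this to /sdapi/v1/img2img (with the prompts from the next step added) mirrors what the UI does when you click "Generate".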


5. Step 3: Prompt Configuration

With all the necessary preparations in place, including the configuration of ControlNet settings, we can now focus on refining the prompts. The negative prompt can remain unchanged from when you initially generated the image using txt2img. However, for the positive prompt, it should accurately reflect our intended result, which is "line art/sketch art". Therefore, let's modify the prompts accordingly:

  • Positive Prompt: sketch art, line art drawing, line art, black line art, black line, black color, black lines, a line drawing, sketch drawing
  • Negative Prompt: (unrealistic, render, 3d,cgi,cg,2.5d), (bad-hands-5:1.05), easynegative, [( NG_DeepNegative_V1_64T :0.9) :0.1], ng_deepnegative_v1_75t, worst quality, low quality, normal quality, child, (painting, drawing, sketch, cartoon, anime, render, 3d), blurry, deformed, disfigured, morbid, mutated, bad anatomy, bad art, (bad teeth, weird teeth, broken teeth), (weird nipples, twisted nipples, deformed nipples, flat nipples), (worst quality, low quality, logo, text, watermark, username), incomplete,

Once the prompts have been modified, we can proceed to the final phase: generating the line art.

6. Step 4: Generating the Line/Sketch Art

Now it's time to generate the final line art. Click on the "Generate" button and let the magic happen. The result will be a stunning line art representation of your original image.

See the result below: transforming-images-into-sketch-art-stable-diffusion-controlnet-nextdiffusion_5.webp


7. Tips for Better Line Art Outputs

If you feel that the line art is not up to your desired level of perfection, you have the option to experiment with the "Canny Low Threshold" and "Canny High Threshold" values. By reducing these values, the Canny model will examine the input image more comprehensively, resulting in more visible lines. To assess the generated lines, simply click on the explosion icon located next to the preprocessor. Additionally, ensure that the "Allow preview" box is selected.

transforming-images-into-line-art-stable-diffusion-controlnet_nextdifussion_2.png

Upon clicking the explosion icon, the preview section will display the lines that have been generated by analyzing the initial image using the canny model. transforming-images-into-line-art-stable-diffusion-controlnet_3.png

  • Canny Low Threshold: 100
  • Canny High Threshold: 200

transforming-images-into-line-art-stable-diffusion-controlnet_nextdiffusion_4.png

  • Canny Low Threshold: 50
  • Canny High Threshold: 100

As you can observe, the generated lines vary when adjusting the threshold settings. Feel free to experiment with these settings until you achieve the desired preview, and only then proceed to click the "Generate" button.
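To see why lower thresholds produce more lines, here is a toy sketch of the double-threshold (hysteresis) step inside a Canny detector, not the actual ControlNet preprocessor: gradient magnitudes at or above the high threshold become "strong" edges, and pixels between the low and high thresholds survive only when they touch a strong pixel. A real implementation propagates this to convergence; this sketch does a single pass on a made-up 3x3 gradient map.

```python
import numpy as np

# Toy double-threshold step: strong pixels pass outright; weak pixels
# (between low and high) are kept only next to a strong pixel.
def double_threshold(grad, low, high):
    strong = grad >= high
    weak = (grad >= low) & ~strong
    keep = strong.copy()
    # single propagation pass: promote weak pixels touching a strong one
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            keep |= weak & np.roll(np.roll(strong, dy, axis=0), dx, axis=1)
    return keep

grad = np.array([[30, 120, 210],
                 [10,  60,  80],
                 [ 5,  40, 250]])
print(double_threshold(grad, low=100, high=200).sum())  # fewer edge pixels
print(double_threshold(grad, low=50,  high=200).sum())  # more edge pixels
```

Dropping the low threshold from 100 to 50 lets more weak pixels through, which is exactly the "more visible lines" effect you see in the ControlNet preview.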

8. Conclusion

In conclusion, the transformation of images into captivating line art is a fascinating process that allows us to explore new artistic possibilities and unleash our creativity. Throughout this guide, we have learned how to utilize stable diffusion techniques and the power of ControlNet to achieve remarkable results. By following the step-by-step instructions, experimenting with different settings, and exploring various styles, you have acquired the skills to transform ordinary images into extraordinary line art creations.

Frequently Asked Questions

What do you need installed to transform images into sketch art?

To transform images into sketch art, you need to have the following installed:

  • ControlNet Extension
  • ControlNet Model: control_canny_fp16

What are the required ControlNet settings?

The ControlNet settings include:

  • Enabling "Enable" and "Pixel Perfect" checkboxes.
  • Optionally, you can enable the "Low VRAM" checkbox.
  • Dragging the created base image into the ControlNet image box.
  • Selecting "Canny" in the control type section.
  • Activating the canny preprocessor and the control_canny_xxxx model.