Transform Videos into Any Style with AnimateDiff & IP-Adapters (A1111)

Experience seamless video-to-video style changes with AnimateDiff and LCM LoRAs (A1111). Effortlessly enhance your content with innovative visual transformations powered by Stable Diffusion.

1. Introduction (Video 2 Video)

Step into the dynamic universe of video-to-video transformations with this tutorial! Discover the magic of AnimateDiff, ControlNet, IP-Adapters, and LCM LoRAs as we explore the captivating world of seamless video transitions. Whether you're a video enthusiast or simply curious about the craft, join us on a journey where we decode the secrets behind video innovation. Let's dive into the transformative realm of video creativity together!

2. Requirements for Styling Videos in Stable Diffusion (Video 2 Video)

Prior to embarking on the path of turning videos into stylized masterpieces, it's essential to lay down the necessary groundwork. Here, you'll discover a comprehensive list of requirements to unlock the full potential for creating extraordinary video-to-video transformations.

Requirement 1: AnimateDiff & LCM LoRAs

AnimateDiff stands out as our preferred extension, making the generation of videos or GIFs a breeze. If you haven't yet installed the AnimateDiff extension and the LCM LoRAs that accelerate rendering, refer to the dedicated article below for step-by-step download and installation instructions:

Fast Video Generation with AnimateDiff & LCM LoRA's (A1111)

Requirement 2: ControlNet

To move forward, ensure that ControlNet is installed and updated to its most recent version. For detailed instructions on the installation process, consult our comprehensive ControlNet Installation Guide, especially if you have not installed ControlNet before.

Requirement 3: Lineart ControlNet Model

Additionally, verify that you have downloaded the appropriate ControlNet Model. We require the lineart ControlNet model, available on the official Hugging Face website.

Make sure to download the file named "control_v11p_sd15_lineart_fp16.safetensors" and place it in the following folder: "stable-diffusion-webui/extensions/sd-webui-controlnet/models" within the Stable Diffusion folder.

video-to-video-with-animatediff-and-lcm-loras-a1111-lineart-controlnet-model-file-location-placement.webp

Requirement 4: IP-Adapter ControlNet Model

  • Obtain the necessary IP-Adapter models for ControlNet, conveniently available on the Hugging Face website.
  • For this tutorial, use the IP-Adapter model file named "ip-adapter-plus_sd15.safetensors".
  • Download the IP-Adapter model.
  • Move the downloaded file to the designated directory: "stable-diffusion-webui/extensions/sd-webui-controlnet/models".

Requirement 5: Initial Video

Ensure you have a starting video ready for the transformation process using AnimateDiff and LCM LoRAs, along with the specified ControlNet models. Now, let's proceed to achieve some incredible video-to-video transformations.


3. Txt2Img ControlNet Settings (Stable Diffusion)

Open Stable Diffusion and go to the "txt2img" tab. Scroll down to find the ControlNet dropdown menu. Begin by selecting the first ControlNet unit (Unit 0).

ControlNet Unit 0 [lineart]

video-to-video-with-animatediff-and-lcm-loras-a1111-controlnet-setting-unit0-lineart-realistic.webp

  • Do NOT upload an image into the "Single Image" subtab.
  • Enable the panel for the first ControlNet unit (Unit 0).
  • Select "Pixel Perfect".
  • Control Type: "Lineart"
  • Preprocessor: "lineart_realistic"
  • Model: "control_v11p_sd15_lineart"
  • Control Weight: 0.65

The remaining settings for ControlNet Unit 0 can remain in their default state. Now, let's proceed to configure the second ControlNet Unit (ControlNet Unit 1) settings.

ControlNet Unit 1 [IP-Adapter]

Let's move on to the second ControlNet settings. First, we'll provide a reference image to the canvas in ControlNet Unit 1. This image could represent a color theme, style, or even clothing. In this tutorial, we'll simply modify the video by adding a color theme or relief, enhancing its textures. The settings are outlined below:

transform-videos-into-any-style-with-animatediff-ip-adapters-a1111-ip-adapter-controlnet-settings-unit-1.webp

  • Submit an image to the "Single Image" subtab as a reference for the chosen style or color theme.
  • Enable ControlNet Unit 1.
  • Select "Pixel Perfect".
  • Control Type: "IP-Adapter"
  • Preprocessor: "ip-adapter_clip_sd15"
  • Model: "ip-adapter-plus_sd15"
  • Control Weight: 1 (a higher control weight makes the reference image more visible in the final video)

Note: When adjusting the color scheme, relief, or clothing style of the final video using a reference image containing a face, keep in mind that the IP-Adapter might attempt to incorporate this facial feature into the produced video.
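For readers driving A1111 through its API (the WebUI launched with the --api flag) rather than the UI, the two units above map onto ControlNet's section of the txt2img payload. The field names below are a hedged sketch of the sd-webui-controlnet API; verify them against your installed version, and note that ref_image_b64 stands in for your base64-encoded reference image:

```python
def controlnet_units(ref_image_b64: str) -> dict:
    """Build the ControlNet part of an A1111 "alwayson_scripts" payload."""
    return {
        "controlnet": {
            "args": [
                {
                    # Unit 0: lineart, no input image (frames come from the video)
                    "enabled": True,
                    "pixel_perfect": True,
                    "module": "lineart_realistic",
                    "model": "control_v11p_sd15_lineart",
                    "weight": 0.65,
                },
                {
                    # Unit 1: IP-Adapter carrying the style/color reference
                    "enabled": True,
                    "pixel_perfect": True,
                    "module": "ip-adapter_clip_sd15",
                    "model": "ip-adapter-plus_sd15",
                    "weight": 1.0,
                    "image": ref_image_b64,
                },
            ]
        }
    }
```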

4. AnimateDiff Settings (Video Creation)

Let's proceed by accessing the dropdown settings menu within the AnimateDiff extension and making necessary adjustments. The specified settings are outlined below:

transform-videos-into-any-style-with-animatediff-ip-adapters-a1111-animatediff-settings-video-source-transformation-style.webp

  • Select the motion module named "mm_sd_v15_v2.ckpt."
  • Set the save format to "MP4" (You can choose to save the final result in a different format, such as GIF or WEBM)
  • Enable the AnimateDiff extension.
  • Within the "Video source" subtab, upload the initial video you want to transform.
  • Keep the remaining settings at their default state.
  • For a more thorough understanding of the AnimateDiff extension, it is advisable to explore the official AnimateDiff GitHub page.

Note: The total number of frames and the frames per second (FPS) will be automatically configured after adding the initial video.
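Continuing the API sketch, the AnimateDiff settings above would occupy their own entry under "alwayson_scripts". The field names here are assumptions based on the sd-webui-animatediff API; check the extension's GitHub documentation for the exact schema of your installed version:

```python
def animatediff_args(video_source_path: str) -> dict:
    """AnimateDiff part of the payload; field names are assumptions, see lead-in."""
    return {
        "AnimateDiff": {
            "args": [{
                "enable": True,
                "model": "mm_sd_v15_v2.ckpt",  # motion module selected above
                "format": ["MP4"],             # or ["GIF"], ["WEBM"]
                "video_source": video_source_path,
                # Frame count and FPS are left unset: the extension derives
                # them from the source video, as noted above.
            }]
        }
    }
```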


5. Txt2Img Settings (LCM LoRA)

Scroll up and choose your checkpoint: realistic or cartoon, it's your show. For this tutorial, we're sticking to an elegant touch of realism, but feel free to let your creativity loose and try out various checkpoints! The following settings are used to generate the final video animation:

transform-videos-into-any-style-with-animatediff-ip-adapters-a1111-txt2img-settings-stable-diffusion-video2video.webp

  • Checkpoint: Realistic Vision
  • Sampling Method: LCM
  • Sampling Steps: 8
  • Width & Height: 408 x 720 (approximately 9:16)
  • CFG Scale: 2
  • Seed: -1

We added LCM LoRA to make rendering faster. Instead of adding extra keywords to our positive prompt, we use image prompting with our reference image in ControlNet, along with the IP-Adapter. Of course, you can choose to include additional keywords for more specific final video results. You also have the option to improve video quality by employing the upscaling method called "Hires. Fix" and utilizing the "R-ESRGAN 4x+" upscaler.
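For API users, the settings above translate into a txt2img request. A hedged sketch: the LoRA tag's filename and the local API address are assumptions, since they depend on how you saved the LCM LoRA and where the WebUI is running:

```python
import json
from urllib import request

def build_txt2img_payload() -> dict:
    """txt2img settings from this section; the LoRA filename is an assumption."""
    return {
        "prompt": "<lora:lcm-lora-sdv1-5:1>",  # plus any style keywords you like
        "negative_prompt": "",
        "sampler_name": "LCM",
        "steps": 8,
        "width": 408,
        "height": 720,
        "cfg_scale": 2,
        "seed": -1,
    }

def submit(api_base: str = "http://127.0.0.1:7860") -> dict:
    """POST the payload to a locally running WebUI started with --api."""
    req = request.Request(
        f"{api_base}/sdapi/v1/txt2img",
        data=json.dumps(build_txt2img_payload()).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

In a full request, the ControlNet and AnimateDiff settings would be merged into this payload under a single "alwayson_scripts" key.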

6. Examples & File Location (Video 2 Video)

After setting the stage for your video masterpiece, hit "Generate" and watch as LCM LoRA works its magic for swift video creation. Once rendering finishes, you'll find your creation in the output folder "stable-diffusion-webui\outputs\txt2img-images\AnimateDiff", organized by the date of video creation.

Our examples stayed true to simplicity: no "Hires. Fix" upscaling was used. Of course, if you ever want to add a touch of virtual glamour, "Hires. Fix" is there to upscale the fun!


7. Upscale Methods: Styled Video

Within this section, we will display numerous examples of videos that have undergone upscaling transformations. It's crucial to emphasize that you can enhance the video output either within the txt2img settings before generating the video or, alternatively, after the video rendering process is complete by choosing to upscale it at a later stage.

Upscale Method 1: Txt2Img Upscaling (During Video Creation)

The first option is to upscale the video during its initial rendering within the txt2img settings, using the "Hires. Fix" method. Choose the upscaler that matches your content:

  • Realism: R-ESRGAN 4x+ or LDSR (slower)
  • Paintings: ESRGAN_4x
  • Anime: R-ESRGAN 4x+ Anime6B
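The pairing above can be captured in a tiny lookup, which is handy if you script batch renders. The fallback upscaler is our own choice, not an official recommendation:

```python
def pick_upscaler(style: str) -> str:
    """Map a content style to the suggested "Hires. Fix" upscaler."""
    table = {
        "realism": "R-ESRGAN 4x+",  # or "LDSR" if you can accept slower renders
        "paintings": "ESRGAN_4x",
        "anime": "R-ESRGAN 4x+ Anime6B",
    }
    return table.get(style.lower(), "R-ESRGAN 4x+")  # default is an assumption
```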

Upscale Method 2: Extras Tab (After Video Creation)

If you prefer to enhance the transformed video after its initial rendering, you can use the "Extras" tab. For additional details on upscaling a video within Stable Diffusion at no cost, see the dedicated article below:

How to Upscale any Video with AI for Free (Stable Diffusion)

Upscale Method 3: TopazLabs (Paid option)

Unlock unparalleled video quality with Topaz Labs' advanced upscaling, a premium solution that delivers stunning results. While this is a paid option, its accelerated rendering ensures swift transformations, outpacing free alternatives. Upgrade to Topaz Labs for a seamless blend of speed and exceptional video quality that sets your content apart.

Examples of the Upscaled Styled Videos

Presenting some upscaled videos. Focus particularly on the faces in the videos; the enhancement is clearly evident compared to the non-upscaled versions. The examples have been upscaled by a factor of two, from 408x720 to 816x1440. Whether to upscale further is entirely up to you.

8. Troubleshooting

When facing fuzzy or irrelevant outputs while using AnimateDiff alongside ControlNet, the problem likely stems from an incompatible ControlNet version. To resolve this, revert to a ControlNet version that works well with AnimateDiff by following these steps:

  • Navigate to the "extensions/sd-webui-controlnet" directory and open a terminal by typing "cmd" into the file-location (address) bar at the top of the window.

controlnet-cmd-file-location.webp

  • Execute the following command in the terminal: git checkout -b new_branch 10bd9b25f62deab9acb256301bbf3363c42645e7

controlnet-version-revert-compatible-with-animatediff-extension.webp

  • Next, execute the following command in the terminal: git pull

controlnet-git-pull-controlnet-version-compatible-with-animatediff.webp

  • Close the Terminal and Restart Stable Diffusion for the changes to take effect.
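The steps above can also be scripted. A hedged sketch that replays the same git commands from Python; the commit hash is the one quoted in the steps, and running this will modify your ControlNet checkout, so treat it as illustrative:

```python
import subprocess
from pathlib import Path

PIN_COMMIT = "10bd9b25f62deab9acb256301bbf3363c42645e7"  # from the steps above

def pin_commands(commit: str = PIN_COMMIT) -> list:
    """Return the git commands from the steps above, in order."""
    return [
        ["git", "checkout", "-b", "new_branch", commit],
        ["git", "pull"],
    ]

def pin_controlnet(webui_root: Path) -> None:
    """Run the commands inside the ControlNet extension folder, then restart A1111."""
    ext_dir = webui_root / "extensions" / "sd-webui-controlnet"
    for cmd in pin_commands():
        subprocess.run(cmd, cwd=ext_dir, check=True)
```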

With a ControlNet version that is compatible with the AnimateDiff extension, this workflow should function correctly.


9. Conclusion

To sum up, this tutorial has equipped you with the tools to elevate your videos from ordinary to extraordinary, employing the sophisticated techniques of AnimateDiff, ControlNet, and IP-Adapters, all propelled by the rapid rendering capabilities of LCM LoRAs. As you venture into the realm of video-to-video transformations, exploring diverse checkpoints, styles, and video content, may your creative endeavors bring depth and significance to the world of visual storytelling. Happy transforming!

Frequently Asked Questions

Can I customize the style and content of my video transformations?

Absolutely! The tutorial encourages you to explore various checkpoints, styles, and video content to personalize your creations. With AnimateDiff, ControlNet, and IP-Adapters, along with LCM LoRAs, the possibilities for video transformations are limited only by your creativity.

Is LCM LoRA required for this workflow?

LCM LoRA significantly speeds up the rendering process, making video generation faster and more efficient. While not mandatory, it provides a smoother workflow and quicker results in your video-to-video transformations.
