NEW Flux1 Kontext Dev Model for ComfyUI: Image Editing Made Easy

1. Introduction
Welcome to the tutorial on the NEW Flux Kontext dev model and how to utilize it with ComfyUI! If you haven't installed ComfyUI yet, it's crucial to set up your environment correctly before diving into the exciting features of the Flux1 Kontext model.
For those looking to get started with ComfyUI, we have a dedicated guide that walks you through the installation process. You can find it here:
In this tutorial, we will cover the essential requirements for running the Flux1 Kontext model, how to set up your workflow, and provide examples of how to use this powerful tool for image editing and more. By the end of this guide, you will be equipped with the knowledge to effectively utilize the Flux Kontext model in your projects. Let's get started!
2. Requirements for Running the Flux1 Kontext Model
Before you can start using the Flux1 Kontext Dev model in ComfyUI, there are a few essential setup steps. You’ll need to download some base files that are required for image editing and image-to-image workflows.
The base models below are the same for running both the FP8 and the GGUF versions: no matter which variant you choose, you'll use the same VAE, CLIP, and text encoder files.
Step 1: Download Base Files (VAE and text encoder files)
Download the following base files from Hugging Face and place them in the correct file directory for ComfyUI:
| File Name | Huggingface Page | File Directory |
|---|---|---|
| t5xxl_fp16.safetensors | Huggingface Page | ..\ComfyUI\models\clip |
| clip_l.safetensors | Huggingface Page | ..\ComfyUI\models\clip |
| ae.safetensors | Huggingface Page | ..\ComfyUI\models\vae |
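If you prefer the command line, here's a minimal sketch using huggingface-cli (from the huggingface_hub package). The repository IDs below are assumptions for illustration only — confirm the actual locations on the Hugging Face pages linked above before downloading:

```
# Repo IDs are hypothetical -- verify them on the linked Hugging Face pages.
huggingface-cli download comfyanonymous/flux_text_encoders t5xxl_fp16.safetensors \
  --local-dir ComfyUI/models/clip
huggingface-cli download comfyanonymous/flux_text_encoders clip_l.safetensors \
  --local-dir ComfyUI/models/clip
huggingface-cli download black-forest-labs/FLUX.1-dev ae.safetensors \
  --local-dir ComfyUI/models/vae
```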
Step 2: Choose Your Model Version
You have two options for running the Flux1 Kontext Dev model. Download one of the model files below and place it into the following folder in your ComfyUI folder directory: ..\ComfyUI\models\diffusion_models
- FP8 model – ideal for systems with higher VRAM and better performance.
- GGUF model – optimized for systems with 12GB VRAM or less. Choose the version that best matches your available VRAM.
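Not sure which variant fits your hardware? On NVIDIA systems, a quick terminal check (using the nvidia-smi utility that ships with the driver) shows your total VRAM, mirroring the ≤12GB guidance above:

```
# Print total GPU memory; at or below ~12GB, prefer a GGUF quant (e.g. Q3/Q4).
nvidia-smi --query-gpu=memory.total --format=csv,noheader
```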
Before we move on to downloading the workflow itself, we need to make sure our ComfyUI setup is up to date. Let’s do that together next!
3. Setting Up the Flux1 Kontext Workflow in ComfyUI
Before you can start working with the Flux1 Kontext Dev model in ComfyUI, it’s important to ensure your setup is fully updated and configured correctly. This ensures that all required nodes and dependencies are in place and prevents errors during workflow imports.
Follow the steps below to update your ComfyUI installation and import the correct workflow for your chosen model version (FP8 or GGUF).
Step 1: Update ComfyUI
Keeping ComfyUI up to date is crucial for compatibility with newer workflows and nodes.
Option A – Windows Portable Users:
Navigate to this folder in your ComfyUI directory:
```
📁 ComfyUI_windows_portable\
 └── 📁 update\
      └── 📄 update_comfyui.bat
```
- Double-click update_comfyui.bat to run it.
Option B – Non-Windows Users:
Run one of the following commands in your terminal, depending on your environment:
- RunPod users:
```
/workspace/ComfyUI/venv/bin/python -m pip install -r /workspace/ComfyUI/requirements.txt
```
- Other Linux/macOS users:
```
/ComfyUI/venv/bin/python -m pip install -r /ComfyUI/requirements.txt
```
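If you installed ComfyUI by cloning its Git repository instead (a common manual setup, assumed here rather than covered above), updating is typically a pull plus a dependency refresh:

```
cd ComfyUI
git pull
# Re-install dependencies in the same Python environment ComfyUI runs in:
python -m pip install -r requirements.txt
```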
Step 2: Choose and Download Workflow
Depending on whether you’re using the FP8 or GGUF version of the model, download the corresponding workflow JSON file:
- Flux Kontext FP8 Dev Model Workflow – for higher VRAM systems
- Flux Kontext GGUF Dev Model Workflow – for lower VRAM systems (≤12GB)
Step 3: Import the Workflow into ComfyUI
- Open ComfyUI.
- Drag and drop the JSON file onto the canvas.
Once imported, the workflow will be ready to use. In this example, we’ll be working with the Low VRAM GGUF workflow, which utilizes the Unet Loader GGUF node.
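If the drag-and-drop import fails, a common culprit is a truncated or malformed download. As a quick sanity check (assuming Python is available; the filename below is hypothetical), confirm the file parses as valid JSON:

```
# Exits with an error if the workflow file is not valid JSON.
python -m json.tool flux-kontext-gguf-workflow.json > /dev/null && echo "Workflow JSON OK"
```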
4. Flux Kontext Dev: Image-to-Image Generation Configuration
For our example, we’ll use the Low VRAM GGUF Q3 model so more users can try it out easily (≤12GB VRAM needed). The main difference in the workflow is the node used to load the model:
- If you're using the GGUF model, the workflow uses the Unet Loader (GGUF) node.
- If you opt for the FP8 model, the workflow includes the Load Diffusion Model node instead.
Step 1: Load the Initial Image
Start by loading your initial image into the Load Image node. This is the image you want to edit.
Step 2: Select the Model
Within the Unet Loader (GGUF) node, select your GGUF model file; in our case, this is the "flux1-kontext-dev-Q3_K_S.gguf" model.
Step 3: Configure Text Encoders
For the DualCLIPLoader node, select the previously downloaded text encoder models, and don't forget to set the type to flux:
- clip_name1: t5xxl_fp16.safetensors
- clip_name2: clip_l.safetensors
- type: flux
Step 4: Load VAE
Select the previously downloaded ae.safetensors file within the Load VAE node.
Step 5: Provide Your Prompt
Enter a prompt describing how you want to edit your image. In our case, we’ll use this prompt to edit the original image: "keep everything intact, maintain the same character, but change her hair to blue and transform the background into a sunny beach scene with tropical plants."
Step 6: Adjust Workflow Settings
Navigate to the Settings tab and configure the following nodes:
- FluxGuidance: 2.5 (controls how strongly your prompt influences the final result)
- RandomNoise:
  - control_after_generate: randomize (varied outputs every time you generate)
- EmptySD3LatentImage (determines the resolution of your edited output image):
  - width: 1024
  - height: 1024
- KSamplerSelect:
  - sampler_name: euler
- BasicScheduler:
  - scheduler: beta
  - steps: 30 (higher step counts add detail but increase render time)
  - denoise: 1.00 (start from full noise, letting the model generate the image from scratch)
Step 7: Generate
After all settings are configured, click RUN to start the image editing process!
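As a side note, ComfyUI also exposes a small HTTP API, so you can queue the same job without the UI. The sketch below assumes ComfyUI is running locally on its default port (8188) and that you exported the workflow in API format (enable the dev mode options in ComfyUI's settings, then use the API-format save); the filename is hypothetical:

```
# The /prompt endpoint expects a JSON body of the form {"prompt": <workflow>}.
jq -n --slurpfile wf workflow_api.json '{prompt: $wf[0]}' \
  | curl -s -X POST http://127.0.0.1:8188/prompt \
      -H "Content-Type: application/json" -d @-
```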
5. Examples Using the Flux Kontext Dev Model
With everything set up and configured, you’re ready to dive in and see what the Flux Kontext Dev model can do in ComfyUI. This section provides practical examples to help you unlock the model’s impressive capabilities for image editing and creative transformations.
Here are some exciting ways you can use the Flux Kontext Dev model, along with examples:
Example 1: Hairstyle Change
Prompt: Maintain the same woman, desert background, and outfit, but change her hair color to red.
Example 2: Pose Modifications
Prompt: Keep the same woman, outfit, and background, but change her pose so she’s tilting her head slightly to the side, touching her collarbone gently with her fingertips, adding a sensual, engaging expression.
Example 3: Expression Changes
Prompt: Keep the same woman, outfit, and lighting, but change her expression to a sultry half-smile, eyes slightly narrowed with a subtle glint of mischief.
Example 4: Colorization
Prompt: Colorize the image while keeping the same composition and vintage feel. Give the boy light brown hair and warm hazel eyes. Make the dog’s fur golden beige. Add natural skin tones and a soft warm sunlight glow. Preserve the film grain and nostalgic atmosphere
Example 5: Realistic to Anime
Prompt: Transform the portrait into anime-style art, preserving her tousled chestnut hair, smoky makeup, silky camisole, and sensual expression. Keep the same intimate mood.
Example 6: Text Replacement or Editing
Prompt: Replace the text 'joy' with 'BFL' while maintaining the same font style.
Example 7: Background Swapping
Prompt: Keep the same woman, outfit, and lighting, but change the background to a rooftop bar at night.
There are countless other creative use cases you can explore with the Flux Kontext Dev model—whether it’s subtle edits, dramatic transformations, or unique artistic effects. Feel free to experiment and discover what works best for your projects. Next, we’ll briefly cover the best prompting techniques to help you achieve consistent and high-quality image outputs every time.
6. Flux1 Kontext Prompt Techniques
This guide outlines best practices for prompting the Flux1 Kontext Dev Model in ComfyUI. Whether you're making visual edits, transferring styles, or preserving character consistency, these examples will help you get clear, reliable results from your image-to-image workflows.
- Basic Modifications:
  - Simple and direct: "Change the car color to red"
  - Maintain style: "Change to daytime while maintaining the same style of the painting"
- Style Transfer Principles:
  - Clearly name the style: "Transform to Bauhaus art style"
  - Describe characteristics: "Transform to oil painting with visible brushstrokes, thick paint texture"
  - Preserve composition: "Change to Bauhaus style while maintaining the original composition"
- Character Consistency Framework:
  - Specific description: "The woman with short black hair" instead of "she"
  - Preserve features: "while maintaining the same facial features, hairstyle, and expression"
- Text Editing:
  - Use quotes: "Replace 'joy' with 'BFL'"
  - Maintain format: "Replace text while maintaining the same font style"
By following these prompt structures, you can achieve more accurate and consistent image edits while maintaining creative control. Combine clarity with detail, and remember—small adjustments in wording can lead to big improvements in output quality.
7. Conclusion
In conclusion, utilizing the NEW Flux1 Kontext dev model with ComfyUI opens up a world of possibilities for image editing and creative projects. By following this tutorial, you have learned how to set up your environment, install the necessary models, configure your workflow, and explore practical examples of the model in action. Remember, the key to mastering this tool lies in experimentation and practice.
As you continue to work with the Flux Kontext model, don’t hesitate to revisit this guide for reference. Additionally, keep an eye on updates and new features that may enhance your experience further. Happy creating!