How to Run Flux Kontext Dev on RunPod

July 8, 2025
ComfyUI
RunPod
Learn how to set up Flux Kontext Dev on RunPod using ComfyUI. This tutorial covers account setup, pod creation, and workflow management for image-to-image editing.

1. Introduction

Flux Kontext Dev is the latest model in the Flux family—designed for high-quality image editing, creative variation, and concept blending using both text and image prompts. Whether you want to make subtle changes to an image, generate multiple stylistic versions, or merge elements from several references, Flux Kontext Dev gives you the flexibility and control to do it all.

In this tutorial, you’ll learn how to run Flux Kontext Dev on RunPod using a one-click ComfyUI template, download the required models, and import a set of ready-to-go workflows so you can start generating results in minutes. We’ll begin with a simple editing setup—then move into advanced multi-image pipelines that let you blend ideas across different visual sources.

2. Creating and Funding Your RunPod Account

To get started with running Flux Kontext Dev using ComfyUI on RunPod, first create a RunPod account. Visit the RunPod website and sign up using your preferred email or social login.

Once your account is active, you’ll need to add funds to deploy GPU pods. We recommend an initial deposit of at least $10 to cover your compute and storage needs during setup and testing. You can add funds easily in the Billing section of your RunPod dashboard.

💡 Tip: Use our RunPod link to sign up and load at least $10 into your account, and you’ll instantly receive a bonus credit of $5 to $500, giving you extra GPU time to jumpstart your projects!

3. Deploying the ComfyUI Template on RunPod

Now that your RunPod account is funded, it’s time to deploy a GPU pod using our custom Next Diffusion – ComfyUI template. This template provides a ready-to-go ComfyUI environment in the cloud, but you’ll still need to download the required Flux Kontext Dev models and import custom workflows to get everything running smoothly.

For this example, we’ll choose RunPod Secure Cloud, which is necessary if you want to attach a network volume for persistent storage. A network volume acts like an external hard drive in the cloud that stays connected to your pod across sessions. This means all your downloaded models, imported workflows, and generated outputs are saved safely — so when you restart your pod, ComfyUI boots up almost instantly with everything ready to go. No need to re-download large files or set up your workflows again.

If you want a detailed explanation of network volumes or how to set up persistent storage, check out our full guide here: 👉 How to Run ComfyUI on RunPod with Network Volume

Deploying Your Pod & GPU Selection

  • From the RunPod sidebar, go to Pods → Deploy New Pod.

  • At the top, choose your cloud option:

    • Select Secure Cloud if you want to attach a network volume for persistent storage.

    • Or choose Community Cloud if you don’t need persistent storage and want a simpler, more affordable option.

  • If you’ve created a network volume, attach it using the dropdown in the deployment options (only available on Secure Cloud).

  • Choose a GPU with at least 24GB of VRAM — we recommend the RTX 4090 for optimal performance.

Selecting the Next Diffusion - ComfyUI Template

Now it’s time to pick the right Template for your pod.

  • Scroll down and click Change Template

  • Look for Next Diffusion – ComfyUI (if you cannot find it, click the following link to select the template automatically)

  • Select it

    This template will ensure that ComfyUI and the ComfyUI Manager are properly installed and ready for reuse on your persistent network volume storage.

Final Settings

  • GPU Count: Keep this at 1

  • Pricing Type: Choose On-Demand

    At the time of writing, this costs around $0.69/hour for an RTX 4090 — not bad for high-end performance.

Launch Your Pod

Scroll to the bottom and click Deploy On-Demand to initialize your pod.

You’ll be redirected to the My Pods section where your selected GPU instance will begin spinning up with the ComfyUI template.

After Deployment

After deploying your pod, head to the Pods section in the sidebar. Find and expand the pod you just launched — its status should switch to “Running” shortly.

Note: If you're not using a network volume, the initialization process may take longer as ComfyUI and its dependencies need to be installed from scratch. You can click the Logs tab to monitor the setup progress in real time.

Accessing Your VS Code Server

With your pod fully initialized and ready, let’s open the VS Code Server to start working on your ComfyUI setup directly inside the pod.

  1. Click the Connect button on your pod.

  2. Select HTTP Service → :8888 (VS Code Server)

This will open a browser-based VS Code environment, which acts as your workspace inside the pod. It gives you direct access to your ComfyUI file system, including folders for models, workflows, scripts, outputs, and more—everything you need to manage and customize your setup.

Here's what the VS Code interface looks like when accessed through port 8888 👇

In the next section, we’ll use this interface to download the models and files required for Flux Kontext Dev, so you can get your workflow up and running in no time.

4. Flux Kontext Dev Setup: Downloading Required Models and Files

Now that your pod is running and you’ve opened the VS Code Server on port 8888, it’s time to set up your environment for the Flux Kontext Dev workflow.

This cloud-based interface gives you full access to your ComfyUI file system — models, extensions, scripts, and everything else lives here.

This is your home base — where you’ll manage files, adjust settings, and prepare the core components needed to power image-to-image workflows with Flux Kontext Dev.

Opening the Terminal in VS Code Server

To get started with the setup, you’ll need to access the terminal inside your cloud-based workspace. Here’s how:

  1. Open the Terminal Panel
    Press Ctrl + J to open the terminal instantly. Alternatively, click the Toggle Panel icon in the top-right corner of VS Code.


    The terminal should open in the /workspace directory by default. That’s exactly where you need to be to run the setup commands.

  2. You're Ready to Run Commands
    From here, you’ll be downloading all the required model files directly into the correct folders with a single command.

Overview of Required Files

Here’s a quick look at the files we’re going to download — they’re essential to making Flux Kontext Dev run properly:

  • 1. Variational Autoencoder (VAE)
    Required to decode latent representations into images.
    🔗 ae.safetensors
    📁 Save to: ComfyUI/models/vae/

  • 2. CLIP and Text Encoder Models
    Used for prompt processing and conditioning.
    🔗 t5xxl_fp16.safetensors
    🔗 clip_l.safetensors
    📁 Save to: ComfyUI/models/clip/

  • 3. Flux Kontext Dev Model (FP8)
    The main diffusion model used in your workflow.
    🔗 flux1-dev-kontext_fp8_scaled.safetensors
    📁 Save to: ComfyUI/models/diffusion_models/

Downloading the Files via Terminal

Now that your terminal is open and you're in the /workspace directory, copy & run the command below to download all required model files into their correct folders. This will set up everything needed for the Flux Kontext Dev workflow:

```bash
cd ComfyUI/models/vae && \
wget -O ae.safetensors https://huggingface.co/Comfy-Org/HiDream-I1_ComfyUI/resolve/main/split_files/vae/ae.safetensors && \
cd ../clip && \
wget -O t5xxl_fp16.safetensors https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp16.safetensors && \
wget -O clip_l.safetensors https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors && \
cd ../diffusion_models && \
wget -O flux1-dev-kontext_fp8_scaled.safetensors https://huggingface.co/Comfy-Org/flux1-kontext-dev_ComfyUI/resolve/main/split_files/diffusion_models/flux1-dev-kontext_fp8_scaled.safetensors
```

Here’s how it looks when pasted into the terminal inside your VS Code Server (from the /workspace directory):

Once pasted, press Enter and let the model files download. This may take a short while depending on your pod's network speed.

Verifying Your File Structure

Once everything is downloaded and organized correctly, your directory tree should look like this:

```
📂 ComfyUI/
├── 📂 models/
│   ├── 📂 clip/
│   │   ├── clip_l.safetensors
│   │   └── t5xxl_fp16.safetensors
│   ├── 📂 vae/
│   │   └── ae.safetensors
│   └── 📂 diffusion_models/
│       └── flux1-dev-kontext_fp8_scaled.safetensors
```
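If you'd rather confirm this from the terminal than eyeball the file explorer, a small script like the sketch below reports which of the required files are present. The paths simply mirror the download commands above; adjust the base directory if your layout differs.

```shell
#!/usr/bin/env bash
# Sketch: report which of the required Flux Kontext Dev model files exist
# under a given models directory. Paths mirror the download commands above.
check_models() {
  local base="$1" missing=0 f
  local files=(
    "vae/ae.safetensors"
    "clip/t5xxl_fp16.safetensors"
    "clip/clip_l.safetensors"
    "diffusion_models/flux1-dev-kontext_fp8_scaled.safetensors"
  )
  for f in "${files[@]}"; do
    if [ -f "$base/$f" ]; then
      echo "OK      $f"
    else
      echo "MISSING $f"
      missing=$((missing + 1))
    fi
  done
  return "$missing"
}

# On the pod, run: check_models /workspace/ComfyUI/models
```

The function's return code is the number of missing files, so `check_models /workspace/ComfyUI/models && echo "all set"` only prints when everything is in place.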

With your model files in place, you're ready to launch ComfyUI and start working inside the Flux Kontext Dev workflow environment.

5. Launching ComfyUI with GPU Access

Now that your environment is fully prepped with the required files, it’s time to launch ComfyUI and access the interface.

Starting ComfyUI from VS Code

We’ll use a script that’s already included in your workspace to launch ComfyUI with GPU acceleration.

Here’s how to do it:

  1. Open the Terminal
    Make sure you're in the /workspace root directory inside VS Code Server.

  2. Run the GPU startup script
    Paste the following command and press Enter:

    ```bash
    ./run_gpu.sh
    ```

This script initializes ComfyUI with full GPU support. It works because the run_gpu.sh file is located directly inside your /workspace directory — no need to navigate elsewhere.

👇 Here’s what that looks like in your terminal before hitting Enter:

Once the initialization is complete, you'll see a message confirming that the ComfyUI interface is live on port 8188.

⚠️ Important: Leave this terminal session running — it keeps ComfyUI alive and shows helpful log info for troubleshooting.

Opening the ComfyUI Interface

  • Return to the Pods section in your RunPod dashboard.

  • Click Connect on your active pod.

  • Select HTTP Service → :8188 to launch ComfyUI in a browser tab.

🎉 ComfyUI is now live with GPU acceleration, and your Flux Kontext Dev environment is fully set up.
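If you ever want to reach the interface without clicking through the dashboard, RunPod typically exposes HTTP services at `https://<pod-id>-<port>.proxy.runpod.net`. That naming scheme is an assumption based on RunPod's proxy convention, so verify it against the URL shown in your pod's Connect menu. A tiny helper to build the URL:

```shell
# Sketch: build the proxy URL RunPod uses for a pod's exposed HTTP port.
# Assumption: the https://<pod-id>-<port>.proxy.runpod.net naming scheme;
# confirm against the URL your Connect menu actually opens.
runpod_http_url() {
  echo "https://$1-$2.proxy.runpod.net"
}

# Example: ComfyUI on port 8188 for a pod with ID "abc123xyz"
# runpod_http_url abc123xyz 8188
```

Once ComfyUI is live, `curl -s "$(runpod_http_url <pod-id> 8188)/system_stats"` should return a JSON blob of system info, which makes for a quick scripted health check.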

In the next section we’ll walk through how to import the Flux Kontext Dev workflow into ComfyUI and run your first image-to-image generation using prompt-based editing.

6. Setting Up the Flux Kontext Dev Workflow in ComfyUI

Now that ComfyUI is running and your essential models are downloaded, it’s time to load and configure the Flux Kontext Dev workflow. This setup enables powerful image-to-image editing with prompt-driven adjustments, allowing you to transform images with precision and creativity.

1. Load the Flux Kontext Dev FP8 Workflow

Start by loading the Flux Kontext Dev workflow:

👉 Download the Flux Kontext Dev FP8 workflow JSON file and drag it into your ComfyUI canvas.

This workflow template includes all necessary nodes arranged for smooth prompt-based image editing and generation.

2. Configure the Workflow Nodes

Double-check the key nodes in the workflow to ensure all model references and parameters are correctly set:

| Node | Configuration |
| --- | --- |
| Load Diffusion Model | flux1-dev-kontext_fp8_scaled.safetensors |
| DualCLIPLoader | clip_name1: t5xxl_fp16.safetensors, clip_name2: clip_l.safetensors, type: flux |
| Load VAE | ae.safetensors |
| FluxGuidance | 2.5 (controls how strongly the prompt affects the image) |
| RandomNoise | control_after_generate: random (adds variation to each output) |
| EmptySD3LatentImage | width: 1024, height: 1024 (sets the resolution of the output image) |
| KSamplerSelect | sampler_name: euler |
| BasicScheduler | scheduler: beta, steps: 30, denoise: 1.00 |
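As a side note, once the workflow runs in the browser you can also queue it programmatically: ComfyUI's `/prompt` endpoint accepts the workflow in API format, wrapped in a `{"prompt": ...}` envelope (export the API format via ComfyUI's "Save (API Format)" option). A minimal sketch, where `workflow_api.json` is a hypothetical name for your own export:

```shell
# Sketch: wrap an API-format workflow export in the JSON envelope that
# ComfyUI's /prompt endpoint expects. "workflow_api.json" is a hypothetical
# filename standing in for your own export.
build_prompt_payload() {
  echo "{\"prompt\": $(cat "$1")}"
}

# On the pod, queue the job against the local ComfyUI instance:
# build_prompt_payload workflow_api.json | \
#   curl -s -X POST http://127.0.0.1:8188/prompt \
#        -H "Content-Type: application/json" -d @-
```

This can be handy for batch runs, but for this tutorial the browser interface is all you need.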

3. Test with an Image-to-Image Prompt

Now that your Flux Kontext Dev workflow is up and running, it’s time to put it to the test. Using image-to-image prompts, you can transform your source images with precise, creative control—whether that’s tweaking an outfit, changing lighting, or completely reimagining a scene.

Getting started is easy:

  1. Upload your reference image into the Load Image node.

  2. Adjust your prompt to describe the desired changes.

  3. Click Run to generate the edited image.

  4. Review and fine-tune as needed to perfect your result.

Below are some professional example prompts paired with comparison images showcasing how Flux Kontext Dev excels at Character-Focused Edits—from clothing and pose changes to creating detailed mugshots and anime-inspired headshots. These examples highlight the versatility and precision you can achieve when refining your character visuals.

Fashion & Styling — Outfit Change Examples

Prompt:

Keep the woman’s pose, expression, and clean white studio background exactly the same. Outfit her in a bodycon midi skirt with a bold animal print that emphasizes her hips, combined with a silky white blouse featuring a deep V-neck that reveals tasteful cleavage.

Now that you've seen how a single prompt can transform a look, here are more curated examples showcasing different outfit styles and visual refinements — all built on the same character, pose, and background. These demonstrate how you can creatively guide wardrobe, mood, and aesthetics with just a prompt. Perfect for fashion look books!

Note: All individual examples above were rendered at 720x1280 with 30 steps, taking ~30 seconds each on an RTX 4090 (24GB VRAM). Keep in mind, the first render may take a bit longer while the system initializes.

Pose & Expression Edits — Dynamic Character Variations

Prompt:

Keep the woman’s outfit, hair, and styling the same: brunette hair, glossy red lips, white blouse with cleavage, and cream mini skirt. Set against a clean white studio background with soft lighting. Now have her pose confidently with one hand resting on her hip and the other touching her neck. Her body turned slightly to the side, with a strong but soft facial expression. She should stand tall, emphasizing her curves and attitude.

Now that you’ve seen how a single prompt can transform posture and expression, here are curated examples highlighting a range of poses, expressions, and moods — all built on the same character, outfit, and background. Perfect for editorial shoots, lookbooks, and creative storytelling.

The examples above focused on pose, outfit & expression transformations — proving how much you can shift mood, style, and attitude with just a single prompt. But this is only the beginning.

These pose and expression variations aren’t just great for creative editing — they’re also a solid foundation for building your own custom dataset. By generating consistent character shots across different angles and moods, you can prepare high-quality image sets ideal for Flux Dev LoRA training. See this article for more info on preparing and training LoRA models with Flux Dev 👇

How to Train a Flux LoRA with FluxGym on RunPod

You can also change the aspect ratio — simply set your dimensions to 1280x720 for a 16:9 layout. Combined with the right prompt, this allows for dynamic wide-angle compositions from your original 9:16 image.
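When picking non-square dimensions, it helps to keep both sides divisible by 8, a common constraint for latent diffusion models. A quick helper (a sketch, not part of the workflow itself) to derive a height from a width and target aspect ratio:

```shell
# Sketch: compute the height for a given width and aspect ratio, rounded
# down to a multiple of 8 (latent diffusion models generally expect
# dimensions divisible by 8).
height_for_aspect() {  # usage: height_for_aspect WIDTH ASPECT_W ASPECT_H
  local h=$(( $1 * $3 / $2 ))
  echo $(( h / 8 * 8 ))
}

# height_for_aspect 1280 16 9  → 720  (the 16:9 layout mentioned above)
# height_for_aspect 1024 4 5   → 1280 (a 4:5 portrait crop)
```

Plug the resulting numbers into the EmptySD3LatentImage node's width and height fields.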

If you're ready to push the boundaries further, Flux Kontext Dev offers powerful multi-image workflows that unlock even more creative control. Blend different reference shots, evolve a character across scenes, or merge multiple compositions with precision. Let your imagination go wild — from complex style boards to cinematic sequences, this is where prompt-driven image editing steps up to the next level.

7. BONUS: Multi-Image Flux Kontext Dev Workflows

If you want to go beyond single-image editing, Flux Kontext Dev offers powerful multi-image workflows that let you creatively blend and control multiple source images at once. Whether you're refining fashion shoots, swapping outfits, or fine-tuning complex scenes, these workflows give you flexible, precise ways to shape your final output.

To get you started quickly, we’ve built two distinct workflows tailored to different creative needs:

  • Workflow A — Chained Reference Latents
    Best for precise edits where each image independently influences the result — like controlling face and background or clothing and pose separately. It requires more VRAM and takes longer to run due to multiple encodes, but rewards you with exceptional detail and control.
    👉 Download Workflow A

  • Workflow B — Stitched Canvas
    Ideal for quick drafts or merging outfits when your references share similar lighting and angles. This workflow is faster and simpler, trading some granular control for efficiency.
    👉 Download Workflow B

For full step-by-step instructions, in-depth explanations, and tips on how to make the most of these multi-image workflows, check out the complete guide here 👇

Multi-Image Flux Kontext Dev Workflows in ComfyUI

8. Conclusion

Congratulations! You’ve now completed the entire process of setting up Flux Kontext Dev on RunPod — from creating and funding your account, deploying ComfyUI with GPU access, to downloading models and configuring powerful workflows.

With both single-image and multi-image editing options at your fingertips, you’re equipped with flexible tools that make high-quality, prompt-driven image editing accessible and efficient. Whether you’re a beginner or looking to enhance your creative workflow, you can achieve professional results without the need for a high-end local GPU.

Running Flux Kontext Dev on RunPod gives you the freedom to experiment with fashion edits, branding visuals, and complex scene manipulations — all fully scalable and ready to grow with your projects.
