How to Generate 4K Images in ComfyUI with FLUX & DyPE

November 1, 2025
ComfyUI
Learn how to generate stunning 4K+ images with Flux and DyPE (Dynamic Position Extrapolation) in ComfyUI. Step-by-step guide for setup, workflow, and ultra-high-res AI results.

1. Introduction

In this tutorial, you’ll learn how to use Flux and DyPE (Dynamic Position Extrapolation) in ComfyUI to create high-resolution images. Discover how to turn your ideas into detailed, eye-catching visuals that stand out. DyPE ensures every part of your image stays consistent and polished, even at 4K and beyond. We’ll guide you step by step, from setup to generating your first ultra-high-res masterpiece, and also explore limitations and practical workflow considerations.

2. System Requirements for Flux & DyPE

Before generating ultra-high-resolution images, ensure your system meets the hardware and software requirements to run the Flux + DyPE workflow smoothly inside ComfyUI. This setup benefits from a capable GPU for faster processing; while local GPUs work, you may also use a cloud GPU provider like RunPod for optimal performance.

Requirement 1: ComfyUI Installed & Updated

To get started, make sure you have ComfyUI installed locally or via a cloud setup. For Windows users, follow this guide:

👉 How to Install ComfyUI Locally on Windows

After installation, open the Manager tab in ComfyUI and click Update ComfyUI to ensure you’re running the latest version. Keeping ComfyUI updated guarantees compatibility with the latest workflows, nodes, and models used by Flux + DyPE.
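
If you installed ComfyUI by cloning the Git repository, you can also update it from a terminal instead of the Manager. Here is a minimal Python sketch under that assumption; the install path is a placeholder, so point it at your own checkout.

```python
import subprocess
import sys
from pathlib import Path

# Placeholder path - adjust to wherever your ComfyUI checkout lives.
COMFYUI_DIR = Path.home() / "ComfyUI"

# Pull the latest code (the terminal equivalent of "Update ComfyUI" in the
# Manager for git-based installs).
subprocess.run(["git", "pull"], cwd=COMFYUI_DIR, check=True)

# Reinstall the requirements in case dependencies changed with the update.
subprocess.run(
    [sys.executable, "-m", "pip", "install", "-r", "requirements.txt"],
    cwd=COMFYUI_DIR,
    check=True,
)
```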

While the workflow can run on a local machine, we highly recommend using a ready-to-use ComfyUI template on RunPod for this setup. Here’s why:

  • Plug-and-Play Setup: no need to manually configure CUDA or PyTorch; everything is ready for Flux + DyPE out of the box.

  • Sage Attention & Triton Acceleration: pre-installed and optimized in the RunPod template, greatly improving generation speed and VRAM efficiency.

  • Persistent Storage with Network Volume: keeps your models and workflows saved, so you don’t have to re-download or set up nodes each session.

You can have a ready-to-use ComfyUI instance running in just a few minutes using the RunPod template below:

👉 How to Run ComfyUI on RunPod with Network Volume

Requirement 2: Download Flux & DyPE Model Files

The Flux + DyPE workflow relies on Flux-based diffusion models, text encoders, and VAEs to generate ultra-high-resolution images with coherence and detail.

Download each of the following models and place them in their respective ComfyUI directories exactly as listed below:

File Name | Hugging Face Download Page | File Directory
flux1-dev-fp8.safetensors | 🤗 Download | ..\ComfyUI\models\diffusion_models
clip_l.safetensors | 🤗 Download | ..\ComfyUI\models\text_encoders
t5xxl_fp16.safetensors | 🤗 Download | ..\ComfyUI\models\text_encoders
ae.safetensors | 🤗 Download | ..\ComfyUI\models\vae
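
If you prefer to script these downloads, the sketch below uses the huggingface_hub library to pull each file into the matching folder. The repository IDs are assumptions based on the commonly used Comfy-Org and comfyanonymous mirrors, so verify them against the download pages linked in the table; the FLUX.1-dev VAE repository is gated and requires accepting the license and logging in with a Hugging Face token first.

```python
from pathlib import Path

from huggingface_hub import hf_hub_download  # pip install huggingface_hub

COMFYUI_MODELS = Path("ComfyUI/models")  # adjust to your install path

# (repo_id, filename, target subfolder) - repo IDs are assumptions; verify
# them against the download links in the table above.
FILES = [
    ("Comfy-Org/flux1-dev", "flux1-dev-fp8.safetensors", "diffusion_models"),
    ("comfyanonymous/flux_text_encoders", "clip_l.safetensors", "text_encoders"),
    ("comfyanonymous/flux_text_encoders", "t5xxl_fp16.safetensors", "text_encoders"),
    # Gated repo: accept the FLUX.1-dev license on Hugging Face and run
    # `huggingface-cli login` before downloading the VAE.
    ("black-forest-labs/FLUX.1-dev", "ae.safetensors", "vae"),
]

for repo_id, filename, subfolder in FILES:
    target = COMFYUI_MODELS / subfolder
    target.mkdir(parents=True, exist_ok=True)
    path = hf_hub_download(repo_id=repo_id, filename=filename, local_dir=target)
    print(f"Downloaded {filename} -> {path}")
```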

💡 Note: You can also use the Flux Krea Dev FP8 model as an alternative model with this workflow. It’s fully compatible with Flux + DyPE and can produce slightly different visual characteristics depending on your prompt and settings. Place it in the same folder: ..\ComfyUI\models\diffusion_models

Requirement 3: Verify Folder Structure

Before running the Flux + DyPE workflow, make sure all downloaded model files are placed in the correct ComfyUI subfolders. Your folder structure should look like this:

📁 ComfyUI/
└── 📁 models/
    ├── 📁 diffusion_models/
    │   └── flux1-dev-fp8.safetensors or flux1-krea-dev_fp8_scaled.safetensors
    ├── 📁 vae/
    │   └── ae.safetensors
    └── 📁 text_encoders/
        ├── clip_l.safetensors
        └── t5xxl_fp16.safetensors
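
To double-check the layout before launching ComfyUI, you can run a quick sanity check like the one below. The base path is a placeholder for your own install, and the alternative Krea filename from the note above is accepted as well.

```python
from pathlib import Path

MODELS = Path("ComfyUI/models")  # placeholder - point at your own install

# Either Flux dev file name is fine for the diffusion model slot.
DIFFUSION_MODELS = [
    "flux1-dev-fp8.safetensors",
    "flux1-krea-dev_fp8_scaled.safetensors",
]
REQUIRED = {
    "text_encoders": ["clip_l.safetensors", "t5xxl_fp16.safetensors"],
    "vae": ["ae.safetensors"],
}

has_model = any((MODELS / "diffusion_models" / name).is_file() for name in DIFFUSION_MODELS)
print("diffusion_models:", "OK" if has_model else "MISSING Flux model file")

for folder, names in REQUIRED.items():
    for name in names:
        status = "OK" if (MODELS / folder / name).is_file() else "MISSING"
        print(f"{folder}/{name}: {status}")
```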

Once everything is installed and organized, you’re ready to load the Flux + DyPE workflow in ComfyUI and start generating ultra-high-resolution images. DyPE dynamically adjusts the model’s positional encodings during sampling, so detail and coherence hold up even well beyond Flux’s native training resolution.

3. Download & Load the Flux & DyPE FP8 Workflow

Now that your environment and model files are set up, it’s time to load and configure the Flux + DyPE workflow in ComfyUI. This setup ensures all model connections, text encoders, VAEs, and DyPE nodes work together seamlessly for ultra-high-resolution image generation. Once configured, you’ll be ready to generate stunning 4K+ images with exceptional detail and coherence.

Load the Flux + DyPE FP8 Workflow JSON File

👉 Download the Flux + DyPE FP8 Workflow JSON file and drag it directly into your ComfyUI canvas.

This workflow comes fully pre-arranged with all essential nodes, model references, and DyPE components required for high-quality, coherent ultra-high-resolution image generation.

Install Missing Nodes

If any nodes appear in red, it means certain custom nodes are missing.

To fix this:

  1. Open the Manager tab in ComfyUI.

  2. Click Install Missing Custom Nodes.

  3. After installation, restart ComfyUI to apply the changes.

This will ensure all missing nodes are properly installed and ready to generate images seamlessly.

4. Running the Flux & DyPE Workflow

Now that you have successfully loaded the Flux + DyPE workflow, it’s time to run it and generate your ultra-high-resolution images. This step involves configuring key parameters to ensure your output meets your expectations while maintaining coherence and detail.

Step 1: Set Target Resolution

Start with the EmptySD3LatentImage node and set your target resolution to 2160 × 2160 pixels for a square (1:1) image at 4K height.

Next, in the DyPE for Flux node, set the width and height to match your target aspect ratio, keeping both at or below 1024 pixels for performance reasons. For a 1:1 image, set width to 1024 and height to 1024.

⚠️ Important: Keeping the DyPE settings aligned with your target aspect ratio ensures DyPE can generate a coherent and seamless ultra-high-resolution image without overloading your system.
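
To put those numbers in perspective: Flux’s VAE works on a latent grid 8× smaller than the pixel resolution, and DyPE’s job is to extrapolate the model’s positional encoding from the roughly 1024-pixel base toward your larger target. The quick calculation below is purely illustrative.

```python
# Illustrative arithmetic only - not part of the workflow itself.
target_w, target_h = 2160, 2160   # EmptySD3LatentImage resolution
base_w, base_h = 1024, 1024       # DyPE for Flux node (1:1 base)

# Flux's VAE downsamples by a factor of 8, so the sampler actually
# works on a latent grid of this size:
latent = (target_w // 8, target_h // 8)
print("Latent grid:", latent)                                 # (270, 270)

# DyPE has to extrapolate positions roughly this far beyond the base:
print("Extrapolation factor:", round(target_w / base_w, 2))   # ~2.11
```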

Step 2: Power LoRA Loader (Optional)

To enhance style or add more detail using Flux Dev LoRAs, use the Power LoRA Loader node. This allows you to integrate extra LoRAs smoothly into your workflow. Remember to include the appropriate trigger words in your prompt when applying a LoRA.

Step 3: Configure Prompt

Enter a descriptive prompt in the Text Encode node to guide Flux in generating your image. For example:

Photorealistic image of a Korean woman, captured in a Cowboy shot, from below, with a close-up perspective. She has a beautiful face with makeup, red lipstick, and black eyeshadow, and captivating beautiful eyes looking directly at the viewer. Her hair is long and wavy, cascading around her. She has large breasts and wide hips, and is toned. She has a prominent tattoo, including a dragon tattoo on her leg, and elegant jewelry. She is wearing a black and gold silk dress with an elegant side slit that explicitly exposes her hips and reveals ample cleavage. The fabric is delicately draped, showcasing its intricate design. The setting is an Oriental palace, Asian-themed, inside, bathed in beautiful lights. The overall ambiance is luxurious and high fashion. 4K resolution, detailed skin texture, detailed silk texture, dramatic lighting, cinematic composition, captivating elegance, vibrant colors, exquisite details, frame filled with luxury, high fashion ambiance, modern sensibility, ambient lighting, professional effects, sleek silhouette, refined composition, polished and stylish, exposed hips, cleavage.

Experiment with different scenes, styles, and descriptors to achieve your vision.

Step 4: Adjust Sampler Settings

In the KSampler node, set the number of steps. A typical value is 30 steps, balancing generation speed and image detail. You can decrease this for faster generations or increase it for more refined output.
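
For reference, the values below are the kind of starting point Flux dev workflows are commonly sampled with. They are illustrative assumptions rather than this workflow’s exact defaults, so keep whatever the downloaded JSON ships with unless you have a reason to change it.

```python
# Typical starting values for a Flux dev KSampler - illustrative, not the
# workflow's canonical settings.
ksampler = {
    "steps": 30,            # ~30 balances speed and detail; lower = faster, higher = more refined
    "cfg": 1.0,             # Flux dev is usually run without classic CFG (guidance handled elsewhere)
    "sampler_name": "euler",
    "scheduler": "simple",
    "denoise": 1.0,         # full denoise for text-to-image generation
}
```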

Step 5: Optional Performance Nodes

If you are using the Next Diffusion - Sage Attention Template on RunPod, you can use two additional performance nodes:

  • Patch Sage Attention KJ

  • Model Patch Torch Settings

These speed up rendering via Sage Attention and Triton Acceleration.

⚡ Tip: If Sage Attention or Triton is not installed locally, you can disable both nodes by selecting them and pressing Ctrl + B.

Step 6: Run Generation

After configuring your resolution, prompt, sampler steps, and optional LoRAs, run the workflow.

DyPE will dynamically extrapolate positions, producing a seamless ultra-high-resolution image.

⚡ On an RTX 4090 (24GB VRAM), a 2160 × 2160 image takes roughly 90 seconds. For this workflow I highly recommend using a cloud GPU provider like RunPod.
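
If you would rather queue generations without the browser UI, for example on a headless RunPod pod, ComfyUI also accepts jobs over its HTTP API. The sketch below assumes you exported the workflow with "Save (API Format)" (enable the dev mode options in the ComfyUI settings if you don’t see it) and that the server is listening on the default port 8188; the filename is a placeholder.

```python
import json
import urllib.request

# Workflow exported via "Save (API Format)" in ComfyUI (placeholder filename).
with open("flux_dype_api_workflow.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# Default local ComfyUI address; replace with your RunPod endpoint if remote.
payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # returns a prompt_id you can poll later
```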

5. DyPE Limitations

While DyPE is powerful for generating ultra-high-resolution images, it has an important limitation: it works best with 1:1 resolutions, such as 2160 × 2160 pixels (4K). Using other aspect ratios can cause artifacts such as duplicated heads, extra limbs, stretched features, or smaller glitches, because DyPE is optimized for square canvases.

Here is an example of a standard 16:9 4K image (3840 × 2160). As you can see, the non-square resolution leads to distorted proportions, duplicated features, and image artifacts compared to the 1:1 output.
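
Because this is easy to trip over while experimenting, a small guard like the sketch below can flag non-square targets before you queue a long render. It is just a convenience check, not part of the workflow itself.

```python
def check_dype_resolution(width: int, height: int) -> None:
    """Warn when a target resolution is likely to cause DyPE artifacts."""
    if width != height:
        print(
            f"Warning: {width} x {height} is not 1:1. DyPE works best on square "
            "canvases; expect possible duplicated features or stretched proportions."
        )
    else:
        print(f"{width} x {height} is square - good to go.")

check_dype_resolution(2160, 2160)  # OK
check_dype_resolution(3840, 2160)  # triggers the warning
```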

6. Conclusion

Congratulations! You’ve learned how to use Flux and DyPE in ComfyUI to generate ultra-high-resolution images with 4K+ detail. By leveraging DyPE’s dynamic position extrapolation, you can create coherent, visually stunning images perfect for digital art, concept design, marketing visuals, and creative storytelling. Experimenting with prompts, sampler steps, and DyPE settings allows you to unlock the full potential of your Flux image generation workflow. With this knowledge, you’re ready to produce high-resolution, professional-quality images that captivate and inspire.
