How to Run ComfyUI on RunPod
Table of Contents
- 1. Introduction to Running ComfyUI
- 2. Create and Fund Your RunPod Account
- 3. Creating a Network Volume and Choosing a Region
- 4. Deploying the ComfyUI Pod
- 5. Verifying the Pod & Installing ComfyUI
- 6. Launching ComfyUI from VS Code Server
- 7. Running Your First ComfyUI Workflow
- 8. Bonus: Monitor Disk Usage in Seconds
- 9. Conclusion: You’re All Set!
1. Introduction to Running ComfyUI
In the world of generative AI, ComfyUI has become a go-to tool for developers, creators, and researchers alike. Known for its modular, node-based workflow system, ComfyUI makes building and experimenting with AI pipelines both intuitive and powerful. However, many of its advanced workflows demand high-performance GPUs to run efficiently — and that’s where RunPod comes in. RunPod offers scalable, on-demand cloud GPUs, giving you access to the power you need without investing in expensive hardware.
In this guide, we’ll walk you through the entire process of running ComfyUI on RunPod — from setting up your account and creating a persistent network volume to deploying a GPU pod and running your first workflow. Whether you’re generating images, testing custom models, or building new AI tools, this guide will get you up and running with ComfyUI in the cloud.
2. Create and Fund Your RunPod Account
To begin your journey with ComfyUI on RunPod, the first step is to create a RunPod account. Head over to the RunPod website and sign up for a new account using your preferred credentials.
Once your account is set up, you'll need to add funds to begin deploying GPU pods. A minimum deposit of $10 is recommended to get started. You can easily do this through the Billing section of your RunPod dashboard.
> 💡 Tip: This initial funding covers your GPU pod usage and storage, ensuring you don’t run into interruptions during setup or testing.
Now that your account is funded, the next step is to create a persistent storage solution to retain your ComfyUI setup and models across sessions.
3. Creating a Network Volume and Choosing a Region
ComfyUI workflows often involve large model files and custom node setups, which can be tedious to redownload or reconfigure each time. This is where RunPod’s Network Volumes come in.
A Network Volume is persistent cloud storage that connects to your GPU pods. It allows you to retain your ComfyUI installation, downloaded models, extensions, and custom workflows even after your pod has been stopped or terminated. Without a network volume, your data will be wiped when a pod is terminated — meaning you’d need to redo the entire setup every time.
How to Create a Network Volume on RunPod
1. **Log in and access the “Storage” section**
   After logging into your RunPod dashboard — and with some funding in your account — click on “Storage” in the left-hand sidebar. This is where you'll manage your network storage volumes.
2. **Click “New Network Volume”**
   On the Storage page, click the “New Network Volume” button (typically at the top left). This will open the network volume creation form.
3. **Select your region and name your volume**
   Choose a datacenter region close to where you’ll be deploying your GPU pods. This helps minimize latency and improves performance. As you select a region, the available GPUs for that location will appear on the right side of the screen — use this to determine the best fit for your needs. Next, enter a clear, descriptive name for your volume, such as “ComfyUI Storage”, to keep things organized.
4. **Set your storage size**
   Choose how much space you’ll need. We suggest starting with 100GB, which should comfortably store your models, ComfyUI install, and workflows.
5. **Review and create your volume**
   Once you’ve filled in all the details, click “Create Network Volume” to provision it.
💡 Pro Tip: Before creating your volume, check which region offers the GPUs you plan to use consistently — this helps avoid having to move large files later on.
⚠️ Important Notes on Network Volumes
- **Pricing**
  Network Volumes cost $0.07 per GB per month — that’s about $7/month for 100GB (see the quick cost check after this list).
- **Persistence**
  Your volume retains all ComfyUI files, models, and workflows — even after a pod is stopped or deleted — saving you from re-downloading and reconfiguring everything.
- **Region Lock-In**
  Volumes are region-specific. If you change GPU regions later (e.g., EU-RO → US-CA), you’ll need to manually transfer your data to a new volume in that region.
- **Volume Size Is One-Way**
  You can increase your volume size later, but you can’t decrease it. It’s smart to start with a modest size (e.g., 50–100GB) and scale up as needed.
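The math behind that pricing note is simple: monthly cost is volume size times the per-GB rate. Here is a quick shell check using the 100GB example from above (the rate is the one quoted at the time of writing and may change):

```bash
# Monthly network volume cost = size in GB x rate ($0.07/GB/month at the time of writing)
awk 'BEGIN { size_gb = 100; rate = 0.07; printf "USD %.2f per month\n", size_gb * rate }'
```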
For this guide, we’ll select the EU-RO-1 region, as it currently offers great access to the RTX 4090 (24GB VRAM) — a powerful and reliable option for most ComfyUI workflows. With your network volume ready, let’s move on to deploying your first pod using the Next Diffusion – ComfyUI template.
4. Deploying the ComfyUI Pod
Now that your Network Volume is ready, it’s time to deploy a pod using the Next Diffusion – ComfyUI template. This custom template is designed to install both ComfyUI and ComfyUI Manager onto your Network Volume, ensuring everything is neatly organized and persistent across sessions.
Accessing Your Network Volume
In the previous step, we created our network volume — now let’s put it to use.
1. Head over to the Storage section in the left sidebar.
   💡 You may need to refresh the page for your new volume to appear.
2. You should now see your created volume (e.g., ComfyUI Storage – 100GB). Click the Deploy button next to it.
3. This redirects you to the Pods page, where the volume is already selected in the navbar and you’re placed in the Secure Cloud section.
🚫 Note: Network Volumes are not compatible with the Community Cloud — only Secure Cloud supports them.
GPU Selection
Next, choose an available GPU you'd like to use from the list. For this tutorial, we’ll go with the RTX 4090, which is great for running heavy ComfyUI workflows.
⚡ Tip: For the initial setup, you can select a cheaper GPU, since the first launch mainly installs ComfyUI and Manager onto your volume — this process can take a while, and using a lower-cost GPU can save you money. But hey, we’re going all in with the RTX 4090 this time!
Selecting the Next Diffusion - ComfyUI Template
Now it’s time to pick the right Docker image (ComfyUI template) for your pod.
1. Click Change Template.
2. Look for Next Diffusion – ComfyUI (if you cannot find it, click the following link to select the template automatically: Next Diffusion – ComfyUI).
3. Select it.
This template will ensure that ComfyUI and the ComfyUI Manager are properly installed and ready for reuse on your persistent network volume storage.
Final Settings
- GPU Count: Keep this at 1.
- Pricing Type: Choose On-Demand.
  At the time of writing, this costs around $0.69/hour for an RTX 4090 — not bad for high-end performance (see the quick estimate below).
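For a rough sense of what a working session costs at that rate, the arithmetic is just hours times the hourly price. Rates vary by region and change over time, so treat this as a sketch:

```bash
# Estimated cost of a 3-hour RTX 4090 session at the quoted on-demand rate
awk 'BEGIN { hours = 3; rate = 0.69; printf "USD %.2f\n", hours * rate }'
```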
Launch Your Pod
Scroll to the bottom and click Deploy On-Demand to initialize your pod.
You’ll be redirected to the My Pods section where your selected GPU instance will begin spinning up with the ComfyUI template.
5. Verifying the Pod & Installing ComfyUI
Once you've deployed your pod, head over to the Pods section in the sidebar. You’ll see your pod listed — it should show a status like “Initializing” or “Starting.”
Checking Initialization Logs
1. Click to expand your pod’s panel.
   You’ll notice a few logs at the bottom indicating that the environment is being prepared. This includes downloading and setting up the necessary tools from the selected template — nothing to worry about, just let it run.
2. After a short while, click on the Logs tab.
   🎉 This is where the magic happens.
Here, you’ll be able to follow the live setup process. The template automatically installs ComfyUI and the ComfyUI Manager directly onto your attached Network Volume. This may take a few minutes (it took around 10 minutes for me), especially during the first setup.
Once everything is successfully installed, you’ll see a confirmation line in the logs that says “✅ VS Code is available at port 8888” — this means the VS Code Server is ready and running.
Accessing the VS Code Server
To open your development environment:
1. Close the Logs panel.
2. Click the Connect button on your pod.
3. Select HTTP Service → :8888 (VS Code Server).
This will open a browser window with a Visual Studio Code environment, with ComfyUI and ComfyUI Manager already installed.
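As a side note, RunPod’s HTTP proxy URLs follow a predictable pattern, so you can also reach the service directly. The pod ID below is a placeholder; substitute your own from the dashboard:

```bash
# RunPod exposes HTTP services at https://<pod-id>-<port>.proxy.runpod.net
# "abc123xyz" is a placeholder pod ID -- use yours from the Pods page
curl -I https://abc123xyz-8888.proxy.runpod.net   # or open the URL in a browser
```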
✅ Since this is now stored on your Network Volume, any future pods using the same volume will skip the setup process — giving you a faster and ready-to-go environment next time.
Now that ComfyUI and ComfyUI Manager are fully installed and available on your VS Code Server, it's time to launch the interface and start building — let’s dive into that in the next section.
6. Launching ComfyUI from VS Code Server
Once your pod is up and running, and the setup process is complete, you can now interact with your file system using the VS Code Server — accessible via port 8888. Think of this as your personal cloud-based code editor where the entire folder structure lives, including everything related to ComfyUI: models, extensions, custom nodes, and scripts.
📂 This is your home base — from here, you can manage files, tweak settings, and most importantly, start ComfyUI.
Starting ComfyUI with GPU Access
To start ComfyUI, we’ll need to open a terminal within VS Code Server:
1. Click the terminal icon in the top right corner.
2. Set the terminal panel to appear at the bottom for easier access.
Now, in the root directory (you should be inside the /workspace folder), you’ll see a shell script named run_gpu.sh.
To run it, simply type the following command and hit Enter:
```bash
./run_gpu.sh
```
This script will launch ComfyUI with GPU acceleration. Once it finishes initializing, the logs will indicate that the GUI is live on port 8188.
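If you’re curious what a launcher like this typically does, it’s usually a thin wrapper around ComfyUI’s own entry point. Here is a hypothetical sketch; the actual run_gpu.sh ships with the Next Diffusion template and may set different options:

```bash
#!/bin/bash
# Hypothetical sketch of a ComfyUI launcher; the real run_gpu.sh may differ.
cd /workspace/ComfyUI
# --listen 0.0.0.0 makes the UI reachable through RunPod's HTTP proxy,
# and --port 8188 matches the port you'll connect to in the next step.
python main.py --listen 0.0.0.0 --port 8188
```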
Accessing the ComfyUI Web Interface
With ComfyUI now running, go back to the “Pods” section in your RunPod dashboard. Click the “Connect” button on your active pod, then select HTTP Service → :8188.
💡 Important:
Keep the terminal running! This session handles ComfyUI’s backend and will also display real-time logs for debugging or monitoring purposes.
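If you need the terminal back for other work, a standard shell pattern (nothing specific to this template) lets you run ComfyUI in the background and still follow its logs:

```bash
# Optional: run ComfyUI detached so the terminal is freed up
nohup ./run_gpu.sh > /workspace/comfyui.log 2>&1 &
tail -f /workspace/comfyui.log   # Ctrl+C stops tailing, not ComfyUI itself
```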
🎉 That’s it! You now have ComfyUI up and running in your own GPU-powered environment.
Up Next: Running Your First ComfyUI Workflow
In the next section, we’ll walk through downloading the necessary model files and running your first generation using a simple Flux Dev workflow — a great starting point for exploring what ComfyUI can do.
7. Running Your First ComfyUI Workflow
Now that ComfyUI is up and running, it’s time to generate your first image using our preloaded Flux Dev workflow — a great way to get started and understand how things work inside ComfyUI.
Load the Workflow
1. In the ComfyUI interface, click the “Workflows” button from the left sidebar.
2. Look for the flux_dev.json file provided in your setup.
3. Click it to load the workflow — a simple but powerful setup will appear on the canvas.
However, you’ll notice that clicking Run right now triggers an error. That’s because the required model file isn’t downloaded yet.
Download the Flux Dev FP8 Model
To fix that, we’ll need to download the model file manually:
1. Head back to the VS Code Server (port 8888).
2. Navigate to: ComfyUI/models/checkpoints
3. Right-click the checkpoints folder and select “Open in Integrated Terminal.”
4. In the terminal, run the following command to download flux1-dev-fp8.safetensors:
```bash
wget https://huggingface.co/lllyasviel/flux1_dev/resolve/main/flux1-dev-fp8.safetensors
```
This will download the model file directly into the correct directory.
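Checkpoint files are large, so downloads occasionally get interrupted. If that happens, wget’s -c flag resumes from where the partial file left off instead of starting over:

```bash
# Resume an interrupted download (-c) and sanity-check the file size afterwards
wget -c https://huggingface.co/lllyasviel/flux1_dev/resolve/main/flux1-dev-fp8.safetensors
ls -lh flux1-dev-fp8.safetensors
```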
Since it’s saved to your Network Volume, it’ll persist for future pods too — no need to download again. Once the download completes, follow the steps below:
🔄 Refresh ComfyUI
1. Back in the ComfyUI Web Interface, go to the top-left and click “Edit” → “Refresh Node Definitions” (or simply press the R key).
2. The model will now be available in the dropdown of the Load Checkpoint node.
✅ Run the Workflow
With everything set:
1. Select flux1-dev-fp8.safetensors in the checkpoint loader node.
2. Click Run.
🚀 The model will take anywhere from a few seconds to a few minutes to load on the first run (it’s being cached into memory), but future generations will be much faster.
And just like that — your first image, generated using Flux Dev inside your own cloud-hosted ComfyUI environment!
8. Bonus: Monitor Disk Usage in Seconds
As you work with more models, generate outputs, and explore advanced workflows, your storage can quickly start to fill up. To help you stay on top of it, we’ve included a simple yet powerful utility.
Track Your Usage with One Command
Inside your VS Code Server (port 8888), you’ll find a script named: disk_space.sh
It’s located in the root of your workspace (/workspace).
How to Use It
1. Open a terminal inside VS Code Server.
2. Navigate to the root if needed (cd /workspace).
3. Run the script:

```bash
./disk_space.sh
```
This will give you a clear summary of:
- 📦 Total storage used by your container
- 💾 Usage stats for your Network Volume
- 📁 Folder-level breakdown (models, outputs, etc.)
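Under the hood, a report like this boils down to standard df and du calls. Here is a rough equivalent you can run by hand — a sketch; the template’s actual disk_space.sh may format its output differently:

```bash
# Approximate equivalent of a disk-usage summary; the real disk_space.sh may differ
df -h /workspace                                    # overall Network Volume usage
du -h --max-depth=1 /workspace | sort -hr | head    # largest top-level folders
```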
9. Conclusion: You’re All Set!
You’ve successfully spun up a fully cloud-based ComfyUI environment — complete with GPU acceleration, persistent storage, and a modular setup that makes future sessions quick, efficient, and headache-free.
From creating a network volume that safely stores your models and workflows, to deploying a pod with the right GPU and installing ComfyUI with just one click — you now have a streamlined, scalable system for AI image generation in the cloud. You’ve also learned how to run your first workflow, manage your files through VS Code Server, and even monitor your disk usage with a simple command. Whether you're just experimenting or planning to scale up to bigger projects, you're set up with a solid foundation for advanced creative work using ComfyUI.