How to Train a Flux LoRA with FluxGym on RunPod

Table of Contents
- 1. Introduction to FluxGym LoRA Training
- 2. Create and Fund Your RunPod Account
- 3. Deploying the FluxGym Template on RunPod
- 4. Verifying the Pod & Initializing FluxGym
- 5. Launching FluxGym UI from VS Code Server
- 6. Character Concept and Initial LoRA Setup
- 7. Preparing Your Dataset — Quality and Variety Matter
- 8. Uploading & Captioning Your Dataset in FluxGym
- 9. Adjusting Advanced Settings Before Training
- 10. Starting Your Training — Hit “Train” and Let It Run
- 11. Testing Your LoRA in ComfyUI
- 12. Conclusion: Flux LoRA Training
1. Introduction to FluxGym LoRA Training
Creating consistent AI characters is essential for visual storytelling, whether you're developing a game, building a brand, or refining your creative style. One of the most reliable ways to do this is by training a Flux LoRA, a lightweight model fine-tuned to capture your character’s unique features and vibe. To simplify the process, we use FluxGym, a user-friendly training interface made specifically for training LoRAs without needing deep technical skills. The training runs on RunPod, a cloud GPU platform where we’ve set up a ready-to-use template, making it fast and accessible to anyone.
In this tutorial, you'll learn how to launch FluxGym on RunPod, prepare your dataset, train your Flux LoRA and finally test it in ComfyUI using a custom workflow. Everything you need to create consistent AI characters, step by step.
2. Create and Fund Your RunPod Account
To get started with training Flux LoRAs using FluxGym on RunPod, first create a RunPod account. Visit the RunPod website and sign up using your preferred email or social login.
Once your account is active, you’ll need to add funds to deploy GPU pods for training. We recommend an initial deposit of at least $10 to cover your compute and storage needs during the process. You can easily add funds in the Billing section of your RunPod dashboard.
💡 Tip: This initial balance ensures uninterrupted access to GPU resources during your training sessions.
With your account funded, you’re ready to launch a GPU pod and set up FluxGym to begin training your Flux LoRA models.
3. Deploying the FluxGym Template on RunPod
Now that your RunPod account is funded, it’s time to deploy a GPU pod using the Next Diffusion – FluxGym template. This custom template sets up FluxGym quickly and efficiently on your pod, so you can start training your Flux LoRA models without delay. Follow the steps below to get started:
Pod Deployment & GPU Selection
- Head over to the Pods section in the RunPod dashboard.
- Because we are not using persistent storage or network volumes in this tutorial, you can choose either Community Cloud or Secure Cloud from the top navbar. Community Cloud offers quicker access and generally lower costs, while Secure Cloud provides better GPU infrastructure at a higher price. For this guide, we will select Community Cloud to keep costs lower and the setup simpler.
- Choose the GPU you want from the available list. For the best training performance, we recommend using an RTX 4090 or a similarly powerful GPU.
💡 Tip: To save costs during initial setup or testing, feel free to select a lower-tier GPU. However, for serious training workloads, a high-performance GPU (24GB VRAM) will significantly reduce training time and is recommended.
Selecting the Next Diffusion - FluxGym Template
After selecting your preferred GPU, scroll down to the template options. Now it’s time to pick the right Docker Image (FluxGym template) for your pod.
- Click Change Template.
- Look for Next Diffusion – FluxGym (if you cannot find it, please make sure to click the following link to automatically select the template: Next Diffusion – FluxGym).
- Select it.
For the final settings, set GPU Count to 1, and choose On-Demand pricing for flexible, pay-as-you-go usage. At the time of writing, an RTX 4090 On-Demand pod costs around $0.34 per hour on the Community Cloud, offering an excellent balance of performance and price.
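To sanity-check your budget before deploying, here is a quick back-of-the-envelope estimate in Python. The hourly rate and training duration are the figures used in this guide; actual RunPod prices change over time, and the setup buffer is an assumption on our part:

```python
# Rough cost estimate for an On-Demand pod.
# Rates are illustrative; check the RunPod pricing page for current numbers.
hourly_rate = 0.34     # USD/hour for an RTX 4090 on Community Cloud (at time of writing)
training_hours = 1.5   # typical duration for the ~2400-step run in this guide
setup_hours = 0.5      # assumed buffer for model download and environment setup

total = (training_hours + setup_hours) * hourly_rate
print(f"Estimated cost: ${total:.2f}")  # prints "Estimated cost: $0.68"
```

Even with a generous buffer, a full training run fits comfortably inside the recommended $10 deposit.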
Launch Your Pod
Scroll to the bottom and click Deploy On-Demand to initialize your pod.
You’ll be redirected to the My Pods section where your selected GPU instance will begin spinning up with the FluxGym template.
4. Verifying the Pod & Initializing FluxGym
After deploying your pod, head over to the Pods section in the RunPod sidebar. You’ll see your pod listed, usually showing a status like Initializing or Starting.
Checking Initialization Logs
- Click to expand your pod’s panel and watch the live logs as the system pulls and extracts the Next Diffusion – FluxGym Docker image. This process can take a few minutes and should look something like this:
- Once the Docker image finishes downloading and extracting, click on the Logs button to follow the live setup progress. Here, you’ll see detailed messages from the FluxGym template as it configures your training environment. Here’s an example of what the logs look like when everything goes smoothly:
Accessing the VS Code Server
Once the setup is complete and logs show everything went smoothly, it's time to access your development environment.
- Close the Logs panel.
- Click the Connect button on your pod.
- Select HTTP Service → :8888 (VS Code Server).
This will launch a browser-based Visual Studio Code environment, preloaded with the FluxGym file system already set up inside the container. Since FluxGym is baked directly into the Docker image, there's no need for manual installation—everything you need is already in place and ready to go.
Now that you're inside the VS Code environment, it's time to launch the actual FluxGym UI and start exploring your training workspace. Let’s dive into that next. 👇
5. Launching FluxGym UI from VS Code Server
Once your pod is up and running and the setup has completed successfully, it’s time to interact with your file system through the VS Code Server, accessible via port 8888.
Think of this as your personal, cloud-based code editor — a familiar interface where everything related to FluxGym lives. From here, you can browse the full folder structure, access training scripts, check logs, and launch your training interface.
This is your base of operations for training Flux LoRAs — let’s dive in and get FluxGym up and running.
Starting FluxGym with GPU Access
To start FluxGym:
- Click the terminal icon in the top-right corner of the VS Code window.
- The terminal panel will open at the bottom. You should already be inside the /workspace directory.
- In this folder, you'll see a script named run_fluxgym.sh. Run the following command and hit enter:

```sh
./run_fluxgym.sh
```
This script will launch the FluxGym backend with GPU acceleration. It may take a moment to start up — during this time, the system initializes all components and prepares the UI.
Once everything is ready, the terminal will display a message indicating that FluxGym is live on port 7860.
Accessing the FluxGym Web Interface
To open the FluxGym UI:
- Go back to your Pods section in the RunPod dashboard.
- Click Connect on your running pod.
- Select HTTP Service → :7860.
The FluxGym interface will open in a new browser tab — this is where you’ll load datasets, configure LoRA parameters, and start training your models.
💡 Important: Keep the terminal running! This session handles FluxGym’s backend and will also display real-time logs for debugging or monitoring purposes.
🎉 That’s it — you now have FluxGym running on a powerful GPU pod, ready to train your first Flux LoRA.
Before jumping into the actual training, the first crucial step is to clearly define the character you want to bring to life and configure your LoRA’s basic settings. This foundational work will guide the entire process, ensuring your model reflects the vibe, personality, and aesthetic you envision.
6. Character Concept and Initial LoRA Setup
Before jumping into parameters and dataset preparations, take a moment to visualize the character you want to bring to life. Think about the vibe, personality, and aesthetic you’re aiming for. This model will reflect those choices — so it helps to be intentional.
In my case, I already have a clear concept:
A tall, striking woman with icy blue almond-shaped eyes, flawless porcelain skin, a sleek platinum bob, and cherry red lips — a character that feels both timeless and high fashion, like she belongs in a modern cinematic campaign or luxury brand lookbook.
To give this LoRA a unique identity and make it easy to activate later during inference, I’ll use a custom trigger word: ch3rrybl0nde — short, memorable, and thematically spot-on. With that vision locked in, it’s time to move on to the training settings and setup.
Step 1: LoRA Setup
Once your character is defined, the first step in FluxGym is to configure the LoRA’s basic settings.
Head to the Step 1 panel in the UI and fill in the following:
- LoRA Name: ch3rrybl0nde (this must be unique; it’s the name that identifies your project)
- Trigger Word: ch3rrybl0nde (this is what you’ll use in prompts to evoke the trained character)
- Base Model: flux-dev (we’re training for Flux Dev, so we select it here to match the model architecture)
- VRAM: 20G (since we’re running on an RTX 4090 with 24GB VRAM, we select the highest available option to maximize performance)
These fields form the foundation of your LoRA project. With your name, trigger word, model, and VRAM selected, you're now ready to move on to Step 2, where the real magic begins: preparing and uploading your dataset.
Before you go, you’ll notice several other training settings still within Step 1:
- Repeat trains per image
- Max train epochs
- Expected training steps
- Sample image prompts
- Sample image every N steps
- Resize dataset images
Tip: You can safely leave these defaults unchanged for now. We’ll revisit them after your dataset is uploaded and captioned in Step 2.
Next Up: Building Your Dataset
In the next section, we’ll cover how to prepare your image dataset, organize it for training, and use Florence 2 for automated captioning — all essential steps for achieving a strong, style-consistent LoRA.
7. Preparing Your Dataset — Quality and Variety Matter
A well-curated dataset is crucial to training a strong, character-based LoRA. A good starting point is to gather 15 to 20 high-quality images of your character that capture their unique look and personality. To provide the model with enough visual information, aim for a diverse range of shots.
A balanced dataset might include:
- 5 close-ups focusing on different facial expressions and key features like eyes and lips
- 5 upper body shots showcasing posture, styling, and attire
- 5 shots from different angles to highlight the character’s features from various perspectives
- 5 full-body images capturing the overall silhouette and style
Curating Your Dataset for 1024×1024 Training
We’ve chosen to generate most images at 1024×1024 pixels and will resize all dataset images to this resolution. If you include images with different dimensions, make sure the character is centered in the frame so cropping works correctly without cutting off important details.
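The "keep the character centered" advice comes down to simple crop-box arithmetic: the crop takes the largest centered square, then scales it down. Here is a minimal sketch of that math; with Pillow, you could pass the returned box to `Image.crop` before resizing to 1024×1024 (the helper name is ours, not part of FluxGym):

```python
def center_crop_box(width, height):
    """Return (left, top, right, bottom) for the largest centered square crop.

    The square side is the shorter image dimension, so a centered subject
    survives the crop before downscaling to the training resolution.
    """
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)

# A 1536x1024 landscape image: the square is taken from the middle,
# trimming 256 px off each side.
print(center_crop_box(1536, 1024))  # (256, 0, 1280, 1024)
```

If your subject sits near an edge rather than the center, crop that image manually instead of relying on automatic center cropping.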
Below is the dataset I’m currently working with:
Boost Dataset Variety with Flux Kontext Dev
To expand and diversify your dataset efficiently, you can use Flux Kontext Dev — an image-to-image model that lets you input, for example, a close-up shot of your character and generate variations with different angles, expressions, or lighting. This is a powerful way to increase dataset variety without hunting for many original photos or creating new ones from scratch.
Below is an example screenshot of the workflow showing how an initial image can be transformed into a new one with just a simple prompt.
For a step-by-step walkthrough on using Flux Kontext Dev within ComfyUI, check out our dedicated tutorial:
With your dataset now prepared and curated, it’s time to upload it into the FluxGym Web UI and begin the captioning process. Let’s move on to the next step.
8. Uploading & Captioning Your Dataset in FluxGym
In Step 2 of the FluxGym Web UI, upload your prepared dataset by dragging in your images or selecting them manually. Once uploaded, generate initial captions using the “Add AI Caption with Florence-2” button.
Since we defined the trigger word ch3rrybl0nde in Step 1, each caption will automatically start with it. For example:
**Too verbose:** "ch3rrybl0nde a woman with blonde hair wearing a red dress, close-up portrait, looking slightly to the side"
This repeats details already implied by the trigger word.
**Better:** "ch3rrybl0nde, close-up portrait, soft smile, looking slightly to the side, natural light, white background"
This structure gives the model clear visual signals — like pose, mood, framing, and lighting — without redundancy.
After captioning, take a moment to review and clean up each line. Focus on expression, body position, clothing style, setting, and camera angle. Adding specific cues like “waist-up shot,” “looking over shoulder,” or “studio lighting” helps guide the model toward consistent, accurate generations later.
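If you have many captions to clean, part of this review can be scripted. The helper below is a hypothetical sketch (the function name and the list of redundant phrases are our assumptions; extend the list with whatever your trigger word already implies). It normalizes each line so it starts with the trigger word and strips redundant descriptions:

```python
def clean_caption(caption, trigger="ch3rrybl0nde"):
    """Normalize one caption: ensure it starts with the trigger word and
    drop phrases the trigger already implies (list is an assumption)."""
    redundant = ["a woman with blonde hair", "blonde woman"]
    text = caption.strip()
    # remove the trigger if present; it gets re-added in a consistent position
    if text.lower().startswith(trigger):
        text = text[len(trigger):].lstrip(" ,")
    for phrase in redundant:
        text = text.replace(phrase, "").strip(" ,")
    # collapse doubled commas/spaces left behind by the removals
    parts = [p.strip() for p in text.split(",") if p.strip()]
    return ", ".join([trigger] + parts)

print(clean_caption(
    "ch3rrybl0nde a woman with blonde hair wearing a red dress, close-up portrait"
))  # ch3rrybl0nde, wearing a red dress, close-up portrait
```

Always re-read the output by hand afterwards; automated cleanup catches repetition, but only you can judge whether the remaining cues describe the image accurately.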
💡 Clean captions = cleaner style retention, better pose control, and fewer training artifacts.
Revisiting Step 1: Adjust Training Settings
Now that your dataset is captioned and refined, it’s time to return to Step 1 in the FluxGym UI to configure the remaining training settings. Earlier, we skipped some of these — now is the perfect moment to set them up with purpose.
Below are the recommended settings tailored for a character-based LoRA trained on ~15–20 high-quality images. These values balance quality, efficiency, and training stability.
Recommended Training Settings
- Repeat trains per image: 10
  Repeats each image 10 times per epoch — a balanced default for small datasets that ensures strong learning without overfitting too fast.
- Max train epochs: 12
  Enough training cycles to fully learn character details without drifting. Ideal for ~15–25 image datasets.
- Expected training steps: (leave blank or let FluxGym auto-calculate)
  No need to fill this unless you have a very specific target. FluxGym will estimate it based on your dataset and settings.
- Sample image prompts:
  Add 2–3 prompt variations that reflect how you’ll prompt your LoRA later during generation. Here’s what we’re using:
  - ch3rrybl0nde, close-up portrait, soft expression, lace bra, natural window light, shallow depth of field
  - ch3rrybl0nde, waist-up shot, elegant silk blouse with deep neckline, golden hour lighting, vintage interior background
  - ch3rrybl0nde, upper body, strapless beige corset, dramatic studio lighting, neutral gray backdrop
- Sample image every N steps: 400
  FluxGym will generate one sample preview every 400 training steps — useful for visually tracking progress.
- Resize dataset images: 1024
  Makes sure all images match the target resolution of 1024×1024, which is what you’ve used for dataset generation.
📸 Below is an example of how this looks in the FluxGym UI after entering your settings.
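The "expected training steps" value follows directly from the other settings. Assuming the usual kohya-style formula with a train batch size of 1 and the 20-image dataset used in this guide, the arithmetic looks like this:

```python
# Total steps = images x repeats x epochs (assuming batch size 1).
num_images = 20          # dataset size for this guide (~15-20 images)
repeats_per_image = 10   # "Repeat trains per image"
max_epochs = 12          # "Max train epochs"

expected_steps = num_images * repeats_per_image * max_epochs
print(expected_steps)  # 2400
```

That 2400-step total is the figure you will see again when we discuss training time later in this guide.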
9. Adjusting Advanced Settings Before Training
Before hitting the Train button, let’s quickly optimize two important values in the Advanced Settings dropdown (found beneath the main steps in the FluxGym UI). These settings aren't required to change — but if you're aiming for high-quality results, adjusting them can make a big difference.
⚙️ --network_dim (LoRA Rank)
What it does:
This controls the internal dimension (or rank) of the LoRA adapters — in simple terms, how much "space" the model has to learn about your character. A higher rank means more parameters, which helps capture fine detail, nuanced personality, and visual style.
- Default (4): Lightweight and fast, but may underfit complex visuals or subtle traits.
- Our choice (16): ✅ Boosting this to 16 gives your LoRA more expressive power, which is especially useful when you're working with:
  - Fashion-forward, aesthetic-driven characters
  - Subtle changes in lighting, pose, or facial expression
  - Higher visual fidelity during generation

💡 Tradeoff: Slightly longer training time and increased VRAM usage — but completely worth it if you’re on a powerful GPU like the 4090.
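To see where that tradeoff comes from: for each adapted linear layer, LoRA adds a down-projection of shape (d_in × rank) and an up-projection of shape (rank × d_out), so the extra parameter count scales linearly with the rank. The layer dimensions below are illustrative only, not Flux's actual layer sizes:

```python
def lora_params(d_in, d_out, rank):
    """Extra parameters LoRA adds to one linear layer:
    a (d_in x rank) down-projection plus a (rank x d_out) up-projection."""
    return rank * (d_in + d_out)

# Illustrative 3072 -> 3072 layer (assumed size, for scale only):
small = lora_params(3072, 3072, rank=4)   # 24576 extra params
large = lora_params(3072, 3072, rank=16)  # 98304 extra params
print(large // small)  # 4 -> rank 16 costs 4x the parameters of rank 4
```

Four times the parameters per layer is still tiny compared to the base model, which is why the VRAM and time penalty is modest on a 4090.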
⚙️ --learning_rate (Learning Rate)
What it does:
This defines how quickly your model learns. A higher learning rate trains faster, but increases the risk of instability or overfitting. A lower rate slows things down — and that’s a good thing when training with a small but carefully curated dataset.
- Default (8e-4): Fast, but may overshoot important visual details or cause artifacts.
- Our choice (5e-4): ✅ Lowering the learning rate slightly helps the model learn more steadily:
  - Preserves fine-grained visual features (like bone structure, makeup, or texture)
  - Reduces risk of early overfitting on small datasets (15–25 images)
  - Helps generalize better across your dataset’s poses, lighting, and outfits
Summary of Your Adjustments
| Setting | Old Value | New Value | Why It’s Better |
|---|---|---|---|
| --network_dim | 4 | 16 | More expressive power for learning fine details |
| --learning_rate | 8e-4 | 5e-4 | Smoother, more stable learning on small datasets |
With these two tweaks, your training setup is now optimized for high fidelity and consistency — exactly what we want for a stylish, character-focused LoRA. Let’s move on to launching the actual training process.
10. Starting Your Training — Hit “Train” and Let It Run
Now that you’ve fine-tuned your advanced settings, it’s time to start the actual training process.
Simply go to Step 3 in the FluxGym UI and click the “Train” button. This will kick off the training with all your configured settings.
A couple things to keep in mind:
- First-time delay: When you hit “Train” for the very first time, FluxGym will automatically download the base Flux Dev model in the background. This can take a while — especially if your internet is slower or the model is large — so be patient, and don’t worry if there’s no visible progress for a bit.
  In our case, the first time we clicked Train, we encountered a temporary “connection error” after the Flux Dev model had downloaded. Luckily, the model is cached locally after download — so we simply restarted the pod (without terminating it) and training worked perfectly on the second attempt.
- Training time: For this guide, we’re training over 2400 steps, which typically takes about 1.5 hours on a powerful GPU like the RTX 4090. Your actual duration may vary depending on your system and batch size — and that’s okay!
- FluxGym may look idle: Sometimes during the early stages of training, the UI may appear to freeze or make no visible updates. That’s totally normal — it’s just crunching through your data behind the scenes. Be patient and let it cook.
Once training starts, you can monitor progress via preview images and logs in the UI. Relax and let the model learn your character’s unique style!
Once training is underway, you’ll see preview images appearing at your defined interval (we set it to every 400 steps). These are incredibly useful for tracking how well your LoRA is learning — you’ll be surprised how quickly the style begins to emerge.
After Training Finishes:
Once training is complete, your generated .safetensors LoRA files will be saved inside the outputs folder with the name of the LoRA we've used. In our case, that's:
```
/workspace/fluxgym/outputs/ch3rrybl0nde
```
Because we configured the advanced setting to save a LoRA checkpoint every 4 epochs, and trained for 12 total epochs, you’ll now find three .safetensors files waiting for you — each representing a snapshot of the model’s learning progress.
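You can predict those filenames before opening the folder. Assuming the trainer's zero-padded epoch numbering (the same style as the ch3rrybl0nde-000012.safetensors file we load into ComfyUI later), the checkpoint schedule works out like this:

```python
def checkpoint_names(lora_name, total_epochs, save_every):
    """Epoch-numbered .safetensors files expected in the outputs folder,
    assuming zero-padded epoch numbering (e.g. name-000004.safetensors)."""
    return [f"{lora_name}-{epoch:06d}.safetensors"
            for epoch in range(save_every, total_epochs + 1, save_every)]

# 12 epochs, saving every 4 -> checkpoints at epochs 4, 8, and 12
for name in checkpoint_names("ch3rrybl0nde", total_epochs=12, save_every=4):
    print(name)
```

The epoch-4 and epoch-8 snapshots are worth keeping; an earlier checkpoint sometimes generalizes better than the final one.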
You can now take those files and test them directly in ComfyUI to see which one gives the most faithful and beautiful results. Exciting, right? Let's try it out next!
11. Testing Your LoRA in ComfyUI
Now that your training is complete, it’s time to see how your LoRA performs in the wild — by testing it directly in ComfyUI.
1. Load the Flux Dev LoRA Workflow
Start by loading the pre-built Flux Dev FP8 LoRA Workflow:
👉 Download the provided Flux Dev FP8 LoRA Workflow JSON file and drag it into your ComfyUI canvas.
2. Insert Your Trained LoRA
Find the LoRA loader node in the workflow and load your trained .safetensors file. Make sure to place your LoRA file inside the ComfyUI/models/loras folder first.
Example:
ch3rrybl0nde-000012.safetensors
This file is the final checkpoint saved after 12 epochs. You can also try earlier versions like -000004 or -000008 to compare results and choose the best one.
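If ComfyUI runs on the same machine as your training output, a short script can drop each checkpoint into place. This is a sketch under assumed paths (the function name is ours, and your ComfyUI root may live elsewhere; adjust accordingly):

```python
import shutil
from pathlib import Path

def install_lora(src, comfy_root):
    """Copy a trained LoRA checkpoint into ComfyUI's models/loras folder.

    Paths are assumptions; point comfy_root at your own ComfyUI install.
    Creates the loras folder if it does not exist yet.
    """
    dest_dir = Path(comfy_root) / "models" / "loras"
    dest_dir.mkdir(parents=True, exist_ok=True)
    return shutil.copy2(src, dest_dir / Path(src).name)

# Example (assumed locations):
# install_lora("/workspace/fluxgym/outputs/ch3rrybl0nde/ch3rrybl0nde-000012.safetensors",
#              "/workspace/ComfyUI")
```

Copy all three checkpoints this way so you can switch between them in the LoRA loader node without leaving ComfyUI.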
3. Trigger Word & Prompt
Set the input trigger word (e.g., ch3rrybl0nde) and enter a prompt to test your LoRA. While prompts similar to your training data work well, feel free to try different styles or settings. Then hit Run and watch ComfyUI generate your first LoRA-powered image!
Prompt: ch3rrybl0nde, medium shot, wearing a black pinstripe blazer left unbuttoned over a lace bralette, layered gold chains and wide sunglasses, seated on the yacht’s leather bench with legs crossed, staring directly into the lens.
12. Conclusion: Flux LoRA Training
You’ve now completed the full pipeline — from concept to creation — and built a high-quality, stylish LoRA using FluxGym on RunPod. You learned how to fund and deploy your pod, launch the FluxGym UI, and define your character’s identity with a custom trigger word. You prepped a clean, expressive dataset with strong visual variety, captioned it with precision, and tuned advanced training settings to squeeze out every bit of fidelity. From there, you kicked off training, monitored the process, and finally brought your LoRA to life inside ComfyUI using a sleek, prebuilt workflow. Along the way, you’ve seen how every step — from smart captions to curated image variety to careful learning rates — feeds into cleaner, sharper, and more expressive generations.
This workflow isn’t just about making models — it’s about defining your own visual language. And now, with your first character LoRA trained, tested, and generating with control and consistency, you’ve got the foundation to go further. Whether you’re creating iconic fashion-forward muses, stylized avatars, or entire character lineups, the tools are in your hands. So keep refining, keep prompting, and keep building. This is how standout styles are made.