FLUX.1 Kontext [dev] is a 12-billion-parameter image editing model that transforms visuals based on natural language instructions. It allows highly consistent, multi-step edits and is released with open weights under a non-commercial license to empower artists and researchers.
Generation parameters:
image: input image to edit (upload an image file)
prompt: text prompt describing the edit
num_images: number of images to generate (Default: 1, 1 ≤ num_images ≤ 4)
num_inference_steps: number of denoising steps (Default: 25, 1 ≤ num_inference_steps ≤ 50)
guidance_scale: classifier-free guidance; higher values follow the prompt more closely (Default: 2.5, 0 ≤ guidance_scale ≤ 20)
seed: random seed; empty means random (Default: empty, 0 ≤ seed < 4294967296)
width: image width in px (Default: 1024, 128 ≤ width ≤ 1920)
height: image height in px (Default: 1024, 128 ≤ height ≤ 1920)
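As an illustration, these parameters map onto the diffusers pipeline shown later in this card roughly as follows. This is only a sketch: the prompt and seed values are arbitrary, and num_images and seed correspond to the num_images_per_prompt and generator arguments in diffusers.
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("cuda")

input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")

images = pipe(
    image=input_image,
    prompt="Add a hat to the cat",    # text prompt
    num_images_per_prompt=1,          # num images
    num_inference_steps=25,           # denoising steps
    guidance_scale=2.5,               # classifier-free guidance
    width=1024,                       # image width in px
    height=1024,                      # image height in px
    generator=torch.Generator("cuda").manual_seed(42),  # seed (arbitrary)
).images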
FLUX.1 Kontext [dev]
is a 12 billion parameter rectified flow transformer capable of editing images based on text instructions.
For more information, please read our blog post and our technical report. You can find information about the [pro] version here.
FLUX.1 Kontext [dev] was trained using guidance distillation, making it more efficient.
We provide a reference implementation of FLUX.1 Kontext [dev], as well as sampling code, in a dedicated GitHub repository.
Developers and creatives looking to build on top of FLUX.1 Kontext [dev]
are encouraged to use this as a starting point.
FLUX.1 Kontext [dev]
is also available in both ComfyUI and Diffusers.
The FLUX.1 Kontext models are also available via API from several providers.
# Install diffusers from the main branch until future stable release
pip install git+https://github.com/huggingface/diffusers.git
Image editing:
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

# Load the pipeline in bfloat16 and move it to the GPU
pipe = FluxKontextPipeline.from_pretrained("black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Load the input image to edit
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")

# Apply the edit described by the text prompt
image = pipe(
    image=input_image,
    prompt="Add a hat to the cat",
    guidance_scale=2.5
).images[0]
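Because Kontext supports consistent multi-step editing, the output of one call can be fed back in as the input image for the next. A minimal sketch, reusing pipe and image from the snippet above (the follow-up prompt and the filename are illustrative):
# Chain a second edit on top of the previous result
image2 = pipe(
    image=image,                        # output of the first edit
    prompt="Now make it nighttime",     # illustrative follow-up prompt
    guidance_scale=2.5
).images[0]

image2.save("cat_with_hat_night.png")   # arbitrary output filename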
Flux Kontext comes with an integrity checker, which should be run after the image generation step. To run the safety checker, install the official repository from black-forest-labs/flux and add the following code:
import torch
import numpy as np
from flux.content_filters import PixtralContentFilter

# Load the integrity checker on the GPU
integrity_checker = PixtralContentFilter(torch.device("cuda"))

# Convert the PIL image to a (1, C, H, W) tensor scaled to [-1, 1]
image_ = np.array(image) / 255.0
image_ = 2 * image_ - 1
image_ = torch.from_numpy(image_).to("cuda", dtype=torch.float32).unsqueeze(0).permute(0, 3, 1, 2)

if integrity_checker.test_image(image_):
    raise ValueError("Your image has been flagged. Choose another prompt/image or try again.")
For VRAM-saving measures and speed-ups, check out the diffusers docs.
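One common option there is model CPU offloading, which keeps only the currently active submodule on the GPU at the cost of some speed. A minimal sketch, assuming the same pipeline as above (call this instead of pipe.to("cuda")):
import torch
from diffusers import FluxKontextPipeline

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
)
# Offload submodules to the CPU and move each to the GPU only while it runs,
# lowering peak VRAM usage in exchange for slower generation.
pipe.enable_model_cpu_offload()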
Black Forest Labs is committed to the responsible development of generative AI technology. Prior to releasing FLUX.1 Kontext, we evaluated and mitigated a number of risks in our models and services, including the generation of unlawful content. We implemented a series of pre-release mitigations to help prevent misuse by third parties, with additional post-release mitigations to help address residual risks.
This model falls under the FLUX.1 Non-Commercial License.
@misc{labs2025flux1kontextflowmatching,
title={FLUX.1 Kontext: Flow Matching for In-Context Image Generation and Editing in Latent Space},
author={Black Forest Labs and Stephen Batifol and Andreas Blattmann and Frederic Boesel and Saksham Consul and Cyril Diagne and Tim Dockhorn and Jack English and Zion English and Patrick Esser and Sumith Kulal and Kyle Lacey and Yam Levi and Cheng Li and Dominik Lorenz and Jonas Müller and Dustin Podell and Robin Rombach and Harry Saini and Axel Sauer and Luke Smith},
year={2025},
eprint={2506.15742},
archivePrefix={arXiv},
primaryClass={cs.GR},
url={https://arxiv.org/abs/2506.15742},
}