Black Forest Labs (BFL), the startup founded by the creators of the popular Stable Diffusion model, has launched a new image generation model called FLUX.1 Kontext. The model not only generates and edits images, but also lets users modify them with both text prompts and reference images.
The company also announced its new BFL Playground, where people can try out BFL’s models.
BFL released two versions of the model: FLUX.1 Kontext [pro] and FLUX.1 Kontext [max]. A third version, FLUX.1 Kontext [dev], will be available in private beta. Both the Pro and Max versions are now available on platforms such as KreaAI, Freepik, Lightricks, OpenArt and LeonardoAI.
FLUX.1 Kontext can perform in-context generation, meaning the model generates from a reference image or situation presented to it rather than from scratch.
The company said in a post on X that four things make Kontext “special”:
Character consistency and preserving elements across scenes
Local editing that “targets specific parts without affecting the rest”
Style reference that generates scenes in existing styles, and
Minimal latency
Developers can test use cases and play with the models on the BFL Playground before accessing the full BFL API.
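As an illustration only — the request shape below is an assumption for this article, not BFL's documented API, and the field names (`prompt`, `input_image`, `output_format`) are hypothetical — an editing request to a Kontext-style endpoint might pair a text instruction with a base64-encoded reference image like this:

```python
import base64
import json

def build_kontext_request(prompt: str, image_path: str) -> str:
    """Build a hypothetical JSON payload pairing a text edit instruction
    with a base64-encoded reference image. Field names are illustrative,
    not taken from BFL's actual API documentation."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    payload = {
        "prompt": prompt,          # the text instruction, e.g. "make the sky purple"
        "input_image": image_b64,  # reference image for in-context editing
        "output_format": "png",
    }
    return json.dumps(payload)
```

The payload would then be POSTed to whatever endpoint the BFL API exposes; developers should consult BFL's official API reference for the real parameter names and authentication scheme.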
The pro and max models
Enterprises can use the Pro version for fast, iterative editing. Users can input both text and reference images and make local edits. The company said Kontext [pro] operates “up to an order of magnitude faster than previous state-of-the-art models” and is one of the first models to allow editing over multiple turns.
FLUX.1 Kontext [max], on the other hand, is the version tuned for maximum performance. The company said it adheres more closely to prompts, renders typography legibly and stays consistent across edits without compromising speed.
Of course, many other image generation models can also generate photos from uploaded files. MidJourney’s AI image editor can use a reference picture and then edit specific regions of it, as can Adobe’s Firefly, which is available to many users of Adobe’s popular image and video platforms.
FLUX.1 Kontext [dev], the third version of the Kontext family of models, is an open-weight model at 12 billion parameters.
Generative flow
BFL said FLUX.1 Kontext is a flow model, which gives it more flexibility to accomplish the tasks mentioned above.
Flow models learn from a continuous flow of data and define a path between noisy data and useful information. This differs from diffusion, the model architecture that underpins many image and video generation models from Stability AI, MidJourney and even OpenAI’s Sora, which “denoises” data.
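The generation step of a flow model can be sketched in a few lines: the model learns a velocity field v(x, t) and produces a sample by integrating the ODE dx/dt = v(x, t) from noise toward data. The toy example below uses simple Euler integration and a hand-written velocity field as a stand-in for a trained network — it illustrates the mechanism, not BFL's actual architecture:

```python
import numpy as np

def euler_flow_sample(velocity_field, x0, steps=100):
    """Transport a starting sample x0 along a velocity field by
    Euler integration of dx/dt = v(x, t) over t in [0, 1]."""
    x = np.asarray(x0, dtype=float)
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + dt * velocity_field(x, t)
    return x

# Toy velocity field pointing from the current state toward a fixed
# "data" point; in a real flow model a neural network predicts this.
target = np.array([1.0, -2.0])
v = lambda x, t: target - x

sample = euler_flow_sample(v, x0=np.zeros(2), steps=200)
```

Diffusion models instead reach the data distribution by iteratively denoising, which is what the “denoises” contrast above refers to.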
BFL said in a blog post that the Kontext models represent an advance in flow models.
“FLUX.1 Kontext models go beyond text-to-image,” the company said. “Unlike previous flow models that only allow for pure text-based generation, FLUX.1 Kontext models also understand and can create from existing images. With FLUX.1 Kontext you can modify an input image via simple text instructions, enabling flexible and instant image editing – no need for finetuning or complex editing workflows.”
On text-to-image benchmark tests, BFL claimed the FLUX.1 Kontext models are competitive with other leading models on aesthetics, prompt following, realism and typography.
Generating interest
BFL released the text-to-image model Flux 1.1 Pro in October last year, along with an API that lets third-party developers integrate it into their apps.
Thanks to the BFL Playground, some users have already begun playing around with the Kontext models and report being impressed.
Of course, it still has to compete with other image models available, especially those that have been around for a few years and have continued to improve.