
Pros
Fast generation time
Great editing tools
Very creative
Cons
Complicated availability
Overly realistic product images
If you’ve heard of AI image generation, you’ve probably heard of Stable Diffusion. The name now covers a whole family of AI creative models, but the original Stable Diffusion model was released in 2022 as the result of a collaboration between researchers from Stability AI, Runway and Ludwig Maximilian University of Munich, with support from European AI research and data nonprofits. It quickly found a loyal fanbase of AI enthusiasts who compared it to its main competitor at the time, Midjourney. In the years since its initial launch, tech giants including OpenAI, Adobe and Canva have all released popular AI image models of their own.
But Stable Diffusion models have one key difference from all the others: They’re open source.
Open-source AI models let anyone take a peek behind the scenes at how they work and adapt them to their own purposes. That means there are a lot of different ways to use Stable Diffusion models. I’m not a coding wizard, so I opted not to license or download the models to run locally on my computer. A quick Google search brought up a lot of websites that host SD models, but I wanted the true Stable Diffusion experience. That led me to DreamStudio and Stable Assistant, two freemium web apps from Stability AI that let you easily create AI images, and I used both. Ultimately, I preferred Stable Assistant, but my experience with both programs showed me why Stable Diffusion models have stayed a household name, even as the people behind the models have had a rocky path.
The images I created with Stability AI were creative and detailed. Where the company shines is in its editing capabilities. Stable Assistant has the most comprehensive, hands-on editing suite of any AI image generator I’ve tested, without the overwhelming, overly detailed nature of a Photoshop-like professional program. The Stable Image Ultra model is artistically capable, like Midjourney and Leonardo.Ai. If you’re trying to decide between the three competitors, it’s probably going to come down to cost and potential commercialization requirements.
Stable Assistant is great for people who need to produce a lot of AI imagery quickly and for amateur creators looking to level up their skills and refine their design ideas. DreamStudio will remind you of a more traditional AI image generator, great for budget-conscious, occasional AI creators. For professional creators, Stable Diffusion models are capable, but businesses will need to worry about licensing requirements.
Here’s how the newest Stable Diffusion model, Stable Image Ultra, held up in my tests, including how well it matched my prompts, response speed and creativity.
How CNET tests AI image generators
CNET takes a practical approach to reviewing AI image generators. Our goal is to determine how good each one is relative to the competition and which purposes it serves best. To do that, we give the AI prompts based on real-world use cases, such as rendering in a particular style, combining elements into a single image and handling lengthier descriptions. We score the image generators on a 10-point scale that considers factors such as how well images match prompts, creativity of results and response speed. See how we test AI for more.
The easiest way to access Stable Diffusion models is through Stability AI’s Stable Assistant and DreamStudio. After a free three-day trial, there are four subscription options for Stable Assistant: Standard ($9 a month for 900 credits), Pro ($19 a month for 1,900 credits), Plus ($49 a month for 5,500 credits) and Premium ($99 a month for 12,000 credits). I used the lowest tier, and after generating 75 images, I still had about 418 credits left. You also get access to Stability’s AI video, 3D model and audio models with these plans.
You can also access Stable Diffusion models using DreamStudio. You can initially play around with 100 free credits, then you’ll need to upgrade. You can get the basic plan for $12 a month (1,200 credits) or the plus plan for $29 a month (2,900 credits).
Stability AI can use the information and files you provide in your prompts (inputs) and the results it generates (outputs) to train its AI, as outlined in its terms of service and privacy policy. You can opt out in Stable Assistant by going to Profile > Settings > Disable training and history. In DreamStudio, go to Settings > User preferences > Training: Improve model for everyone and toggle that off. You can learn more about opting out in Stability’s privacy center.
How good are the images, and how well do they match prompts?
Stability was able to create a variety of images in many different styles. I created dramatic fantasy scenes, cute cartoon dinosaurs and photorealistic forest landscapes, all of which the program handled well. It reminded me a lot of the quality of other art-centric AI programs like Midjourney and Leonardo.Ai — finely detailed and creative. It also had decent prompt adherence, meaning it generally produced the images I asked for.
This is one of my favorite Stability AI images. My prompt was inspired by the song Doomsday by Lizzy McAlpine.
Like a lot of AI image generators, Stability struggles with coherent text generation. Even telling Stable Assistant exactly which words I wanted on the image didn’t get them to populate correctly every time. DreamStudio was better, but the text was still childlike and didn’t match the images’ aesthetic.
Stability also produced some of the most convincing AI images of products I’ve seen, second only to OpenAI. I asked Stability to create stock imagery for an iPhone, a pair of Ray-Ban sunglasses and a Hydro Flask water bottle, and the results were surprisingly realistic.
If you don’t look too closely, these all look like they could be on each retailer’s website.
Requests for brand names, logos and celebrities’ likenesses are typically shot down by AI image generators since they’re protected content or sometimes go against a company’s AI usage guidelines. I asked the chatbot if it was allowed to create brand names and logos. It replied: “I can create images that resemble well-known products and logos, but I cannot create exact replicas of copyrighted or trademarked materials.”
I was surprised not just that my prompts with brand names went through, but that the results were so good. One reason it may be able to produce these results is its training data and processes. Like the majority of AI companies, Stability doesn’t make its training datasets public. Stability is currently being sued in a class action lawsuit in which artists allege the company is infringing on their copyrighted work. Getty Images is also suing Stability, alleging that the company used 12 million photos from its collection without permission or payment. I strongly advise you not to create AI images that could potentially infringe on copyrighted material or replicate a real person’s likeness.
How engaging are the images?
The images were engaging and often vivid and colorful. The upscaling tool was helpful for refining small details and making images more engaging. Images made with Stable Assistant and DreamStudio aren’t watermarked, so make sure you disclose their AI origins when you share them.
Can you fine-tune results?
The best part of using Stability is its many editing tools. Its chatbot Stable Assistant has the most editing controls of any AI creative program I’ve tested, which is saying something. All the usual suspects are present in Stable Assistant and DreamStudio, including the ability to add, remove and replace objects and the image’s background. You also get two ways to upscale to higher resolutions, which is great. But where Stable Assistant goes above and beyond is its additional editing toolkit, which lets you recolor specific objects, create similar variations based on your image’s structure or style, and apply a new style altogether.
I used the search and recolor tool to create different variations of iris and eyeliner color from the same base image (left).
You can also just send follow-up editing requests in a regular message, as you can with OpenAI’s conversational image generators. And you can use your AI image as the base for a new AI video or 3D model, a nice perk that’s icing on the cake.
Speaking of icing, it’s worth noting that Stable Assistant’s chat-to-edit function was hit-or-miss. This doesn’t matter as much with other tools available to help tweak your images, but this example of a vanilla-and-chocolate cake illustrates how it can mess up.
Stability and I have different definitions of what constitutes icing.
I always encourage people to use style references when they have the chance, and Stability’s was decent.
You can see how Stable Assistant maintained the color scheme and general vibe of my original photo (left) when I asked for a new image of a couple on a lake (right).
But if you’re looking to AI-ify an image or use AI to change the style of an existing image, you’re out of luck.
All I wanted was a cartoon version of this guacamole snap I took. Instead, Stability gave me a new version of my previous prompt asking for a forest. Why it made the deer out of tortilla chips, I don’t know.
With so many editing tools, I was initially worried about a quantity-over-quality issue. I got every tool to work at some point, but there were times when the features lacked the specificity and fine-grained control I would expect from a more professional program. As with any AI service, the best way to take advantage of the many editing tools on offer is to spend some time with all of them. There’s a learning curve in figuring out which tools work best in which scenarios. For me, playing around with Stability’s editing tools was the best part of my reviewing process.
How fast do images arrive?
Stability was relatively quick, popping out images in 30 to 60 seconds. Stable Assistant only generates one image per prompt, which definitely helps speed things up. DreamStudio lets you generate up to four images at a time. I prefer when AI image generators give me multiple variations, so DreamStudio was great for that.
Dramatic ballerinas are one of my favorite tests for AI image generators, and Stability succeeded.
I’m impressed with Stable Diffusion. But I still have concerns
Overall, I was impressed with the creativity, detail and speed of the AI images Stability produced. That said, Stability’s raw AI images weren’t immune to the hallucinations and errors that plague AI images, and there are definitely things I wouldn’t use Stability for, like text-heavy imagery. But the sign of a great AI image generator is whether the program offers you tools to fix those mistakes. This is where Stability shines, especially in Stable Assistant, whose editing suite clearly outpaces the competition.
But I’m not without concerns. First, it was ridiculously confusing to figure out the best way to use the Stable Diffusion models, whether through Stable Assistant, DreamStudio or third-party platforms. A lot of the user interface settings I wanted in Stable Assistant were available in DreamStudio (like a main library and the ability to select which AI model you want to use), but DreamStudio doesn’t have all of the editing tools I enjoyed in Stable Assistant. I’m also concerned that the most recent Stable Diffusion model underlying both programs, Stable Image Ultra, is a little too good at recognizing and replicating brand-name characters, logos and products.
In the future, I would love to see Stability AI more clearly explain the differences between Stable Assistant and DreamStudio. I also think future model updates could learn a thing or two from OpenAI about legible text generation in AI images. These changes would take the frustration out of using what is ultimately a capable AI creative system.