Advanced AI News
AI Tools & Product Releases

OpenAI peels back ChatGPT’s safeguards around image creation

By Advanced AI Bot | March 28, 2025 | 4 min read


This week, OpenAI launched a new image generator in ChatGPT, which quickly went viral for its ability to create Studio Ghibli-style images. Beyond the pastel illustrations, GPT-4o’s native image generator significantly upgrades ChatGPT’s capabilities, improving picture editing, text rendering, and spatial representation.

However, one of the most notable changes OpenAI made this week involves its content moderation policies, which now allow ChatGPT to, upon request, generate images depicting public figures, hateful symbols, and racial features.

OpenAI previously rejected these types of prompts for being too controversial or harmful. But now, the company has “evolved” its approach, according to a blog post published Thursday by OpenAI’s model behavior lead, Joanne Jang.

“We’re shifting from blanket refusals in sensitive areas to a more precise approach focused on preventing real-world harm,” said Jang. “The goal is to embrace humility: recognizing how much we don’t know, and positioning ourselves to adapt as we learn.”

These adjustments appear to be part of OpenAI’s larger plan to effectively “uncensor” ChatGPT. OpenAI announced in February that it is changing how it trains AI models, with the ultimate goal of letting ChatGPT handle more requests, offer diverse perspectives, and reduce the number of topics the chatbot refuses to engage with.

Under the updated policy, ChatGPT can now generate and modify images of Donald Trump, Elon Musk, and other public figures that OpenAI did not previously allow. Jang says OpenAI doesn’t want to be the arbiter of status, choosing who should and shouldn’t be allowed to be generated by ChatGPT. Instead, the company is giving users an opt-out option if they don’t want ChatGPT depicting them.

In a white paper released Tuesday, OpenAI also said it will allow ChatGPT users to “generate hateful symbols,” such as swastikas, in educational or neutral contexts, as long as they don’t “clearly praise or endorse extremist agendas.”

Moreover, OpenAI is changing how it defines “offensive” content. Jang says ChatGPT used to refuse requests involving physical characteristics, such as “make this person’s eyes look more Asian” or “make this person heavier.” In TechCrunch’s testing, the new image generator fulfilled these types of requests.

Additionally, ChatGPT can now mimic the styles of creative studios — such as Pixar or Studio Ghibli — but still restricts imitating individual living artists’ styles. As TechCrunch previously noted, this could rehash an existing debate around the fair use of copyrighted works in AI training datasets.

It’s worth noting that OpenAI is not completely opening the floodgates to misuse. GPT-4o’s native image generator still refuses a lot of sensitive queries, and in fact, it has more safeguards around generating images of children than DALL-E 3, ChatGPT’s previous AI image generator, according to GPT-4o’s white paper.

But OpenAI is relaxing its guardrails in other areas after years of complaints from conservatives about alleged AI “censorship” by Silicon Valley companies. Google previously faced backlash over Gemini’s AI image generator, which produced historically inaccurate multiracial images for queries such as “U.S. founding fathers” and “German soldiers in WWII.”

Now, the culture war around AI content moderation may be coming to a head. Earlier this month, Republican Congressman Jim Jordan sent questions to OpenAI, Google, and other tech giants about potential collusion with the Biden administration to censor AI-generated content.

In a previous statement to TechCrunch, OpenAI rejected the idea that its content moderation changes were politically motivated. Rather, the company says the shift reflects a “long-held belief in giving users more control,” and OpenAI’s technology is just now getting good enough to navigate sensitive subjects.

Regardless of its motivation, it’s certainly a good time for OpenAI to be changing its content moderation policies, given the potential for regulatory scrutiny under the Trump administration. Silicon Valley giants like Meta and X have also adopted similar policies, allowing more controversial topics on their platforms.

While OpenAI’s new image generator has only created some viral Studio Ghibli memes so far, it’s unclear what the broader effects of these policies will be. ChatGPT’s recent changes may go over well with the Trump administration, but letting an AI chatbot answer sensitive questions could land OpenAI in hot water soon enough.
