Advanced AI News
Video Generation

An AI Image Generator’s Exposed Database Reveals What People Really Used It For

By Advanced AI Editor · March 31, 2025 · 4 Mins Read


As well as CSAM, Fowler says, there were AI-generated pornographic images of adults in the database plus potential “face-swap” images. Among the files, he observed what appeared to be photographs of real people, which were likely used to create “explicit nude or sexual AI-generated images,” he says. “So they were taking real pictures of people and swapping their faces on there,” he claims of some generated images.

When it was live, the GenNomis website allowed explicit AI adult imagery. Many of the images featured on its homepage and in an AI “models” section included sexualized images of women—some were “photorealistic,” while others were fully AI-generated or in animated styles. It also included an “NSFW” gallery and a “marketplace” where users could share imagery and potentially sell albums of AI-generated photos. The website’s tagline said people could “generate unrestricted” images and videos; a previous version of the site from 2024 said “uncensored images” could be created.

GenNomis’ user policies stated that only “respectful content” is allowed, and that “explicit violence” and hate speech are prohibited. “Child pornography and any other illegal activities are strictly prohibited on GenNomis,” its community guidelines read, adding that accounts posting prohibited content would be terminated. (Over the past decade, researchers, victims’ advocates, journalists, tech companies, and others have largely phased out the phrase “child pornography” in favor of CSAM.)

It is unclear to what extent GenNomis used any moderation tools or systems to prevent or prohibit the creation of AI-generated CSAM. Some users posted to its “community” page last year that they could not generate images of people having sex and that their prompts were blocked for non-sexual “dark humor.” Another account posted on the community page that the “NSFW” content should be addressed, as it “might be looked upon by the feds.”

“If I was able to see those images with nothing more than the URL, that shows me that they’re not taking all the necessary steps to block that content,” Fowler alleges of the database.

Henry Ajder, a deepfake expert and founder of consultancy Latent Space Advisory, says even if the creation of harmful and illegal content was not permitted by the company, the website’s branding—referencing “unrestricted” image creation and a “NSFW” section—indicated there may be a “clear association with intimate content without safety measures.”

Ajder says he is surprised the English-language website was linked to a South Korean entity. Last year the country was plagued by a nonconsensual deepfake “emergency” that targeted girls, before it took measures to combat the wave of deepfake abuse. Ajder says more pressure needs to be put on all parts of the ecosystem that allows nonconsensual imagery to be generated using AI. “The more of this that we see, the more it forces the question onto legislators, onto tech platforms, onto web hosting companies, onto payment providers. All of the people who in some form or another, knowingly or otherwise—mostly unknowingly—are facilitating and enabling this to happen,” he says.

Fowler says the database also exposed files that appeared to include AI prompts. No user data, such as logins or usernames, was included in the exposed data, the researcher says. Screenshots of prompts show the use of words such as “tiny” and “girl,” and references to sexual acts between family members. The prompts also described sexual acts involving celebrities.

“It seems to me that the technology has raced ahead of any of the guidelines or controls,” Fowler says. “From a legal standpoint, we all know that child explicit images are illegal, but that didn’t stop the technology from being able to generate those images.”

As generative AI systems have made it vastly easier to create and modify images over the past two years, there has been an explosion of AI-generated CSAM. “Webpages containing AI-generated child sexual abuse material have more than quadrupled since 2023, and the photorealism of this horrific content has also leapt in sophistication,” says Derek Ray-Hill, the interim CEO of the Internet Watch Foundation (IWF), a UK-based nonprofit that tackles online CSAM.

The IWF has documented how criminals are increasingly creating AI-generated CSAM and developing the methods they use to create it. “It’s currently just too easy for criminals to use AI to generate and distribute sexually explicit content of children at scale and at speed,” Ray-Hill says.
