MIT CSAIL unveils PhotoGuard, an AI defense against unauthorized image manipulation

By Advanced AI Editor | July 30, 2025

In recent years, large diffusion models such as DALL-E 2 and Stable Diffusion have gained recognition for their ability to generate high-quality, photorealistic images and to perform a wide range of image synthesis and editing tasks.

But concerns are arising about the potential misuse of user-friendly generative AI models, which can enable the creation of inappropriate or harmful digital content. For example, malicious actors might exploit publicly shared photos of individuals by using an off-the-shelf diffusion model to edit them with harmful intent.

To tackle the mounting challenges surrounding unauthorized image manipulation, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have introduced “PhotoGuard,” an AI tool designed to combat advanced gen AI models like DALL-E and Midjourney.

Fortifying images before uploading

In the research paper “Raising the Cost of Malicious AI-Powered Image Editing,” the researchers explain that PhotoGuard works by adding imperceptible “perturbations” (tiny disturbances or irregularities) to pixel values — changes invisible to the human eye but detectable by computer models.

“Our tool aims to ‘fortify’ images before uploading to the internet, ensuring resistance against AI-powered manipulation attempts,” Hadi Salman, MIT CSAIL doctoral student and paper lead author, told VentureBeat. “In our proof-of-concept paper, we focus on manipulation using the most popular class of AI models currently employed for image alteration. This resilience is established by incorporating subtly crafted, imperceptible perturbations to the pixels of the image to be protected. These perturbations are crafted to disrupt the functioning of the AI model driving the attempted manipulation.”

According to the MIT CSAIL researchers, PhotoGuard employs two distinct “attack” methods to create these perturbations: encoder and diffusion.

The “encoder” attack focuses on the image’s latent representation within the AI model, causing the model to perceive the image as random and rendering manipulation nearly impossible. The “diffusion” attack, by contrast, is a more sophisticated approach that involves determining a target image and optimizing perturbations so the generated image closely resembles that target.

Adversarial perturbations

Salman explained that the key mechanism underlying PhotoGuard is ‘adversarial perturbations.’

“Such perturbations are imperceptible modifications of the pixels of the image that have proven to be exceptionally effective in manipulating the behavior of machine learning models,” he said. “PhotoGuard uses these perturbations to manipulate the AI model processing the protected image into producing unrealistic or nonsensical edits.”
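Two properties are at work in that description: the perturbation must stay below human perception while still moving the model’s internal representation. Below is a minimal sketch of how the imperceptibility side is typically enforced in adversarial-example work — an L∞ budget with projection back into a valid image. The PyTorch usage and the ε value are illustrative assumptions, not details taken from the paper.

```python
import torch

def project_linf(x_adv: torch.Tensor, x_orig: torch.Tensor,
                 eps: float = 8 / 255) -> torch.Tensor:
    """Project a perturbed image back into an L-infinity ball around the original.

    Bounding every pixel's change by +/-eps (and keeping values in [0, 1]) is
    the standard way adversarial perturbations are kept invisible to humans
    while remaining detectable by machine learning models.
    """
    delta = torch.clamp(x_adv - x_orig, -eps, eps)  # cap each pixel's change
    return torch.clamp(x_orig + delta, 0.0, 1.0)    # remain a valid image
```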

A team of MIT CSAIL graduate students and lead authors — including Alaa Khaddaj, Guillaume Leclerc and Andrew Ilyas — contributed to the research paper alongside Salman.

The work was also presented at the International Conference on Machine Learning in July and was partially supported by National Science Foundation grants, Open Philanthropy and the Defense Advanced Research Projects Agency.

Using AI as a defense against AI-based image manipulation

Salman said that although AI-powered generative models such as DALL-E and Midjourney have gained prominence due to their capability to create hyper-realistic images from simple text descriptions, the growing risks of misuse have also become evident. 

These models enable users to generate highly detailed and realistic images, opening up possibilities for innocent and malicious applications.

Salman warned that fraudulent image manipulation can influence market trends and public sentiment in addition to posing risks to personal images. Inappropriately altered pictures can be exploited for blackmail, leading to substantial financial implications on a larger scale.

Although watermarking has shown promise as a solution, Salman emphasized that a preemptive measure to proactively prevent misuse remains critical.

“At a high level, one can think of this approach as an ‘immunization’ that lowers the risk of these images being maliciously manipulated using AI — one that can be considered a complementary strategy to detection or watermarking techniques,” Salman explained. “Importantly, the latter techniques are designed to identify falsified images once they have already been created. However, PhotoGuard aims to prevent such alteration to begin with.”

Changes imperceptible to humans

PhotoGuard alters selected pixels in an image to disrupt the AI’s ability to comprehend the image, he explained.

AI models perceive images as complex mathematical data points representing each pixel’s color and position. By introducing imperceptible changes to this mathematical representation, PhotoGuard ensures the image remains visually unaltered to human observers while protecting it from unauthorized manipulation by AI models.

The “encoder” attack method introduces these perturbations by targeting the AI model’s latent representation of the input image — the complex mathematical description of every pixel’s position and color. As a result, the AI is essentially prevented from understanding the content.
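A hedged sketch of how an encoder attack of this kind is commonly implemented follows: projected gradient steps nudge the encoder’s latent for the protected image toward an uninformative target, so downstream editing has nothing meaningful to work from. The encoder interface, the zero-latent target, and the step sizes and iteration count are illustrative assumptions rather than the paper’s exact recipe.

```python
import torch

def encoder_attack(encoder, x: torch.Tensor, eps: float = 8 / 255,
                   step: float = 1 / 255, iters: int = 100) -> torch.Tensor:
    """Craft an imperceptible perturbation that scrambles an image's latent.

    `encoder` maps an image batch to its latent representation (e.g. the VAE
    encoder of a latent diffusion model). The loop pushes the protected
    image's latent toward an uninformative target while keeping pixel
    changes inside an L-infinity budget.
    """
    target = torch.zeros_like(encoder(x))             # "blank" latent as target
    x_adv = x.clone()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.mse_loss(encoder(x_adv), target)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - step * grad.sign()        # move latent toward target
            delta = torch.clamp(x_adv - x, -eps, eps) # imperceptibility budget
            x_adv = torch.clamp(x + delta, 0.0, 1.0)  # stay a valid image
    return x_adv.detach()
```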

On the other hand, the more advanced and computationally intensive “diffusion” attack makes an image appear as something different in the eyes of the AI. It identifies a target image and optimizes the perturbations so the protected image resembles that target. Consequently, any edits the AI attempts to apply to these “immunized” images are instead effectively applied to the decoy “target” image, generating unrealistic-looking results.

“It aims to deceive the entire editing process, ensuring that the final edit diverges significantly from the intended outcome,” said Salman. “By exploiting the diffusion model’s behavior, this attack leads to edits that may be markedly different and potentially nonsensical compared to the user’s intended changes.”
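The diffusion attack can be sketched the same way, except gradients now flow through the (truncated) editing pipeline itself rather than just the encoder — which is what makes it so much more expensive. In the sketch below, `edit_fn` is a hypothetical stand-in for a differentiable few-step diffusion edit; its interface and the optimization settings are assumptions made for illustration.

```python
import torch

def diffusion_attack(edit_fn, x: torch.Tensor, x_target: torch.Tensor,
                     eps: float = 8 / 255, step: float = 1 / 255,
                     iters: int = 50) -> torch.Tensor:
    """Optimize a perturbation so AI edits of the image collapse toward a decoy.

    `edit_fn` stands in for a differentiable image-editing pipeline (a
    diffusion model run for a reduced number of denoising steps). After
    immunization, edits applied to the image should resemble `x_target`
    (the decoy) instead of the edit the attacker intended.
    """
    x_adv = x.clone()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.mse_loss(edit_fn(x_adv), x_target)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - step * grad.sign()        # steer edit output to decoy
            delta = torch.clamp(x_adv - x, -eps, eps) # imperceptibility budget
            x_adv = torch.clamp(x + delta, 0.0, 1.0)
    return x_adv.detach()
```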

Simplifying the diffusion attack with fewer steps

The MIT CSAIL research team discovered that simplifying the diffusion attack with fewer steps enhances its practicality, even though it remains computationally intensive. Furthermore, the team said it is integrating additional robust perturbations to bolster the AI model’s protection against common image manipulations.

Although the researchers acknowledge PhotoGuard’s promise, they caution that it is not a foolproof solution. Malicious individuals could attempt to reverse-engineer the protective measures by applying noise, cropping or rotating the image.

As a research proof-of-concept demo, the AI model is not currently ready for deployment, and the research team advises against using it to immunize photos at this stage.

“Making PhotoGuard a fully effective and robust tool would require developing versions of our AI model tailored to specific gen AI models that are present now and would emerge in the future,” said Salman. “That, of course, would require the cooperation of developers of these models, and securing such a broad cooperation might require some policy action.”
