Advanced AI News
MIT CSAIL

New automated framework brings provable privacy to black-box algorithms

By Advanced AI Editor | April 18, 2025 | 4 min read


Researchers from MIT CSAIL and Purdue University have proposed a new automated framework for privatizing black-box machine learning algorithms using PAC Privacy, a technique that quantifies privacy risk through the hardness of statistical inference. The study, titled PAC-Private Algorithms, was published at the 2025 IEEE Symposium on Security and Privacy (IEEE S&P) and is authored by Mayuri Sridhar, Hanshen Xiao, and Srinivas Devadas.

While empirical defenses against privacy attacks, such as regularization and data augmentation, have gained popularity, they often lack rigorous formal guarantees. This new research offers a way to mechanize provable privacy for a broad spectrum of real-world algorithms, including K-Means, Support Vector Machines (SVM), Principal Component Analysis (PCA), and Random Forests. By integrating novel simulation algorithms, anisotropic noise addition, and algorithmic stabilization, the researchers achieve provable resistance to adversarial inference with minimal utility loss.

How can we formally prove privacy for complex, black-box algorithms?

The central innovation in this study lies in translating the concept of Probably Approximately Correct (PAC) Privacy, previously a theoretical construct, into a practical mechanism for real-world algorithms. Unlike Differential Privacy (DP), which requires bounding worst-case sensitivity and can involve algorithmic restructuring, PAC Privacy allows privacy to be quantified via simulation on black-box models.
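In rough terms (a paraphrase of ours, not the paper's exact theorem), PAC Privacy certifies a mutual-information bound between the sensitive data and the mechanism's output, which in turn caps how far an adversary's posterior inference can improve over its prior:

```latex
% X: sensitive data, M: privatized mechanism, beta: privacy budget.
% The implication follows from Pinsker-style inequalities; the exact
% constants depend on the formulation used in the paper.
\[
  I\bigl(X;\,\mathcal{M}(X)\bigr) \le \beta
  \quad\Longrightarrow\quad
  \delta_{\mathrm{posterior}} - \delta_{\mathrm{prior}} \;\lesssim\; \sqrt{\beta/2}
\]
```

The mutual-information bounds quoted below (for example, 1/128) play the role of β, and simulation is what certifies the left-hand side without opening the black box.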

The authors introduce a new algorithm for determining the minimal anisotropic Gaussian noise required to satisfy mutual information bounds, thus ensuring that the adversary’s posterior inference advantage is provably constrained. Crucially, this approach avoids computationally expensive covariance matrix estimation and Singular Value Decomposition (SVD), which were bottlenecks in previous implementations.

The framework operates by assessing the stability of a given algorithm on subsampled datasets. By examining the variance in outputs across these subsets, the framework calibrates the noise needed to obfuscate sensitive information. The methodology enables privatization of algorithms without requiring knowledge of their internal mechanisms, offering a rare, provably secure solution for complex, black-box systems.
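As a rough illustration of that pipeline, the sketch below privatizes a black-box algorithm by probing its output variability on random subsamples and adding per-coordinate Gaussian noise. This is a minimal sketch under simplifying assumptions of our own (in particular, a toy per-coordinate noise rule derived from the Gaussian-channel bound), not the paper's exact anisotropic allocation algorithm:

```python
import numpy as np

def pac_privatize(algorithm, dataset, n_trials=500, subsample_frac=0.5,
                  mi_bound=1 / 128, rng=None):
    """Simulation-based privatization of a black-box algorithm (sketch).

    `algorithm` maps a 2-D numpy dataset to a fixed-length output vector
    (e.g., flattened, canonicalized K-Means centroids). The noise rule
    below is a simplified stand-in for the paper's anisotropic allocation.
    """
    rng = rng or np.random.default_rng(0)
    n = len(dataset)

    # 1. Probe stability: run the algorithm on many random subsamples.
    outputs = np.array([
        algorithm(dataset[rng.choice(n, int(subsample_frac * n), replace=False)])
        for _ in range(n_trials)
    ])

    # 2. Calibrate anisotropic noise from per-coordinate output deviation.
    #    For an additive Gaussian channel, I(S; S+N) <= var(S) / (2 var(N)),
    #    so sigma_noise = sigma_signal / sqrt(2 * mi_bound) keeps each
    #    coordinate's leakage below mi_bound (toy accounting; a faithful
    #    implementation must budget jointly across coordinates).
    sigma = outputs.std(axis=0) / np.sqrt(2 * mi_bound)

    # 3. Release a single noised run on the full dataset.
    return algorithm(dataset) + rng.normal(scale=sigma)
```

Note how unstable output coordinates receive proportionally more noise; stable coordinates get away with almost none, which is exactly the anisotropy the paper exploits.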

Can meaningful utility be preserved under strict privacy constraints?

One of the longstanding challenges in privacy-preserving computation is balancing utility with protection. The authors dissect this tension by categorizing algorithmic instability into intrinsic and superficial classes. Intrinsic instability reflects core volatility in algorithm behavior, while superficial instability can be smoothed through canonicalization techniques such as fixing cluster label orderings or aligning principal component orientations in PCA.
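The two superficial fixes named above are easy to picture in code. The sketch below is our illustration (the paper's canonical forms may differ in detail): it removes label-permutation ambiguity from K-Means centroids and sign ambiguity from PCA components, so that equivalent outputs always serialize identically:

```python
import numpy as np

def canonicalize_kmeans(centroids):
    """Order centroids lexicographically by their coordinates, so that
    relabeling clusters can no longer change the serialized output."""
    order = np.lexsort(centroids.T[::-1])  # first coordinate is primary key
    return centroids[order]

def canonicalize_pca(components):
    """Flip each principal component so its largest-magnitude entry is
    positive, removing the +/- sign ambiguity of eigenvectors."""
    lead = np.abs(components).argmax(axis=1)
    signs = np.sign(components[np.arange(len(components)), lead])
    return components * signs[:, None]
```

Either function leaves the algorithm's substance untouched; it only collapses spurious output variance, which the simulation step would otherwise mistake for instability and pay for with extra noise.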

By reducing superficial instability and leveraging regularization to mitigate intrinsic instability, the team was able to enhance both privacy and performance. This creates what the authors call a “win-win” scenario: more stable algorithms not only generalize better in the traditional machine learning sense but also require less noise for privatization—directly improving utility.

Experiments across multiple datasets, including Iris, Rice, Dry Bean, and CIFAR-10, demonstrate strong empirical performance: PAC-private algorithms often achieved utility comparable or superior to that of differentially private counterparts. In particular, anisotropic noise led to consistently better outcomes than isotropic alternatives, reinforcing the value of directional, variance-based noise calibration.

For instance, in the K-Means clustering task, adding anisotropic noise preserved accuracy within a few percentage points of the non-private baseline for mutual information bounds as tight as 1/128. In PCA, restoration error remained under 5% for the Rice dataset even under stringent privacy budgets, while CIFAR-10 required fewer dimensions to remain stable under privatization.

Are PAC-private algorithms resilient against real-world attacks?

The robustness of the proposed framework was tested against state-of-the-art membership inference attacks, particularly the Likelihood Ratio Attack (LIRA). Even under adversarial assumptions where attackers have full knowledge of the data distribution and privatization mechanism, PAC-private models demonstrated strong resistance.
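For context, LIRA's core is a per-example likelihood-ratio test over losses from shadow models. The sketch below is a standard rendering of the attack's Gaussian variant (not code from the paper under discussion):

```python
import numpy as np
from scipy.stats import norm

def lira_score(target_loss, in_losses, out_losses):
    """Membership score for one example under the Likelihood Ratio Attack.

    `in_losses` / `out_losses` are the example's losses under shadow models
    trained with and without it. A large positive score suggests the example
    was in the training set of the model under attack.
    """
    log_p_in = norm.logpdf(target_loss, loc=in_losses.mean(),
                           scale=in_losses.std() + 1e-12)
    log_p_out = norm.logpdf(target_loss, loc=out_losses.mean(),
                            scale=out_losses.std() + 1e-12)
    return log_p_in - log_p_out
```

A PAC-private mechanism defends against this by making the output distributions with and without any one example nearly indistinguishable, which is what the advantage reductions reported below measure.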

For example, in the Iris dataset, the privatized K-Means algorithm reduced empirical posterior inference advantage from nearly 10% to just 2% at a mutual information bound of 1/128. SVMs and Random Forests showed similar reductions, especially when regularization techniques were employed to boost model stability.

The framework’s versatility is further emphasized by its applicability across a diverse set of algorithms. Random Forests, typically difficult to privatize due to high sensitivity and nonlinearity, were successfully adapted using a combination of structured feature splits and entropy-based regularization. Even without pruning or altering tree structures, the framework maintained model comparability and ensured privatized outputs stayed semantically meaningful.

A turning point for practical provable privacy

This research marks a significant advancement in privacy-preserving machine learning. The proposed framework provides a template for automated, provable privatization applicable to virtually any algorithm, sidestepping the need for complex white-box modifications and extending rigorous guarantees to black-box systems.

The findings have far-reaching implications. With increasing data regulation and public concern over algorithmic privacy, PAC Privacy offers a pathway to build trustworthy systems that are both usable and secure. The authors also note the potential for future work in breaking down algorithms like Stochastic Gradient Descent into phases that can each be privatized using this model, paving the way for scalable, black-box privacy guarantees even in deep learning.


