Hey you, AI algorithm! Explain yourself!

By Advanced AI Editor · July 21, 2025 · 5 Mins Read


Imagine you decide to sell your home.

You contact a realtor and ask them to value your house. However, if the realtor doesn’t offer any insight into how they arrive at their estimate—for example, what factors about the home they considered and which factors they felt were most important—then you’re unlikely to have confidence in that estimate. You also won’t know how you might increase the value of your home.

TRAINING MACHINE LEARNING ALGORITHMS TO BE USEFUL

We have a similar situation with machine learning models. If I want to build a model that predicts the value of the homes in my neighborhood, I’d start by deciding which features of a home I think are relevant—the square footage, the number of bedrooms, and the number of bathrooms, for example. I would then find a list of recent home sales, build a set of data that included the features’ values, and note the final sale price of each home sold. I could then use this data to optimize or “train” a machine learning (ML) algorithm to predict a house’s value based on its features.

In the language of ML, the process of using a training set is called supervised learning, and the trained algorithm is called a model. The accuracy of supervised ML models is highly dependent upon their training set. The data must represent the entire problem space being modeled, and the data must be both accurate and unbiased. Considering our earlier example, once satisfied with the accuracy of my model, I can now cheerfully drive around my neighborhood estimating the value of other people’s homes by inputting their homes’ features.
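
To make this concrete, here is a minimal sketch of that training-and-prediction loop. The library (scikit-learn) and the sale records are assumptions made purely for illustration; the article describes the idea, not this particular code.

```python
# A minimal sketch of the supervised-learning workflow described above.
# scikit-learn and the sale records are invented for illustration.
from sklearn.linear_model import LinearRegression

# Each training example: [square footage, bedrooms, bathrooms] and its sale price.
features = [
    [1400, 3, 2],
    [1900, 4, 2],
    [1100, 2, 1],
    [2400, 4, 3],
    [1650, 3, 2],
]
sale_prices = [310_000, 420_000, 255_000, 530_000, 350_000]

# "Training" (supervised learning) fits the model's parameters to the labeled examples.
model = LinearRegression().fit(features, sale_prices)

# The trained model can now estimate a neighbor's home value from its features.
print(model.predict([[1700, 3, 2]]))
```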

THE NEED FOR INSIGHT

When presented with a prediction or decision, there are numerous reasons why we want to understand how that outcome was derived. One of the most important is to have confidence in the outcome. Understanding how predictions and decisions are made can also provide actionable insights into the situation being modeled. For example, if I know that my realtor attributes more value to a new kitchen than a new driveway, I can target any future investment in my home to maximize its value. 

INTERPRETABILITY AND EXPLAINABILITY

The situation is similar for ML models. I want to have insight into how they’ve arrived at their predictions and classifications. Not only will this build my confidence in a model’s outcomes, but it can help me improve the performance of the model, such as by identifying gaps in the input feature set or gaps and biases in the training data.

Additionally, if I am using ML to predict future problems, then having insight into how the prediction was arrived at can help me identify the likely root cause of the problem. I can then use this insight to address the root cause before the predicted problem actually occurs.

APPROACHES TO EXPLAINABLE AI

In the world of AI, an ML model is said to be interpretable or explainable if a human can understand how the model arrives at its outcomes. This could relate to the model’s overall behavior, referred to as global interpretability, or a specific outcome, referred to as local interpretability or explainability.

There are many different types of supervised ML algorithms. Some are inherently interpretable, such as decision trees and linear regression models, whose learned rules and coefficients can be read directly, especially with the help of simple analytical techniques.
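
For instance, a linear model's coefficients can be read as "value added per unit of feature," and a decision tree's rules can simply be printed. The snippet below, continuing the earlier home-price sketch (scikit-learn, invented data), is only an illustration of that readability.

```python
# Continuing the sketch above: the linear model's coefficients are directly readable,
# which is what makes it "inherently interpretable".
feature_names = ["square_feet", "bedrooms", "bathrooms"]
for name, coefficient in zip(feature_names, model.coef_):
    print(f"{name}: {coefficient:,.0f} added per unit")
print(f"baseline (intercept): {model.intercept_:,.0f}")

# A decision tree is similarly transparent: its learned rules can be printed as text.
from sklearn.tree import DecisionTreeRegressor, export_text

tree = DecisionTreeRegressor(max_depth=2).fit(features, sale_prices)
print(export_text(tree, feature_names=feature_names))
```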

However, it is impossible for a human to directly interpret highly complex “black box” ML models, such as deep learning neural networks, which can contain millions of operations and weights. One approach to this problem is to use a simple interpretable model to approximate the behavior of a more complex and accurate model. That way, we can have at least some insight into how the complex model works.
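
Here is a rough sketch of that surrogate idea, with a random forest standing in for the black box and invented data; the specific models and numbers are assumptions, not something the article prescribes.

```python
# A sketch of the surrogate-model approach: train a small, readable tree to imitate
# the *predictions* of a complex model. The random forest merely stands in for a
# "black box"; the data is invented.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
X = rng.uniform([800, 1, 1], [3000, 5, 4], size=(500, 3))   # sqft, beds, baths
y = 150 * X[:, 0] + 20_000 * X[:, 1] + 15_000 * X[:, 2] + rng.normal(0, 10_000, 500)

black_box = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is trained on the black box's outputs rather than the true prices,
# so its simple rules approximate how the complex model behaves.
surrogate = DecisionTreeRegressor(max_depth=3).fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=["square_feet", "bedrooms", "bathrooms"]))
```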

In the world of image recognition, features actually emerge in the neural network through the training process. For example, certain parts of a neural network can be shown to activate in response to specific components of an input image—edges, different types of texture, or faces, for example.
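
One way practitioners peek at these emergent features is to capture a layer's activations with a forward hook. The sketch below assumes PyTorch and torchvision; the untrained ResNet and the random tensor standing in for an image are placeholders, not the setup used in any particular study.

```python
# A sketch of inspecting emergent features: register a forward hook on an early
# convolutional layer and capture its feature maps.
import torch
from torchvision.models import resnet18

net = resnet18(weights=None).eval()   # untrained stand-in for an image model
captured = {}

def save_activation(module, inputs, output):
    captured["conv1"] = output.detach()

net.conv1.register_forward_hook(save_activation)

image = torch.randn(1, 3, 224, 224)   # placeholder for a real input image
with torch.no_grad():
    net(image)

# Each of the 64 feature maps tends to respond to a different low-level pattern
# (edges, textures); visualizing them is one window into the network.
print(captured["conv1"].shape)        # torch.Size([1, 64, 112, 112])
```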

It is possibly only a matter of time before large language models like ChatGPT and Bard provide explanations that justify their output. Whether we should trust these explanations any more than the output itself is an interesting question.

THE FLIP SIDE OF EXPLAINABILITY

AI explainability can also be used by an adversary. For example, if I understand how a credit scoring model works, I may be able to game the system by manipulating the answers I submit.

In the world of image recognition, researchers demonstrated how they could force a particular model to always classify images as a toaster by adding a specially crafted item to the input image. A worrying use of this approach would be to use explainability analysis of a security image recognition system to find ways to circumvent it.

CONCLUSION

Although we hear about developments in the application of AI almost daily, we hear little about the need for explainability. However, as the adoption of AI accelerates, the need for explainable AI is becoming increasingly important—and not only for the reasons already discussed but also in the context of risk and compliance.

Fortunately, the data science community has been researching approaches to explainability for many years. For example, the Shapley value was first proposed in 1953, and the last decade has seen considerable progress in explaining deep learning models. Nevertheless, most current approaches to AI interpretability and explainability are far too complex for anyone but a data scientist to use and interpret.
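
As an illustration of how Shapley values are applied in practice, the sketch below uses the shap package on the black-box model and data from the earlier surrogate example; the package choice and the data are assumptions for the sketch, not something the article specifies.

```python
# A sketch of Shapley-value explanation with the shap package, applied to the
# invented black-box model and data from the surrogate example above.
import numpy as np
import shap

explainer = shap.TreeExplainer(black_box)
shap_values = explainer.shap_values(X[:10])

# Local explanation: how much each feature contributed to one specific estimate.
print(shap_values[0])

# Global view: average absolute contribution of each feature across predictions.
print(np.abs(shap_values).mean(axis=0))
```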

What we need are AI algorithms that can reliably explain themselves to their users. And yet, if we ignore the need for explainability in AI and ML, it will be at our peril as we throw ourselves to the mercy of algorithms we simply don’t understand.

Paul Barrett is CTO for NETSCOUT, overseeing development of network assurance and cybersecurity tech for the world’s largest organizations.
