VentureBeat AI

Is your AI product actually working? How to develop the right metric system

By Advanced AI Bot | April 27, 2025 | 6 min read

In my first stint as a machine learning (ML) product manager, a simple question inspired passionate debates across functions and leaders: How do we know if this product is actually working? The product I managed served both internal and external customers. The model enabled internal teams to identify the top issues faced by our customers so that they could prioritize the right set of experiences to fix those issues. With such a complex web of interdependencies among internal and external customers, choosing the right metrics to capture the product's impact was critical to steering it toward success.

Not tracking whether your product is working well is like landing a plane without any instructions from air traffic control. You cannot make informed decisions for your customer without knowing what is going right or wrong. And if you do not actively define the metrics, your team will come up with their own backup metrics. The risk of having multiple flavors of an 'accuracy' or 'quality' metric is that everyone develops their own version, and you may not all be working toward the same outcome.

For example, when I reviewed my annual goal and the underlying metric with our engineering team, the immediate feedback was: “But this is a business metric, we already track precision and recall.” 

First, identify what you want to know about your AI product

Once you get down to the task of defining metrics for your product, where do you begin? In my experience, the complexity of operating an ML product with multiple customers carries over into defining metrics for the model, too. What do I use to measure whether the model is working well? Measuring whether internal teams acted on our model's output to prioritize launches would not give us feedback quickly enough; measuring whether customers adopted the solutions our model recommended risked drawing conclusions from a very broad adoption metric (what if the customer didn't adopt the solution because they just wanted to reach a support agent?).

Fast-forward to the era of large language models (LLMs), where an ML model no longer produces a single kind of output: it can return text answers, images and music, too. The number of product dimensions that require metrics increases rapidly: formats, customers, type … the list goes on.

Across all my products, when I try to come up with metrics, my first step is to distill what I want to know about the product's impact on customers into a few key questions. Identifying the right set of questions makes it easier to identify the right set of metrics. Here are a few examples:

Did the customer get an output? → metric for coverage

How long did it take for the product to provide an output? → metric for latency

Did the user like the output? → metrics for customer feedback, customer adoption and retention
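
To make this concrete, here is a minimal sketch of how these three metrics could be computed, assuming a hypothetical session log; the schema and field names (output_shown, latency_ms, feedback) are illustrative, not from any particular product.

```python
# Minimal sketch: coverage, latency and feedback metrics from a hypothetical
# session log. The Session schema below is an assumption for illustration.
from dataclasses import dataclass
from statistics import quantiles
from typing import Optional

@dataclass
class Session:
    session_id: str
    output_shown: bool           # did the product return anything?
    latency_ms: Optional[float]  # time to first output, if one was shown
    feedback: Optional[str]      # "up", "down", or None if no explicit signal

def coverage(sessions: list[Session]) -> float:
    """Did the customer get an output? -> % of sessions with an output shown."""
    return sum(s.output_shown for s in sessions) / len(sessions) if sessions else 0.0

def latency_p95_ms(sessions: list[Session]) -> float:
    """How long did it take? -> 95th-percentile time to show an output."""
    observed = [s.latency_ms for s in sessions if s.latency_ms is not None]
    return quantiles(observed, n=100)[94]  # needs at least two observations

def positive_feedback_rate(sessions: list[Session]) -> float:
    """Did the user like the output? -> share of explicit feedback that is positive."""
    rated = [s for s in sessions if s.feedback in ("up", "down")]
    return sum(s.feedback == "up" for s in rated) / len(rated) if rated else 0.0
```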

Once you identify your key questions, the next step is to identify a set of sub-questions for 'input' and 'output' signals. Output metrics are lagging indicators: they measure an event that has already happened. Input metrics are leading indicators that can be used to identify trends or predict outcomes. Below are ways to add the right sub-questions for lagging and leading indicators to the questions above; not every question needs a leading or lagging indicator.

Did the customer get an output? → coverage

How long did it take for the product to provide an output? → latency

Did the user like the output? → customer feedback, customer adoption and retention
  Did the user indicate that the output is right/wrong? (output)
  Was the output good/fair? (input)

The third and final step is to identify the method for gathering each metric. Most metrics are gathered at scale through new instrumentation built via data engineering. In some instances, however (like question 3 above), and especially for ML-based products, you have the option of manual or automated evaluations that assess the model outputs. While it is always best to develop automated evaluations, starting with manual evaluations for "was the output good/fair" and creating a rubric that defines good, fair and not good will help you lay the groundwork for a rigorous, tested automated evaluation process, too.
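
As a rough illustration of that last step, the sketch below aggregates rubric labels ("good", "fair", "not good") into the "was the output good/fair?" metric. The label set and the grader hook are assumptions; the point is that the same aggregation works whether the grader is a human annotator following the rubric or, later, an automated evaluator that emits the same labels.

```python
# Sketch: turning rubric labels into the "good/fair" input metric.
# The rubric values and the grader callable are illustrative assumptions.
from collections import Counter
from typing import Callable, Iterable

RUBRIC_LABELS = ("good", "fair", "not good")

def good_fair_rate(labels: Iterable[str]) -> float:
    """% of evaluated outputs rated 'good' or 'fair' against the rubric."""
    counts = Counter(labels)
    total = sum(counts.values())
    return (counts["good"] + counts["fair"]) / total if total else 0.0

def evaluate_outputs(outputs: list[str], grader: Callable[[str], str]) -> float:
    """Grade a sample of model outputs and return the good/fair rate.

    `grader` starts as a human annotator applying the rubric; once that
    process is trusted, it can be swapped for an automated evaluator that
    returns the same labels.
    """
    labels = [grader(output) for output in outputs]
    assert all(label in RUBRIC_LABELS for label in labels), "label outside rubric"
    return good_fair_rate(labels)
```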

Example use cases: AI search, listing descriptions

The above framework can be applied to any ML-based product to identify the list of primary metrics for your product. Let’s take search as an example.

Did the customer get an output? → Coverage
  Metric: % of search sessions with search results shown to the customer (Output)

How long did it take for the product to provide an output? → Latency
  Metric: Time taken to display search results to the user (Output)

Did the user like the output? → Customer feedback, customer adoption and retention
  Did the user indicate that the output is right/wrong? → % of search sessions with 'thumbs up' feedback on search results from the customer, or % of search sessions with clicks from the customer (Output)
  Was the output good/fair? → % of search results marked as 'good/fair' for each search term, per the quality rubric (Input)
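
A minimal sketch of the two feedback metrics above, assuming a hypothetical per-session record of thumbs-up feedback and result clicks (field names are illustrative):

```python
# Sketch: search feedback metrics from hypothetical per-session records.
from dataclasses import dataclass

@dataclass
class SearchSession:
    results_shown: bool   # were any results displayed?
    thumbs_up: bool       # explicit "thumbs up" on the results
    clicked_result: bool  # implicit signal: the customer clicked a result

def thumbs_up_rate(sessions: list[SearchSession]) -> float:
    """% of search sessions with 'thumbs up' feedback on the results (output metric)."""
    shown = [s for s in sessions if s.results_shown]
    return sum(s.thumbs_up for s in shown) / len(shown) if shown else 0.0

def click_rate(sessions: list[SearchSession]) -> float:
    """% of search sessions with clicks on the results (output metric)."""
    shown = [s for s in sessions if s.results_shown]
    return sum(s.clicked_result for s in shown) / len(shown) if shown else 0.0
```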

How about a product that generates descriptions for a listing (whether a menu item on DoorDash or a product listing on Amazon)?

Did the customer get an output? → Coverage
  Metric: % of listings with a generated description (Output)

How long did it take for the product to provide an output? → Latency
  Metric: Time taken to generate a description for the user (Output)

Did the user like the output? → Customer feedback, customer adoption and retention
  Did the user indicate that the output is right/wrong? → % of generated descriptions that required edits from the technical content team, seller or customer (Output)
  Was the output good/fair? → % of listing descriptions marked as 'good/fair', per the quality rubric (Input)
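
A similar sketch for the listing-description case, assuming a hypothetical record of the generated text and the text that was eventually published; treating any difference between the two as an edit is a simplification made for illustration.

```python
# Sketch: listing-description metrics from hypothetical listing records.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Listing:
    generated_description: Optional[str]  # None if nothing was generated
    published_description: Optional[str]  # what the seller or team shipped

def generation_coverage(listings: list[Listing]) -> float:
    """% of listings with a generated description (output metric)."""
    return (sum(x.generated_description is not None for x in listings) / len(listings)
            if listings else 0.0)

def edit_rate(listings: list[Listing]) -> float:
    """% of generated descriptions that required edits before publication (output metric)."""
    pairs = [x for x in listings
             if x.generated_description is not None and x.published_description is not None]
    edited = sum(x.generated_description.strip() != x.published_description.strip()
                 for x in pairs)
    return edited / len(pairs) if pairs else 0.0
```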

The approach outlined above is extensible to multiple ML-based products. I hope this framework helps you define the right set of metrics for your ML model.

Sharanya Rao is a group product manager at Intuit.
