From hallucinations to hardware: Lessons from a real-world computer vision project gone sideways

By Advanced AI Editor | June 29, 2025

Computer vision projects rarely go exactly as planned, and this one was no exception. The idea was simple: Build a model that could look at a photo of a laptop and identify any physical damage — things like cracked screens, missing keys or broken hinges. It seemed like a straightforward use case for image models and large language models (LLMs), but it quickly turned into something more complicated.

Along the way, we ran into issues with hallucinations, unreliable outputs and images that were not even laptops. To solve these, we ended up applying an agentic framework in an atypical way — not for task automation, but to improve the model’s performance.

In this post, we will walk through what we tried, what didn’t work and how a combination of approaches eventually helped us build something reliable.

Where we started: Monolithic prompting

Our initial approach was fairly standard for a multimodal model. We used a single, large prompt to pass an image into an image-capable LLM and asked it to identify visible damage. This monolithic prompting strategy is simple to implement and works decently for clean, well-defined tasks. But real-world data rarely plays along.
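
To make the setup concrete, here is a minimal sketch of what monolithic prompting looks like in practice. The `ask_vision_model` function is a hypothetical stand-in for whatever image-capable LLM client you use, and the prompt is illustrative, not the one we ran in production:

```python
from PIL import Image

def ask_vision_model(image: Image.Image, prompt: str) -> str:
    """Hypothetical stand-in for an image-capable LLM call."""
    raise NotImplementedError("wire up your vision-LLM client here")

# One big prompt that asks for everything at once.
MONOLITHIC_PROMPT = (
    "You are inspecting a photo of a laptop. List every piece of visible "
    "physical damage (cracked screen, missing keys, broken hinges, dents, "
    "scratches). If there is no damage, say so. Answer as a JSON list."
)

def detect_damage(image: Image.Image) -> str:
    # One image, one prompt, one answer: the whole task in a single call.
    return ask_vision_model(image, MONOLITHIC_PROMPT)
```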

We ran into three major issues early on:

Hallucinations: The model would sometimes invent damage that did not exist or mislabel what it was seeing.

Junk image detection: The model had no reliable way to flag images that were not laptops at all. Pictures of desks, walls or people occasionally slipped through and received nonsensical damage reports.

Inconsistent accuracy: The combination of these problems made the model too unreliable for operational use.

This was the point when it became clear we would need to iterate.

First fix: Mixing image resolutions

One thing we noticed was how much image quality affected the model’s output. Users uploaded all kinds of images, ranging from sharp and high-resolution to blurry. This led us to research highlighting how image resolution impacts deep learning models.

We trained and tested the model using a mix of high- and low-resolution images. The idea was to make the model more resilient to the wide range of image qualities it would encounter in practice. This helped improve consistency, but the core issues of hallucination and junk image handling persisted.
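
One simple way to build such a mix, assuming you start from reasonably sharp originals, is to augment each training image with downscaled copies. A sketch, not our exact pipeline:

```python
from PIL import Image

def resolution_variants(img: Image.Image, scales=(1.0, 0.5, 0.25)):
    """Yield the original image plus downscaled copies so the training
    set contains both sharp and low-quality versions of each photo."""
    w, h = img.size
    for s in scales:
        yield img.resize((max(1, int(w * s)), max(1, int(h * s))),
                         Image.BILINEAR)
```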

The multimodal detour: Text-only LLM goes multimodal

Encouraged by recent experiments combining image captioning with text-only LLMs, such as the technique covered in The Batch (where captions are generated from images and then interpreted by a language model), we decided to give it a try.

Here’s how it works:

The LLM begins by generating multiple possible captions for an image. 

Another model, called a multimodal embedding model, checks how well each caption fits the image. In this case, we used SigLIP to score the similarity between the image and the text.

The system keeps the top few captions based on these scores.

The LLM uses those top captions to write new ones, trying to get closer to what the image actually shows.

It repeats this process until the captions stop improving, or it hits a set limit.
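
A condensed sketch of that loop is below. The SigLIP scoring uses a public Hugging Face checkpoint; `generate_captions` is a hypothetical placeholder for the LLM captioning call:

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

# A public SigLIP checkpoint; any SigLIP variant would work the same way.
processor = AutoProcessor.from_pretrained("google/siglip-base-patch16-224")
siglip = AutoModel.from_pretrained("google/siglip-base-patch16-224")

def score_captions(image: Image.Image, captions: list[str]) -> list[float]:
    """Return a SigLIP image-text similarity score for each caption."""
    inputs = processor(text=captions, images=image,
                       padding="max_length", return_tensors="pt")
    with torch.no_grad():
        logits = siglip(**inputs).logits_per_image  # shape (1, n_captions)
    return logits.squeeze(0).tolist()

def refine_captions(image, generate_captions, top_k=3, max_rounds=5):
    """Iterate captioning until scores stop improving or a limit is hit.

    generate_captions(seeds) is a hypothetical LLM call that returns
    fresh candidate captions, conditioned on the seed captions given.
    """
    captions = generate_captions([])    # first round: no seeds
    best_score = float("-inf")
    top = captions
    for _ in range(max_rounds):
        scores = score_captions(image, captions)
        ranked = sorted(zip(scores, captions), reverse=True)[:top_k]
        if ranked[0][0] <= best_score:  # no improvement: stop early
            break
        best_score = ranked[0][0]
        top = [caption for _, caption in ranked]
        captions = generate_captions(top)  # LLM writes new candidates
    return top
```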

While clever in theory, this approach introduced new problems for our use case:

Persistent hallucinations: The captions themselves sometimes included imaginary damage, which the LLM then confidently reported.

Incomplete coverage: Even with multiple captions, some issues were missed entirely.

Increased complexity, little benefit: The added steps made the system more complicated without reliably outperforming the previous setup.

It was an interesting experiment, but ultimately not a solution.

A creative use of agentic frameworks

This was the turning point. While agentic frameworks are usually used for orchestrating task flows (think agents coordinating calendar invites or customer service actions), we wondered if breaking down the image interpretation task into smaller, specialized agents might help.

We built an agentic framework structured like this:

Orchestrator agent: It checked the image and identified which laptop components were visible (screen, keyboard, chassis, ports).

Component agents: Dedicated agents inspected each component for specific damage types; for example, one for cracked screens, another for missing keys.

Junk detection agent: A separate agent flagged whether the image was even a laptop in the first place.

This modular, task-driven approach produced much more precise and explainable results. Hallucinations dropped dramatically, junk images were reliably flagged and each agent’s task was simple and focused enough to control quality well.
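
In spirit, the framework looked something like the sketch below, reusing the hypothetical `ask_vision_model` stand-in from the monolithic sketch earlier; the prompts are illustrative:

```python
from dataclasses import dataclass
from PIL import Image

# ask_vision_model(image, prompt) -> str is the hypothetical vision-LLM
# stand-in defined in the monolithic sketch above.

@dataclass
class ComponentAgent:
    component: str
    prompt: str  # narrow, damage-specific instruction

    def run(self, image: Image.Image) -> str:
        return ask_vision_model(image, self.prompt)

AGENTS = [
    ComponentAgent("screen", "Look only at the screen. Report any cracks."),
    ComponentAgent("keyboard", "Look only at the keyboard. Report missing keys."),
    ComponentAgent("chassis", "Look only at the chassis and hinges. Report breaks."),
]

def inspect_laptop(image: Image.Image) -> dict:
    # The junk-detection agent runs first and can short-circuit everything.
    answer = ask_vision_model(image, "Does this photo show a laptop? Answer yes or no.")
    if "yes" not in answer.lower():
        return {"junk": True}

    # The orchestrator agent decides which components are visible at all.
    visible = ask_vision_model(
        image, "Which of these are visible: screen, keyboard, chassis, ports?")

    # Each component agent inspects only its own part of the laptop.
    return {agent.component: agent.run(image)
            for agent in AGENTS if agent.component in visible.lower()}
```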

The blind spots: Trade-offs of an agentic approach

As effective as this was, it was not perfect. Two main limitations showed up:

Increased latency: Running multiple sequential agents added to the total inference time.

Coverage gaps: Agents could only detect issues they were explicitly programmed to look for. If an image showed something unexpected that no agent was tasked with identifying, it would go unnoticed.

We needed a way to balance precision with coverage.

The hybrid solution: Combining agentic and monolithic approaches

To bridge the gaps, we created a hybrid system:

The agentic framework ran first, handling precise detection of known damage types and junk images. We limited the number of agents to the most essential ones to improve latency.

Then, a monolithic image LLM prompt scanned the image for anything else the agents might have missed.

Finally, we fine-tuned the model using a curated set of images for high-priority use cases, like frequently reported damage scenarios, to further improve accuracy and reliability.

This combination gave us the precision and explainability of the agentic setup, the broad coverage of monolithic prompting and the confidence boost of targeted fine-tuning.
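
As a rough sketch, the hybrid flow reduces to a two-stage pipeline (the fine-tuning happens offline and is not shown; `inspect_laptop` and `ask_vision_model` are the hypothetical pieces sketched earlier):

```python
def assess_laptop_image(image) -> dict:
    # Stage 1: agentic pass for precise, known damage types plus junk filtering.
    report = inspect_laptop(image)
    if report.get("junk"):
        return report

    # Stage 2: one broad monolithic prompt to sweep up anything the
    # specialized agents were never asked to look for.
    report["other"] = ask_vision_model(
        image,
        "Describe any other visible physical damage on this laptop that is "
        "not on the screen, keyboard, or chassis.")
    return report
```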

What we learned

A few things became clear by the time we wrapped up this project:

Agentic frameworks are more versatile than they get credit for: While they are usually associated with workflow management, we found they could meaningfully boost model performance when applied in a structured, modular way.

Blending different approaches beats relying on just one: The combination of precise, agent-based detection alongside the broad coverage of LLMs, plus a bit of fine-tuning where it mattered most, gave us far more reliable outcomes than any single method on its own.

Visual models are prone to hallucinations: Even the more advanced setups can jump to conclusions or see things that are not there. It takes a thoughtful system design to keep those mistakes in check.

Image quality variety makes a difference: Training and testing with both clear, high-resolution images and everyday, lower-quality ones helped the model stay resilient when faced with unpredictable, real-world photos.

You need a way to catch junk images: A dedicated check for junk or unrelated pictures was one of the simplest changes we made, and it had an outsized impact on overall system reliability.

Final thoughts

What started as a simple idea, using an LLM prompt to detect physical damage in laptop images, quickly turned into a much deeper experiment in combining different AI techniques to tackle unpredictable, real-world problems. Along the way, we realized that some of the most useful tools were ones not originally designed for this type of work.

Agentic frameworks, often seen as workflow utilities, proved surprisingly effective when repurposed for tasks like structured damage detection and image filtering. With a bit of creativity, they helped us build a system that was not just more accurate, but easier to understand and manage in practice.

Shruti Tiwari is an AI product manager at Dell Technologies.

Vadiraj Kulkarni is a data scientist at Dell Technologies.
