Advanced AI News
Industry Applications

Inside SAS’s Push to Make AI Agents Accountable

By Advanced AI Editor | May 13, 2025 | 7 min read


(Source: sdecoret/Shutterstock)

At SAS Innovate 2025 in Orlando, SAS unveiled its roadmap for agentic AI, making the case for its role as a company that has been quietly working on intelligent decision automation long before AI agents became a trending topic. The latest enhancements to its SAS Viya platform aim to help enterprises design, deploy, and govern AI agents that combine automation with ethical oversight.

While many tech vendors race to show off how many AI agents they can spin up at once, SAS CTO Bryan Harris dismisses such tallies as a vanity metric. What matters, he said, is not the quantity of agents but the quality of their output.

“The metric that matters,” Harris told AIwire, “is what kind of decisions you’re running in the enterprise, and what’s the value of those decisions to the business?”

How SAS Defines Agentic AI

Agentic AI, as SAS defines it, is not simply about automating tasks but about building systems that make decisions through a blend of reasoning, analytics, and embedded governance. The SAS Viya platform supports this vision by integrating deterministic models, machine learning algorithms, and large language models into a unified orchestration layer. The goal is to enable enterprises to deploy intelligent agents that can act autonomously when appropriate but also provide transparency and human oversight when the stakes are high.

SAS Innovate 2025. (Source: The Author)

Udo Sglavo, VP of applied AI and modeling R&D, described SAS’s agentic push as a natural evolution from the company’s consulting-driven past. “We’ve been doing this kind of modeling exercise for a long time, but typically it was a one-to-one relationship. You came to me with a problem, I’d send in consultants, they’d solve it, off we go,” Sglavo told AIwire. “Now the idea is, if you’ve done this ten, a hundred times for the same kind of challenge, why not take all this IP and put it into a software product?”

This shift from services to scalable solutions, according to Sglavo, has been accelerated by growing comfort with LLMs. “There’s been a mindset change. Customers are now more willing to adopt models they didn’t build themselves,” he said. That shift has cleared the way for wider adoption of prepackaged models and agent-based systems.

The Limits of Large Language Models

Both Harris and Sglavo emphasized that LLMs, despite their widespread appeal, are only one piece of a much larger enterprise AI picture. At SAS, LLMs are viewed as valuable but limited components that need to be paired with other forms of intelligence to drive reliable, repeatable decisions.

The SAS executives explained that unlike deterministic models, which return consistent outputs for the same inputs every time, LLMs can be unpredictable. “If I run a deterministic model with the same conditions a thousand times, I’ll get the same answer a thousand times,” Harris said. “That’s not the case for large language models.” This variability makes them ill-suited for high-stakes applications where auditability and control are critical. Instead, SAS uses LLMs where they excel: speeding up repetitive tasks and generating prototype solutions that humans or more deterministic systems can later refine.

One example of repetitive task speedup is in schema mapping, a task that often requires domain knowledge and painstaking manual review. With metadata as input, LLMs can rapidly suggest column matches and generate code, reducing a multi-week effort to minutes. However, because accuracy can vary, SAS integrates confidence scoring and always includes a human-in-the-loop to validate results.
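The pattern Harris describes, namely LLM suggestions gated by confidence scoring with a human reviewer for anything uncertain, can be sketched roughly as follows. This is an illustrative sketch, not SAS code; `suggest_match` is a hypothetical stand-in for a real LLM call over column metadata.

```python
# Illustrative sketch of LLM-assisted schema mapping with confidence
# scoring and a human-in-the-loop gate. `suggest_match` is a stand-in
# for an LLM call; it is NOT a SAS Viya API.

def suggest_match(source_column, target_columns):
    """Stand-in for an LLM prompt: return (best_match, confidence)."""
    # A real system would prompt an LLM with column names, types, and
    # sample values; here an exact case-insensitive match scores high.
    scores = {t: 1.0 if t.lower() == source_column.lower() else 0.4
              for t in target_columns}
    best = max(scores, key=scores.get)
    return best, scores[best]

def map_schema(source_cols, target_cols, threshold=0.8):
    """Apply high-confidence matches; route the rest to a human."""
    auto, needs_review = {}, {}
    for col in source_cols:
        match, conf = suggest_match(col, target_cols)
        # Low-confidence suggestions are never applied automatically.
        (auto if conf >= threshold else needs_review)[col] = (match, conf)
    return auto, needs_review

auto, review = map_schema(["CustID", "Email"], ["custid", "email_addr"])
```

The design point is the threshold: the LLM accelerates the mapping, but anything below the confidence bar lands in a review queue rather than in production.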

In more advanced use cases, SAS has also implemented techniques that allow LLMs to iterate on their own outputs by revisiting earlier steps, rethinking mappings, and challenging initial assumptions. This iterative self-checking behavior is a key design principle in SAS’s agentic AI framework, where agents do not just accept the first answer but reason through problems dynamically.
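That iterate-and-self-check behavior amounts to a propose/critique loop: the agent only accepts a draft once a checker stops objecting. A minimal sketch, with `draft` and `critique` as hypothetical stand-ins for LLM calls (again, not SAS APIs):

```python
# Illustrative propose/critique loop. `draft` and `critique` stand in
# for LLM calls; nothing here is a SAS API.

def draft(mapping):
    """Stand-in for an LLM revising its own earlier mapping."""
    revised = dict(mapping)
    if revised.get("zip") == "postcode_txt":
        revised["zip"] = "postal_code"  # "rethink" a flagged mapping
    return revised

def critique(mapping):
    """Stand-in for a checker: return the keys it objects to."""
    return [k for k, v in mapping.items() if v.endswith("_txt")]

def refine(mapping, max_iters=3):
    for _ in range(max_iters):
        if not critique(mapping):
            break  # accept only once the checks pass
        mapping = draft(mapping)
    return mapping

result = refine({"zip": "postcode_txt", "name": "full_name"})
```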

Giving Agents a Goal

The key distinction SAS draws between traditional automation and agentic AI lies in goal orientation. Rather than simply executing a set of predefined instructions, agents are designed to pursue a defined goal and adjust their behavior dynamically until that goal is met. This capability reflects a shift in how organizations are thinking about AI, driven in part by the disillusionment that followed early enthusiasm around LLMs.

Udo Sglavo, SAS VP of Applied AI and Modeling R&D

Sglavo explained in an interview how many business leaders initially hoped that generative models would offer a kind of universal intelligence where you could drop in a business problem and get out a solution. Instead, LLMs proved best suited for narrow tasks like text analysis. The emergence of agentic AI, he said, represents an effort to combine the statistical, machine learning, and optimization techniques developed over decades with the newer capabilities of LLMs and retrieval-augmented knowledge systems.

In this framework, agents become orchestrators of those tools. Rather than being explicitly programmed for each step, they are handed an objective, such as increasing event registration numbers, and are then tasked with deciding how to achieve it. For example, an agent could generate emails, identify potential recipients using a statistical model, and continue refining its campaign until a defined target is reached.
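The registration example boils down to a goal-directed loop: the agent is handed a target, not a script, and keeps adjusting (here, widening its audience) until the target is met. A hedged sketch, with every function an illustrative stand-in:

```python
# Illustrative goal-directed agent loop for the event-registration
# example. All functions are stand-ins, not SAS Viya APIs, and every
# contacted candidate is assumed to convert, for simplicity.

def run_agent(goal, pool, max_rounds=5):
    """Pursue `goal` sign-ups, adapting the targeting each round."""
    contacted, signups = set(), 0
    threshold = 0.8  # initial propensity cutoff from a scoring model
    for rounds in range(1, max_rounds + 1):
        targets = [c for c in pool
                   if c["propensity"] >= threshold
                   and c["name"] not in contacted]
        for c in targets:  # stand-in for generating and sending emails
            contacted.add(c["name"])
            signups += 1
        if signups >= goal:
            break
        threshold -= 0.2  # goal not met: widen the audience and retry
    return signups, rounds

pool = [{"name": "a", "propensity": 0.9},
        {"name": "b", "propensity": 0.7},
        {"name": "c", "propensity": 0.5}]
signups, rounds = run_agent(3, pool)
```

The contrast with traditional automation is the `while goal not met` shape: the steps are chosen by the agent round to round, not fixed in advance.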

This kind of agent, Sglavo noted, is well-suited for low-risk scenarios like marketing campaigns. But when the stakes are higher, such as decisions about credit approvals or healthcare outcomes, the approach must shift. Human-in-the-loop oversight becomes essential, and clear governance frameworks must define where autonomy ends and accountability begins.

Governance and Trust at the Core

The SAS executives stressed that agentic AI cannot be responsibly deployed without built-in governance. SAS Viya includes mechanisms to detect bias, evaluate fairness, and provide full transparency into how decisions are made. “We give our customers insight into when a model is deficient,” said Harris, “and then they can make the choice to improve the data or improve the model.”

(Source: Suri_Studio/Shutterstock)

Governance also includes controls over how much autonomy agents are granted. This is especially critical in high-risk domains like finance, healthcare, and public services. SAS includes guardrails that ensure transparency and let customers fine-tune how much autonomy agents are allowed.

SAS also emphasizes the importance of localized knowledge sources. Rather than relying on internet-sourced information, agents can be configured to draw only from enterprise-specific data repositories. Retrieval-augmented generation (RAG) setups enable agents to access internal knowledge bases to make contextual decisions without compromising security or accuracy.
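The key property of such a RAG setup is that the agent refuses rather than guesses when nothing in the approved repository supports an answer. A minimal sketch, using naive keyword retrieval as a stand-in for vector search (document IDs and function names are hypothetical, not SAS APIs):

```python
# Illustrative retrieval restricted to an internal knowledge base:
# answers are grounded only in approved documents, never the open
# internet. Keyword overlap stands in for a real vector search.

INTERNAL_DOCS = {
    "policy-101": "Refunds require manager approval above $500.",
    "policy-102": "Customer data must stay in-region.",
}

def retrieve(query, docs=INTERNAL_DOCS):
    """Return (doc_id, text) of the best match, or None if nothing hits."""
    words = set(query.lower().split())
    scored = [(doc_id, text, sum(w in text.lower() for w in words))
              for doc_id, text in docs.items()]
    doc_id, text, score = max(scored, key=lambda t: t[2])
    return (doc_id, text) if score > 0 else None

def grounded_answer(query):
    hit = retrieve(query)
    if hit is None:
        # No internal grounding: refuse rather than hallucinate.
        return "No supporting internal document found."
    doc_id, text = hit
    # A real system would pass `text` to an LLM as context; here we
    # simply cite the retrieved source.
    return f"[{doc_id}] {text}"
```

Restricting the corpus this way is what lets the agent make contextual decisions without leaking queries to, or ingesting claims from, the public internet.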

A Marketplace of Agents Is Coming

Looking ahead, Sglavo expects agentic AI to evolve into an open marketplace, where enterprises can mix and match specialized agents from different vendors. In that future, decision-making will be distributed across interconnected agent networks that communicate and collaborate using shared protocols like MCP or Google’s open source A2A. This vision also redefines how enterprises think about deployment. Rather than shipping massive monolithic AI systems, companies will deploy nimble agents, each with a narrow focus but deep specialization.

“This will become the marketplace of agents,” Sglavo said. “Because while we may say we have the best supply chain optimization agent, another vendor may claim the same thing. And then it becomes a question of trust, pricing, track record. Have they done this before? Are they just a startup that’s good at tech but hasn’t worked with actual customers?”

Sglavo added that enterprises will want the flexibility to select and combine agents based on their needs. “You’ll say, I want to use this agent, this one, and this one—and just bring them all together.”

A Future Built on Accountable AI

Bryan Harris, CTO at SAS

As generative AI continues to capture headlines, SAS is placing its bet on decision-first AI. For companies in regulated sectors where the cost of a bad decision can be measured in lives or billions, the company argues, transparency and trust must come before experimentation or scale.

As the enterprise AI conversation shifts from experimental prototypes to more practical, accountable systems, SAS is staking out a space where trust, interoperability, and decision quality come first.

“You can’t prevent irresponsibility,” said Harris. “But we can give you the tools that allow you to make the right decision.”
