Advanced AI News
Stanford HAI says generative AI model transparency is improving, but there’s a long way to go

By Advanced AI Editor, July 25, 2025


Researchers from Stanford University today published an update to their Foundation Model Transparency Index, which looks at the transparency of popular generative artificial intelligence models such as OpenAI’s GPT family, Google LLC’s Gemini models and Meta Platforms Inc.’s Llama series.

The FMTI, which was first published in October, is designed to assess the transparency of some of the most widely used foundation models, or large language models (LLMs). The aim is to increase accountability, address the societal impact of generative AI, and encourage developers to be more transparent about how their models are trained and how they operate.

Created by Stanford’s Human-Centered Artificial Intelligence research group, Stanford HAI, the FMTI incorporates a wide range of metrics that consider how much developers disclose about their models, plus information on how people are using their systems.

The initial findings were somewhat negative, illustrating how most foundational LLMs are shrouded in secrecy, including the open-source ones. That said, open-source models such as Meta’s Llama 2 and BigScience’s BloomZ were notably more transparent than their closed-source peers, such as OpenAI’s GPT-4 Turbo.

Improved transparency of LLMs

For the latest FMTI, Stanford HAI’s team of researchers evaluated 14 major foundation model developers, including OpenAI, Google, Meta, Anthropic PBC, AI21 Labs Inc., IBM Corp., Mistral AI and Stability AI Ltd., using 233 transparency indicators. The findings were somewhat better this time around, with the researchers saying they were pleased to see a significant improvement in the level of transparency around AI models since last year.

The FMTI gives each LLM a rating of between 0 and 100, with more points equating to more transparency. Overall, the LLMs’ transparency scores were much improved from six months ago, with an average gain of 21 points across the 14 models evaluated, HAI said. In addition, the researchers noted some considerable improvements from specific companies, with AI21 Labs increasing its transparency score by 50 points, followed by Hugging Face Inc. and Amazon.com, whose scores rose by 32 points and 29 points, respectively.
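The scoring scheme described above, a fixed battery of transparency indicators mapped onto a 0–100 scale, can be sketched roughly as follows. This is a minimal illustration only, assuming each indicator is a simple pass/fail check with equal weight; the indicator names here are invented, and the real FMTI's indicators, weighting, and aggregation may differ.

```python
# Rough sketch of an indicator-based transparency score on a 0-100 scale.
# Assumes binary, equally weighted indicators -- an illustration, not the
# FMTI's actual methodology.

def transparency_score(disclosed: set, indicators: list) -> float:
    """Return the percentage of indicators the developer satisfies (0-100)."""
    met = sum(1 for ind in indicators if ind in disclosed)
    return round(100 * met / len(indicators), 1)

# Hypothetical indicators; the real index uses 233 spanning areas such as
# training data, compute and downstream impact.
indicators = ["training-data-sources", "compute-disclosed", "model-size",
              "usage-policy", "downstream-impact"]

score_v1 = transparency_score({"model-size", "usage-policy"}, indicators)
score_v2 = transparency_score({"model-size", "usage-policy",
                               "compute-disclosed",
                               "training-data-sources"}, indicators)
print(score_v1, score_v2, score_v2 - score_v1)  # prints 40.0 80.0 40.0
```

Under this kind of scheme, a score gain between index editions simply reflects a developer disclosing more indicators than it did in the previous round.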

A long way to go

On the downside, the authors said the gap between open- and closed-source models remains more or less the same: the median open model scored 59 points, while the median closed-source model scored 53.5. According to the researchers, these findings suggest that the open-source development process is inherently more transparent than the closed-source one, though openness does not necessarily imply more transparency around training data, compute or usage.

The researchers were also somewhat disappointed to see that little progress had been made on transparency around the data that fuels LLMs, or around their real-world impact. Specifically, AI developers continue to keep their cards close to their chest with regard to the copyrighted data they use to train their models, who has access to that data, and how effective their AI guardrails are. Moreover, few developers were willing to share what they know about the downstream impact of their LLMs, such as how people are using them and where those users are located.

Moving forward

That said, Stanford HAI’s team is encouraged, not only by the progress made so far, but also by the willingness of AI developers to engage with its researchers. It reported that a number of LLM developers have even been influenced by the FMTI, prompting them to reflect on their internal practices.

Given that the average transparency score of 58 is still somewhat low, Stanford HAI concluded that there’s still plenty of room for improvement, and it urged AI developers to keep improving, saying transparency around AI is essential not only for public accountability and effective governance, but also for scientific innovation.

Looking forward, the group said it is asking AI developers to publish their own transparency reports for each major foundation model they release, in line with the voluntary codes of conduct recommended by the U.S. government and the G7.

It also had some advice for policymakers, saying its report can help to illustrate where policy intervention might help to increase AI transparency.

Featured image: SiliconANGLE/Microsoft Designer; others: Stanford HAI
