Close Menu
  • Home
  • AI Models
    • DeepSeek
    • xAI
    • OpenAI
    • Meta AI Llama
    • Google DeepMind
    • Amazon AWS AI
    • Microsoft AI
    • Anthropic (Claude)
    • NVIDIA AI
    • IBM WatsonX Granite 3.1
    • Adobe Sensi
    • Hugging Face
    • Alibaba Cloud (Qwen)
    • Baidu (ERNIE)
    • C3 AI
    • DataRobot
    • Mistral AI
    • Moonshot AI (Kimi)
    • Google Gemma
    • xAI
    • Stability AI
    • H20.ai
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Microsoft Research
    • Meta AI Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding & Startups
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • Expert Insights & Videos
    • Google DeepMind
    • Lex Fridman
    • Matt Wolfe AI
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • Matt Wolfe AI
    • The TechLead
    • Andrew Ng
    • OpenAI
  • Expert Blogs
    • François Chollet
    • Gary Marcus
    • IBM
    • Jack Clark
    • Jeremy Howard
    • Melanie Mitchell
    • Andrew Ng
    • Andrej Karpathy
    • Sebastian Ruder
    • Rachel Thomas
    • IBM
  • AI Policy & Ethics
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
    • EFF AI
    • European Commission AI
    • Partnership on AI
    • Stanford HAI Policy
    • Mozilla Foundation AI
    • Future of Life Institute
    • Center for AI Safety
    • World Economic Forum AI
  • AI Tools & Product Releases
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
    • Image Generation
    • Video Generation
    • Writing Tools
    • AI for Recruitment
    • Voice/Audio Generation
  • Industry Applications
    • Finance AI
    • Healthcare AI
    • Legal AI
    • Manufacturing AI
    • Media & Entertainment
    • Transportation AI
    • Education AI
    • Retail AI
    • Agriculture AI
    • Energy AI
  • AI Art & Entertainment
    • AI Art News Blog
    • Artvy Blog » AI Art Blog
    • Weird Wonderful AI Art Blog
    • The Chainsaw » AI Art
    • Artvy Blog » AI Art Blog
What's Hot

Kai-Fu Lee: AI Superpowers – China and Silicon Valley | Lex Fridman Podcast #27

1984, but with LLM’s – by Gary Marcus

EU Commission: “AI Gigafactories” to strengthen Europe as a business location

Facebook X (Twitter) Instagram
Advanced AI News
  • Home
  • AI Models
    • Adobe Sensi
    • Aleph Alpha
    • Alibaba Cloud (Qwen)
    • Amazon AWS AI
    • Anthropic (Claude)
    • Apple Core ML
    • Baidu (ERNIE)
    • ByteDance Doubao
    • C3 AI
    • Cohere
    • DataRobot
    • DeepSeek
  • AI Research & Breakthroughs
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Meta AI Research
    • Microsoft Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding & Startups
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • Expert Insights & Videos
    • Google DeepMind
    • Lex Fridman
    • Meta AI Llama
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • Matt Wolfe AI
    • The TechLead
    • Andrew Ng
    • OpenAI
  • Expert Blogs
    • François Chollet
    • Gary Marcus
    • IBM
    • Jack Clark
    • Jeremy Howard
    • Melanie Mitchell
    • Andrew Ng
    • Andrej Karpathy
    • Sebastian Ruder
    • Rachel Thomas
    • IBM
  • AI Policy & Ethics
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
    • EFF AI
    • European Commission AI
    • Partnership on AI
    • Stanford HAI Policy
    • Mozilla Foundation AI
    • Future of Life Institute
    • Center for AI Safety
    • World Economic Forum AI
  • AI Tools & Product Releases
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
    • Image Generation
    • Video Generation
    • Writing Tools
    • AI for Recruitment
    • Voice/Audio Generation
  • Industry Applications
    • Education AI
    • Energy AI
    • Finance AI
    • Healthcare AI
    • Legal AI
    • Media & Entertainment
    • Transportation AI
    • Manufacturing AI
    • Retail AI
    • Agriculture AI
  • AI Art & Entertainment
    • AI Art News Blog
    • Artvy Blog » AI Art Blog
    • Weird Wonderful AI Art Blog
    • The Chainsaw » AI Art
    • Artvy Blog » AI Art Blog
Facebook X (Twitter) Instagram
Advanced AI News
Home » Our 2025 Responsible AI Transparency Report: How we build, support our customers, and grow
Customer Service AI

Our 2025 Responsible AI Transparency Report: How we build, support our customers, and grow

Advanced AI EditorBy Advanced AI EditorJune 20, 2025No Comments7 Mins Read
Share Facebook Twitter Pinterest Copy Link Telegram LinkedIn Tumblr Email
Share
Facebook Twitter LinkedIn Pinterest Email


In May 2024, we released our inaugural Responsible AI Transparency Report. We’re grateful for the feedback we received from our stakeholders around the world. Their insights have informed this second annual Responsible AI Transparency Report, which underscores our continued commitment to building AI technologies that people trust. Our report highlights new developments related to how we build and deploy AI systems responsibly, how we support our customers and the broader ecosystem, and how we learn and evolve. 

The past year has seen a wave of AI adoption by organizations of all sizes, prompting a renewed focus on effective AI governance in practice. Our customers and partners are eager to learn about how we have scaled our program at Microsoft and developed tools and practices that operationalize high-level norms. 

Like us, they have found that building trustworthy AI is good for business, and that good governance unlocks AI opportunities. According to IDC’s Microsoft Responsible AI Survey that gathered insights on organizational attitudes and the state of responsible AI, over 30% of the respondents note the lack of governance and risk management solutions as the top barrier to adopting and scaling AI. Conversely, more than 75% of the respondents who use responsible AI tools for risk management say that they have helped with data privacy, customer experience, confident business decisions, brand reputation, and trust.

We’ve also seen new regulatory efforts and laws emerge over the past year. Because we’ve invested in operationalizing responsible AI practices at Microsoft for close to a decade, we’re well prepared to comply with these regulations and to empower our customers to do the same. Our work here is not done, however. As we detail in the report, efficient and effective regulation and implementation practices that support the adoption of AI technology across borders are still being defined. We remain focused on contributing our practical insights to standard- and norm-setting efforts around the world. 

Across all these facets of governance, it’s important to remain nimble in our approach, applying learnings from our real-world deployments, updating our practices to reflect advances in the state-of-the-art, and ensuring that we are responsive to feedback from our stakeholders. Learnings from our principled and iterative approach are reflected in the pages of this report. As our governance practices continue to evolve, we’ll proactively share our fresh insights with our stakeholders, both in future annual transparency reports and other public settings.

Key takeaways from our 2025 Transparency Report 

In 2024, we made key investments in our responsible AI tools, policies, and practices to move at the speed of AI innovation.

We improved our responsible AI tooling to provide expanded risk measurement and mitigation coverage for modalities beyond text—like images, audio, and video—and additional support for agentic systems, semi-autonomous systems that we anticipate will represent a significant area of AI investment and innovation in 2025 and beyond. 
We took a proactive, layered approach to compliance with new regulatory requirements, including the European Union’s AI Act, and provided our customers with resources and materials that empower them to innovate in line with relevant regulations. Our early investments in building a comprehensive and industry-leading responsible AI program positioned us well to shift our AI regulatory readiness efforts into high gear in 2024. 
We continued to apply a consistent risk management approach across releases through our pre-deployment review and red teaming efforts. This included oversight and review of high-impact and higher-risk uses of AI and generative AI releases, including every flagship model added to the Azure OpenAI Service and every Phi model release. To further support responsible AI documentation as part of these reviews, we launched an internal workflow tool designed to centralize the various responsible AI requirements outlined in the Responsible AI Standard. 
We continued to provide hands-on counseling for high-impact and higher-risk uses of AI through our Sensitive Uses and Emerging Technologies team. Generative AI applications, especially in fields like healthcare and the sciences, were notable growth areas in 2024. By gleaning insights across cases and engaging researchers, the team provided early guidance for novel risks and emerging AI capabilities, enabling innovation and incubating new internal policies and guidelines. 
We continued to lean on insights from research to inform our understanding of sociotechnical issues related to the latest advancements in AI. We established the AI Frontiers Lab to invest in the core technologies that push the frontier of what AI systems can do in terms of capability, efficiency, and safety.  
We worked with stakeholders around the world to make progress towards building coherent governance approaches to help accelerate adoption and allow organizations of all kinds to innovate and use AI across borders. This included publishing a book exploring governance across various domains and helping advance cohesive standards for testing AI systems.

Looking ahead to the second half of 2025 and beyond 

As AI innovation and adoption continue to advance, our core objective remains the same: earning the trust that we see as foundational to fostering broad and beneficial AI adoption around the world. As we continue that journey over the next year, we will focus on three areas to progress our steadfast commitment to AI governance while ensuring that our efforts are responsive to an ever-evolving landscape: 

Developing more flexible and agile risk management tools and practices, while fostering skills development to anticipate and adapt to advances in AI. To ensure people and organizations around the world can leverage the transformative potential of AI, our ability to anticipate and manage the risks of AI must keep pace with AI innovation. This requires us to build tools and practices that can quickly adapt to advances in AI capabilities and the growing diversity of deployment scenarios that each have unique risk profiles. To do this, we will make greater investments in our systems of risk management to provide tools and practices for the most common risks across deployment scenarios, and also enable the sharing of test sets, mitigations, and other best practices across teams at Microsoft.
Supporting effective governance across the AI supply chain. Building, earning, and keeping trust in AI is a collaborative endeavor that requires model developers, app builders, and system users to each contribute to trustworthy design, development, and operations. AI regulations, including the EU AI Act, reflect this need for information to flow across supply chain actors. While we embrace this concept of shared responsibility at Microsoft, we also recognize that pinning down how responsibilities fit together is complex, especially in a fast-changing AI ecosystem. To help advance shared understanding of how this can work in practice, we’re deepening our work internally and externally to clarify roles and expectations.
Advancing a vibrant ecosystem through shared norms and effective tools, particularly for AI risk measurement and evaluation. The science of AI risk measurement and evaluation is a growing but still nascent field. We are committed to supporting the maturation of this field by continuing to make investments within Microsoft, including in research that pushes the frontiers of AI risk measurement and evaluation and the tooling to operationalize it at scale. We remain committed to sharing our latest advancements in tooling and best practices with the broader ecosystem to support the advancement of shared norms and standards for AI risk measurement and evaluation.

We look forward to hearing your feedback on the progress we have made and opportunities to collaborate on all that is still left to do. Together, we can advance AI governance efficiently and effectively, fostering trust in AI systems at a pace that matches the opportunities ahead. 
Explore the 2025 Responsible AI Transparency Report.  

Tags: AI, AI for Good Lab, artificial intelligence



Source link

Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleA timeline of the US semiconductor market in 2025
Next Article Perplexity AI rolls out video generation feature on X for ‘Ask Perplexity’ users
Advanced AI Editor
  • Website

Related Posts

Blue Collar Workers Add AI to Their Toolboxes

June 20, 2025

NiCE Launches CXone Mpower AI Agents

June 20, 2025

Silverback Chatbot Introduces Advanced AI Agent Feature to Enhance Customer Support and Workflow Automation

June 20, 2025
Leave A Reply Cancel Reply

Latest Posts

Real-Life Matchmaker Lauren Daddis Talks Accuracy Of ‘Materialists’

An Apartment By One Of Mexico’s Buzziest Designers Is Open To Book In San Miguel

Songtsam Resorts Launch Collaboration Inspired By Tibet’s Sacred Lake

Spanish Supreme Court Orders Heirs to Return Cathedral Statues

Latest Posts

Kai-Fu Lee: AI Superpowers – China and Silicon Valley | Lex Fridman Podcast #27

June 21, 2025

1984, but with LLM’s – by Gary Marcus

June 21, 2025

EU Commission: “AI Gigafactories” to strengthen Europe as a business location

June 21, 2025

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Welcome to Advanced AI News—your ultimate destination for the latest advancements, insights, and breakthroughs in artificial intelligence.

At Advanced AI News, we are passionate about keeping you informed on the cutting edge of AI technology, from groundbreaking research to emerging startups, expert insights, and real-world applications. Our mission is to deliver high-quality, up-to-date, and insightful content that empowers AI enthusiasts, professionals, and businesses to stay ahead in this fast-evolving field.

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

YouTube LinkedIn
  • Home
  • About Us
  • Advertise With Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
© 2025 advancedainews. Designed by advancedainews.

Type above and press Enter to search. Press Esc to cancel.