Taking a responsible path to AGI

By Advanced AI Editor | April 19, 2025 | 8 Mins Read


We’re exploring the frontiers of AGI, prioritizing readiness, proactive risk assessment, and collaboration with the wider AI community.

Artificial general intelligence (AGI), AI that’s at least as capable as humans at most cognitive tasks, could be here within the coming years.

Integrated with agentic capabilities, AGI could supercharge AI to understand, reason, plan, and execute actions autonomously. Such technological advancement will provide society with invaluable tools to address critical global challenges, including drug discovery, economic growth and climate change.

This means we can expect tangible benefits for billions of people. For instance, by enabling faster, more accurate medical diagnoses, it could revolutionize healthcare. By offering personalized learning experiences, it could make education more accessible and engaging. By enhancing information processing, AGI could help lower barriers to innovation and creativity. By democratising access to advanced tools and knowledge, it could enable a small organization to tackle complex challenges previously only addressable by large, well-funded institutions.

Navigating the path to AGI

We’re optimistic about AGI’s potential. It has the power to transform our world, acting as a catalyst for progress in many areas of life. But with any technology this powerful, it is essential that even a small possibility of harm is taken seriously and prevented.

Mitigating AGI safety challenges demands proactive planning, preparation and collaboration. Previously, we introduced our approach to AGI in the “Levels of AGI” framework paper, which provides a perspective on classifying the capabilities of advanced AI systems, understanding and comparing their performance, assessing potential risks, and gauging progress towards more general and capable AI.

Today, we’re sharing our views on AGI safety and security as we navigate the path toward this transformational technology. This new paper, An Approach to Technical AGI Safety & Security, is a starting point for vital conversations with the wider industry about how we monitor AGI progress and ensure it’s developed safely and responsibly.

In the paper, we detail how we’re taking a systematic and comprehensive approach to AGI safety, exploring four main risk areas: misuse, misalignment, accidents, and structural risks, with a deeper focus on misuse and misalignment.

Understanding and addressing the potential for misuse

Misuse occurs when a human deliberately uses an AI system for harmful purposes.

Improved insight into present-day harms and mitigations continues to enhance our understanding of longer-term severe harms and how to prevent them.

For instance, misuse of present-day generative AI includes producing harmful content or spreading inaccurate information. In the future, advanced AI systems may have the capacity to more significantly influence public beliefs and behaviors in ways that could lead to unintended societal consequences.

The potential severity of such harm necessitates proactive safety and security measures.

As we detail in the paper, a key element of our strategy is identifying and restricting access to dangerous capabilities that could be misused, including those enabling cyber attacks.

We’re exploring a number of mitigations to prevent the misuse of advanced AI. These include sophisticated security mechanisms that could prevent malicious actors from obtaining raw access to model weights, which would allow them to bypass our safety guardrails; mitigations that limit the potential for misuse when the model is deployed; and threat modelling research that helps identify capability thresholds where heightened security is necessary. Additionally, our recently launched cybersecurity evaluation framework takes this work a step further to help mitigate against AI-powered threats.

Even today, we regularly evaluate our most advanced models, such as Gemini, for potential dangerous capabilities. Our Frontier Safety Framework delves deeper into how we assess capabilities and employ mitigations, including for cybersecurity and biosecurity risks.

The challenge of misalignment

For AGI to truly complement human abilities, it has to be aligned with human values. Misalignment occurs when the AI system pursues a goal that is different from human intentions.

We have previously shown how misalignment can arise through specification gaming, where an AI finds a solution that achieves its goals but not in the way intended by the human instructing it, and through goal misgeneralization.

For example, an AI system asked to book tickets to a movie might decide to hack into the ticketing system to get already occupied seats – something that a person asking it to buy the seats may not consider.

We’re also conducting extensive research on the risk of deceptive alignment, i.e. the risk of an AI system becoming aware that its goals do not align with human instructions, and deliberately trying to bypass the safety measures put in place by humans to prevent it from taking misaligned action.

Countering misalignment

Our goal is to build advanced AI systems that are trained to pursue the right goals, so they follow human instructions accurately and avoid using potentially unethical shortcuts to achieve their objectives.

We do this through amplified oversight: being able to tell whether an AI’s answers are good or bad at achieving its objective. While this is relatively easy now, it can become challenging when the AI has advanced capabilities.

As an example, even Go experts didn’t realize how good Move 37 was when AlphaGo first played it; the move had only a 1 in 10,000 chance of being used.

To address this challenge, we enlist the AI systems themselves to help us provide feedback on their answers, such as in debate.
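
To make the idea concrete, here is a minimal, hypothetical sketch of a debate-style oversight loop: two model instances argue for competing answers and a judge model endorses one. The generate and judge functions are placeholder stubs for illustration, not an API from the paper.

```python
# Hypothetical sketch of a debate-style oversight loop (illustrative only).
# `generate` and `judge` are stand-ins for real model calls.

def generate(prompt: str) -> str:
    """Placeholder for a model call that returns an argument."""
    return f"argument: {prompt}"

def judge(question: str, argument_a: str, argument_b: str) -> str:
    """Placeholder for a judge model that picks the more convincing side."""
    # A real judge would be a trained (possibly weaker) model; here we
    # arbitrarily prefer the longer argument just to keep the sketch runnable.
    return "A" if len(argument_a) >= len(argument_b) else "B"

def debate(question: str, answer_a: str, answer_b: str) -> str:
    """Run one round of debate and return the answer the judge endorses."""
    arg_a = generate(f"Argue that '{answer_a}' best answers: {question}")
    arg_b = generate(f"Argue that '{answer_b}' best answers: {question}")
    return answer_a if judge(question, arg_a, arg_b) == "A" else answer_b

print(debate("Does this plan stay within the stated budget?", "yes", "no"))
```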

Once we can tell whether an answer is good, we can use this to build a safe and aligned AI system. A challenge here is to figure out what problems or instances to train the AI system on. Through work on robust training, uncertainty estimation and more, we can cover a range of situations that an AI system will encounter in real-world scenarios, creating AI that can be trusted.

Through effective monitoring and established computer security measures, we aim to mitigate harm that could occur if our AI systems were to pursue misaligned goals.

Monitoring involves using an AI system, called the monitor, to detect actions that don’t align with our goals. It is important that the monitor knows when it doesn’t know whether an action is safe. When it is unsure, it should either reject the action or flag the action for further review.
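
As a purely illustrative sketch (the classifier, labels, and confidence threshold below are hypothetical placeholders, not the system described in the paper), a monitor of this kind could gate an agent’s proposed actions roughly as follows:

```python
# Hypothetical sketch of gating an agent's actions with a monitor model.
# Uncertain cases are escalated for human review instead of being executed.

from dataclasses import dataclass

@dataclass
class Verdict:
    label: str          # "safe" or "unsafe"
    confidence: float   # 0.0 to 1.0

def monitor(action: str) -> Verdict:
    """Placeholder for a monitor model scoring a proposed action."""
    text = action.lower()
    if any(word in text for word in ("delete", "transfer", "exploit")):
        return Verdict("unsafe", 0.9)
    if "unknown" in text:
        return Verdict("safe", 0.5)   # low confidence: the monitor isn't sure
    return Verdict("safe", 0.95)

def gate(action: str, threshold: float = 0.8) -> str:
    """Execute, reject, or escalate an action based on the monitor's verdict."""
    verdict = monitor(action)
    if verdict.confidence < threshold:
        return "flagged for human review"
    return "rejected" if verdict.label == "unsafe" else "executed"

for proposed in ("summarise the quarterly report",
                 "install an unknown browser extension",
                 "transfer funds to an external account"):
    print(proposed, "->", gate(proposed))
```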

Enabling transparency

All this becomes easier if AI decision-making becomes more transparent. We conduct extensive research in interpretability with the aim of increasing this transparency.

To facilitate this further, we’re designing AI systems that are easier to understand.

For example, our research on Myopic Optimization with Nonmyopic Approval (MONA) aims to ensure that any long-term planning done by AI systems remains understandable to humans. This is particularly important as the technology improves. Our work on MONA is the first to demonstrate the safety benefits of short-term optimization in LLMs.
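
For intuition only, the core idea can be sketched in a few lines: the agent scores each candidate step by its immediate reward plus an overseer’s approval of that step, rather than by an estimated long-horizon return. The reward and approval functions below are toy placeholders, not MONA’s actual training setup.

```python
# Toy sketch of myopic optimization with nonmyopic approval (MONA-style).
# Each candidate step is scored only by its immediate reward plus an
# overseer's approval of the step, with no modelled long-horizon return.

def immediate_reward(step: str) -> float:
    """Placeholder: reward the environment assigns to this single step."""
    return {"hack ticketing system": 1.0, "buy available seat": 0.8}.get(step, 0.0)

def overseer_approval(step: str) -> float:
    """Placeholder: an overseer judging whether the step looks acceptable."""
    return 0.0 if "hack" in step else 1.0

def choose_step(candidates: list[str]) -> str:
    """Myopic objective: pick the step with the best reward plus approval."""
    return max(candidates, key=lambda s: immediate_reward(s) + overseer_approval(s))

# The shortcut wins on raw reward (1.0 vs 0.8) but loses the overseer's
# approval, so the myopic objective prefers the intended behaviour.
print(choose_step(["hack ticketing system", "buy available seat"]))
```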

Building an ecosystem for AGI readiness

Led by Shane Legg, Co-Founder and Chief AGI Scientist at Google DeepMind, our AGI Safety Council (ASC) analyzes AGI risk and best practices, making recommendations on safety measures. The ASC works closely with the Responsibility and Safety Council, our internal review group co-chaired by our COO Lila Ibrahim and Senior Director of Responsibility Helen King, to evaluate AGI research, projects and collaborations against our AI Principles, advising and partnering with research and product teams on our highest impact work.

Our work on AGI safety complements our depth and breadth of responsibility and safety practices and research addressing a wide range of issues, including harmful content, bias, and transparency. We also continue to leverage our learnings from safety in agentic systems, such as the principle of having a human in the loop to check in for consequential actions, to inform our approach to building AGI responsibly.

Externally, we’re working to foster collaboration with experts, industry, governments, nonprofits, and civil society organizations, and to take an informed approach to developing AGI.

For example, we’re partnering with nonprofit AI safety research organizations, including Apollo and Redwood Research, who have advised on a dedicated misalignment section in the latest version of our Frontier Safety Framework.

Through ongoing dialogue with policy stakeholders globally, we hope to contribute to international consensus on critical frontier safety and security issues, including how we can best anticipate and prepare for novel risks.

Our efforts include working with others in the industry – via organizations like the Frontier Model Forum – to share and develop best practices, as well as valuable collaborations with AI Institutes on safety testing. Ultimately, we believe a coordinated international approach to governance is critical to ensure society benefits from advanced AI systems.

Educating AI researchers and experts on AGI safety is fundamental to creating a strong foundation for its development. As such, we’ve launched a new course on AGI Safety for students, researchers and professionals interested in this topic.

Ultimately, our approach to AGI safety and security serves as a vital roadmap to address the many challenges that remain open. We look forward to collaborating with the wider AI research community to advance AGI responsibly and help us unlock the immense benefits of this technology for all.


