Advanced AI News
Center for AI Safety

Representation Engineering: a New Way of Understanding Models

By Advanced AI Bot | April 2, 2025


Reading the minds of LLMs

Interpreting and controlling models has long been a significant challenge. Our research, ‘Representation Engineering: A Top-Down Approach to AI Transparency’, explores a new way of understanding traits such as honesty, power-seeking, and morality in LLMs. We show that these traits can be identified live at the point of output, and that they can also be controlled. This method differs from mechanistic approaches, which build bottom-up interpretations from node-to-node connections. Representation engineering instead looks at larger chunks of representations and higher-level mechanisms to understand models. Overall, we believe this ‘top-down’ method makes exciting progress towards model transparency, paving the way for further research into understanding and controlling AI.

Why understanding and controlling AI is important

Transparency and honesty are important features of models: as AI becomes more powerful, capable, and autonomous, it is increasingly important that models are honest and predictable. If we cannot understand and control models, AI would have the capacity to lie, seek power, and ultimately subvert human goals. Over time, this would present a substantial risk to society as AI advances. To fully reap the benefits of AI, we need to make sure these risks are balanced by a better understanding of LLMs.

The method of representation engineering: an overview

Representation Engineering is an innovative approach to enhancing our understanding and control of AI by observing and manipulating the internal representations – weights or activations – that AI uses to understand and process information. This method involves identifying specific sets of activations within an AI system that correspond to a model’s behavior. Furthermore, we can utilize these representations to control the model’s behavior.
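As a rough illustration of the reading step, the sketch below simulates layer activations for contrastive prompt pairs and recovers a concept direction as the mean of their pairwise differences (the paper also uses a PCA-based variant). All names, dimensions, and the planted direction here are hypothetical stand-ins for real model activations, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64         # hidden size (hypothetical)
n_pairs = 100  # number of contrastive prompt pairs

# Planted ground-truth "honesty" direction, used only to simulate data.
true_dir = rng.normal(size=d)
true_dir /= np.linalg.norm(true_dir)

# Simulated activations: honest prompts shift one way along the
# direction, dishonest prompts the other way, plus noise.
honest_acts = rng.normal(size=(n_pairs, d)) + 2.0 * true_dir
dishonest_acts = rng.normal(size=(n_pairs, d)) - 2.0 * true_dir

# Reading vector: mean of the pairwise activation differences.
diffs = honest_acts - dishonest_acts
reading_vec = diffs.mean(axis=0)
reading_vec /= np.linalg.norm(reading_vec)

# "Reading" a new activation is a dot product with the direction:
# positive scores indicate honest behaviour, negative indicate lying.
honest_scores = honest_acts @ reading_vec
dishonest_scores = dishonest_acts @ reading_vec
```

With real models, the two activation sets would come from running contrastive prompts (e.g. instructed-honest vs. instructed-dishonest) through the network and caching a chosen layer's hidden states.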

In the paper, we first explore the representation of honesty in the model. To do this, we ask the model to answer a question truthfully, then ask it to answer the same question with a lie. We observe the model’s state each time, and the resulting difference in activations provides an insight into when a model is being honest and when it is lying. We can even tweak the model’s internal representations so that it becomes more honest, or less honest. We show that the same principles and approach apply to other concepts, such as power-seeking and happiness, and even across a number of other domains. This is an exciting new approach to model transparency, shedding light not only on honesty but on a variety of other desirable traits.
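To show the control side in miniature, the sketch below attaches a PyTorch forward hook to a stand-in layer and shifts its output along a fixed concept direction, the same mechanism one would use on a transformer block's residual stream. The layer, the direction, and the steering strength `alpha` are all hypothetical, chosen only to make the mechanics concrete.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 16
layer = nn.Linear(d, d)  # stand-in for a transformer block

# Unit-norm concept direction (in practice, a reading vector
# extracted from contrastive activations as described above).
honesty_vec = torch.randn(d)
honesty_vec = honesty_vec / honesty_vec.norm()
alpha = 4.0  # steering strength; negate to push the trait the other way

def steer(module, inputs, output):
    # Returning a value from a forward hook replaces the layer's output,
    # here shifted along the concept direction.
    return output + alpha * honesty_vec

x = torch.randn(1, d)
baseline = layer(x)

handle = layer.register_forward_hook(steer)
steered = layer(x)
handle.remove()

shift = steered - baseline  # equals alpha * honesty_vec
```

In a real setting the hook would sit on a mid-network block during generation, so every forward pass is nudged towards (or away from) the concept.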

Representation engineering: an analogy to neuroscience

Representation engineering is akin to observing human brain activity through MRI scans, where the focus is on understanding and modifying the internal workings to achieve desired outcomes. Just as MRI scans allow us to see which parts of the brain are activated during various tasks, enabling a detailed analysis of patterns and functions, representation engineering employs a similar method to understand AI’s decision-making processes. By adjusting the internal vectors that represent information within the AI, we can directly influence its ‘thought process’, much like how understanding brain activity can lead to targeted therapies in humans. 

Future directions and conclusion

Our hope is that this work will initiate new efforts and developments towards understanding and controlling complex AI systems. Whilst our approach materially improves performance on TruthfulQA, there is still progress to be made before full transparency and control are achieved. We welcome and encourage more research in the field of representation engineering.


You can read the full paper here: https://arxiv.org/abs/2310.01405

You can find the website here: https://www.ai-transparency.org/

The Github is here: https://tinyurl.com/RepEgithub

We’ve recorded a video on RepE here: https://tinyurl.com/RepEvid



