Center for AI Safety

Representation Engineering: a New Way of Understanding Models

By Advanced AI Editor | April 2, 2025


Reading the minds of LLMs

Interpreting and controlling models has long been a significant challenge. Our research, ‘Representation Engineering: A Top-Down Approach to AI Transparency’, explores a new way of understanding traits like honesty, power seeking, and morality in LLMs. We show that these traits can be identified live at the point of output, and that they can also be controlled. This method differs from mechanistic approaches, which focus on bottom-up interpretation of node-to-node connections; representation engineering instead looks at larger chunks of representations and higher-level mechanisms to understand models. Overall, we believe this ‘top-down’ method makes exciting progress towards model transparency, paving the way for further research into understanding and controlling AI.

Why understanding and controlling AI is important

Transparency and honesty are important features of models: as AI systems become more powerful, capable, and autonomous, it is increasingly important that they are honest and predictable. If we cannot understand and control models, AI would have the capacity to lie, seek power, and ultimately subvert human goals. As AI advances, this would present a substantial risk to society. To fully reap the benefits of AI, we need to offset these risks with a better understanding of LLMs.

The method of representation engineering: an overview

Representation engineering is an approach to enhancing our understanding and control of AI by observing and manipulating the internal representations (weights or activations) that a model uses to process information. The method involves identifying specific sets of activations within an AI system that correspond to a given behavior; we can then manipulate those same representations to control the model’s behavior.

In the paper, we first explore the representation of honesty in the model. To do this, we ask the model to answer a question truthfully, then ask it to answer the same question with a lie. We observe the model’s internal state each time, and the resulting difference in the activations provides insight into when the model is being honest and when it is lying. We can even tweak the model’s internal representations to make it more, or less, honest. We show that the same principles and approach apply to other concepts, such as power seeking and happiness, and across a number of other domains. This is an exciting new approach to model transparency, shedding light not only on honesty but on a variety of other desirable traits. The sketch below illustrates the basic recipe.
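To make this concrete, here is a minimal sketch of the contrast-and-steer recipe using a small Hugging Face causal LM. The model name, layer index, steering strength, and prompt wording are illustrative assumptions, not the released implementation (see the GitHub link at the end of this post):

```python
# Minimal sketch of the contrastive reading/steering idea described above.
# NOT the authors' released code; model, layer, prompts, and alpha are
# illustrative stand-ins for the general recipe:
#   (1) collect activations under "truthful" vs. "lying" instructions,
#   (2) take their difference as a reading/control direction,
#   (3) add a scaled copy of that direction back in to steer generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # stand-in; the paper works with larger chat models
LAYER = 6       # which hidden layer to read/steer (a hyperparameter)

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def last_token_hidden(prompt: str) -> torch.Tensor:
    """Hidden state of the final token at LAYER for a given prompt."""
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[LAYER][0, -1]  # shape: (hidden_dim,)

question = "Is the sky blue?"
honest = last_token_hidden(f"Answer truthfully: {question}")
lying = last_token_hidden(f"Answer with a lie: {question}")

# The difference between the two activation vectors serves as an
# "honesty direction" in the model's representation space.
direction = honest - lying
direction = direction / direction.norm()

# Reading: project a fresh activation onto the direction to score honesty.
score = last_token_hidden(f"Answer: {question}") @ direction
print(f"honesty score: {score.item():.3f}")

# Control: add the scaled direction into the block's output during generation.
def steer(module, args, output, alpha=4.0):
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + alpha * direction.to(hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.transformer.h[LAYER].register_forward_hook(steer)
inputs = tok(question, return_tensors="pt")
print(tok.decode(model.generate(**inputs, max_new_tokens=20)[0]))
handle.remove()  # restore the unmodified model
```

In practice the direction is extracted from many contrastive prompt pairs rather than a single question; the single-pair version above only illustrates the shape of the method.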

Representation engineering: an analogy to neuroscience

Representation engineering is akin to observing human brain activity through MRI scans: the focus is on understanding, and modifying, the internal workings of a system to achieve desired outcomes. Just as MRI scans let us see which parts of the brain are activated during various tasks, enabling detailed analysis of patterns and functions, representation engineering uses a similar lens to understand an AI’s decision-making. By adjusting the internal vectors that represent information within the AI, we can directly influence its ‘thought process’, much as understanding brain activity can lead to targeted therapies in humans.

Future directions and conclusion

Our hope is that this work will initiate new efforts towards understanding and controlling complex AI systems. Whilst our approach materially improves performance on TruthfulQA, there is still progress to be made before full transparency and control are achieved. We welcome and encourage more research in the field of representation engineering.

You can read the full paper here: https://arxiv.org/abs/2310.01405

You can find the website here: https://www.ai-transparency.org/

The GitHub repository is here: https://tinyurl.com/RepEgithub

We’ve recorded a video on RepE here: https://tinyurl.com/RepEvid
