Do LLMs "know" internally when they follow instructions?
by Juyeon Heo and 7 other authors
Abstract: Instruction-following is crucial for building AI agents with large language models (LLMs), as these models must adhere strictly to user-provided constraints and guidelines. However, LLMs often fail to follow even simple and clear instructions. To improve instruction-following behavior and prevent undesirable outputs, a deeper understanding of how LLMs' internal states relate to these outcomes is required. In this work, we investigate whether LLMs encode information in their representations that correlates with instruction-following success – a property we term knowing internally. Our analysis identifies a direction in the input embedding space, termed the instruction-following dimension, that predicts whether a response will comply with a given instruction. We find that this dimension generalizes well across unseen tasks but not across unseen instruction types. We demonstrate that modifying representations along this dimension improves instruction-following success rates compared to random changes, without compromising response quality. Further investigation reveals that this dimension is more closely related to the phrasing of prompts than to the inherent difficulty of the task or instructions. This work provides insight into the internal workings of LLMs' instruction-following, paving the way for reliable LLM agents.
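The core idea in the abstract – finding a direction in representation space that separates instruction-following successes from failures, then steering representations along it – can be illustrated with a minimal sketch. This is a hypothetical toy on synthetic vectors, not the paper's actual pipeline: the function names, the difference-of-means estimator, and the steering coefficient `alpha` are all illustrative assumptions (the paper works with real LLM input embeddings and probes).

```python
import numpy as np

def find_direction(reps_success, reps_fail):
    """Estimate a separating direction as the (unit-normalized) difference of
    class means. A hypothetical stand-in for fitting a linear probe on
    representations labeled by instruction-following success/failure."""
    direction = reps_success.mean(axis=0) - reps_fail.mean(axis=0)
    return direction / np.linalg.norm(direction)

def steer(representation, direction, alpha=1.0):
    """Shift a representation along the direction; larger alpha pushes it
    further toward the 'success' side of the separating axis."""
    return representation + alpha * direction

# Toy demo: synthetic 16-dim "representations" whose success/failure
# classes are separated along the first coordinate.
rng = np.random.default_rng(0)
d = 16
true_dir = np.zeros(d)
true_dir[0] = 1.0
reps_success = rng.normal(size=(50, d)) + 2.0 * true_dir
reps_fail = rng.normal(size=(50, d)) - 2.0 * true_dir

v = find_direction(reps_success, reps_fail)   # recovered direction
x = rng.normal(size=d)                        # some new representation
x_steered = steer(x, v, alpha=3.0)            # moved toward "success"
```

The steered representation has a strictly larger projection onto the recovered direction than the original, which is the mechanism the paper exploits to raise instruction-following success rates without degrading response quality.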
Submission history
From: Juyeon Heo
[v1] Fri, 18 Oct 2024 14:55:14 UTC (7,009 KB)
[v2] Tue, 22 Oct 2024 15:20:00 UTC (7,051 KB)
[v3] Fri, 25 Oct 2024 22:00:55 UTC (7,051 KB)
[v4] Wed, 30 Oct 2024 14:06:12 UTC (7,051 KB)
[v5] Fri, 28 Mar 2025 15:40:49 UTC (6,942 KB)