LLM Safety Alignment is Divergence Estimation in Disguise
Rajdeep Haldar and 4 other authors
Abstract: We present a theoretical framework showing that popular LLM alignment methods, including RLHF and its variants, can be understood as divergence estimators between aligned (safe or preferred) and unaligned (harmful or less preferred) distributions. This perspective explains the emergence of separation in the latent space between safe and harmful prompts after alignment. As an application of our general divergence framework, we propose KLDO, a novel KL divergence-based alignment method, and empirically validate its effectiveness. We further show that using compliance-refusal datasets, rather than standard preference-based datasets, leads to stronger separation and improved safety alignment. Finally, to quantify the separation effect, we propose a distance-based metric in the prompt representation space, which also acts as a statistically significant indicator for model safety.
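To make the latent-space separation claim concrete, below is a minimal sketch of one possible distance-based separation score between safe and harmful prompt embeddings. The centroid-distance statistic, the spread normalization, and the function name `separation_score` are illustrative assumptions, not the metric defined in the paper.

```python
import numpy as np

def separation_score(safe_embs: np.ndarray, harmful_embs: np.ndarray) -> float:
    """Illustrative distance-based separation score between safe and harmful
    prompt representations (assumed statistic, not the paper's exact metric).

    safe_embs, harmful_embs: arrays of shape (n_prompts, hidden_dim) holding
    prompt embeddings extracted from the model under evaluation.
    """
    # Class centroids in the representation space.
    mu_safe = safe_embs.mean(axis=0)
    mu_harm = harmful_embs.mean(axis=0)
    # Pooled within-class spread, used to normalize the centroid distance.
    spread = 0.5 * (np.linalg.norm(safe_embs.std(axis=0))
                    + np.linalg.norm(harmful_embs.std(axis=0)))
    # Larger values indicate stronger separation between the two prompt classes.
    return float(np.linalg.norm(mu_safe - mu_harm) / (spread + 1e-8))
```

In this reading, a model aligned with a divergence-based objective should yield a higher score on held-out safe versus harmful prompts than its unaligned counterpart.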
Submission history
From: Rajdeep Haldar
[v1] Sun, 2 Feb 2025 04:09:42 UTC (2,578 KB)
[v2] Sun, 1 Jun 2025 21:50:53 UTC (3,669 KB)