Prioritizing feature consistency in sparse autoencoders strengthens mechanistic interpretability of neural networks by ensuring that learned features are reliable and reproducible across training runs.
Sparse Autoencoders (SAEs) are a prominent tool in mechanistic
interpretability (MI) for decomposing neural network activations into
interpretable features. However, the aspiration to identify a canonical set of
features is challenged by the observed inconsistency of learned SAE features
across different training runs, undermining the reliability and efficiency of
MI research. This position paper argues that mechanistic interpretability
should prioritize feature consistency in SAEs — the reliable convergence to
equivalent feature sets across independent runs. We propose using the Pairwise
Dictionary Mean Correlation Coefficient (PW-MCC) as a practical metric to
operationalize consistency and demonstrate that high consistency is achievable
(a PW-MCC of 0.80 for TopK SAEs trained on LLM activations) with appropriate
architectural choices.
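To make the metric concrete, the following is a minimal sketch of how such a pairwise consistency score could be computed, assuming PW-MCC follows the standard mean-correlation-coefficient construction from dictionary learning: absolute cosine similarity between unit-normalized decoder directions of two runs, a one-to-one match via the Hungarian algorithm, and an average over all unordered pairs of runs. The function names and matching details are illustrative, not the paper's exact implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from itertools import combinations


def mcc(dict_a: np.ndarray, dict_b: np.ndarray) -> float:
    """Mean correlation coefficient between two feature dictionaries.

    dict_a, dict_b: (d_model, n_features) matrices whose columns are
    decoder feature directions. Features are matched one-to-one by
    maximizing total absolute cosine similarity (Hungarian algorithm),
    then the matched similarities are averaged.
    """
    # Normalize each feature direction to unit norm.
    a = dict_a / np.linalg.norm(dict_a, axis=0, keepdims=True)
    b = dict_b / np.linalg.norm(dict_b, axis=0, keepdims=True)
    sim = np.abs(a.T @ b)                     # (n_a, n_b) cosine similarities
    rows, cols = linear_sum_assignment(-sim)  # maximize total matched similarity
    return float(sim[rows, cols].mean())


def pw_mcc(dictionaries: list[np.ndarray]) -> float:
    """Average MCC over all unordered pairs of independently trained runs."""
    return float(np.mean([mcc(a, b) for a, b in combinations(dictionaries, 2)]))
```

Under these assumptions, training several SAEs from different random seeds and calling pw_mcc on their decoder matrices yields a single consistency score, with values near 1.0 indicating that the runs recover essentially the same feature set.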
Our contributions include detailing the benefits of prioritizing consistency;
providing theoretical grounding and synthetic validation using a model
organism, which verifies PW-MCC as a reliable proxy for ground-truth recovery;
and extending these findings to real-world LLM data, where high feature
consistency strongly correlates with the semantic similarity of learned feature
explanations. We call for a community-wide shift towards systematically
measuring feature consistency to foster robust cumulative progress in MI.