The Complexity of Learning Sparse Superposed Features with Feedback
Akash Kumar
Abstract: The success of deep networks is crucially attributed to their ability to capture latent features within a representation space. In this work, we investigate whether the underlying learned features of a model can be efficiently retrieved through feedback from an agent, such as a large language model (LLM), in the form of relative triplet comparisons. These features may represent various constructs, including dictionaries in LLMs or a covariance matrix of Mahalanobis distances. We analyze the feedback complexity associated with learning a feature matrix in sparse settings. Our results establish tight bounds when the agent is permitted to construct activations and demonstrate strong upper bounds in sparse scenarios when the agent's feedback is limited to distributional information. We validate our theoretical findings through experiments on two distinct applications: feature recovery from Recursive Feature Machines and dictionary extraction from sparse autoencoders trained on Large Language Models.
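To make the triplet-feedback setting concrete, below is a minimal illustrative sketch, not the paper's algorithm or its complexity bounds. It assumes the feature matrix is a Mahalanobis matrix M with distance d_M(a, b) = (a - b)^T M (a - b), and that the agent answers queries of the form "is x closer to y than to z?". The function names, hinge-margin update, and hyperparameters are assumptions made for illustration only.

```python
# Illustrative sketch: recovering a Mahalanobis feature matrix from an
# agent's relative triplet comparisons (hypothetical names/hyperparameters).
import numpy as np

def triplet_feedback(M, x, y, z):
    """Agent's answer: True if x is closer to y than to z under M."""
    dy = (x - y) @ M @ (x - y)
    dz = (x - z) @ M @ (x - z)
    return dy < dz

def project_psd(M):
    """Project a symmetric matrix onto the PSD cone via eigenvalue clipping."""
    vals, vecs = np.linalg.eigh((M + M.T) / 2)
    return (vecs * np.clip(vals, 0, None)) @ vecs.T

def estimate_feature_matrix(true_M, dim, n_triplets=5000, lr=0.01, seed=0):
    """Hinge-style updates on random triplets; illustrative only."""
    rng = np.random.default_rng(seed)
    M_hat = np.eye(dim)
    for _ in range(n_triplets):
        x, y, z = rng.standard_normal((3, dim))
        closer_to_y = triplet_feedback(true_M, x, y, z)
        dy = (x - y) @ M_hat @ (x - y)
        dz = (x - z) @ M_hat @ (x - z)
        # Push the estimate to agree with the agent's comparison by a margin.
        margin = (dz - dy) if closer_to_y else (dy - dz)
        if margin < 1.0:
            step = np.outer(x - z, x - z) - np.outer(x - y, x - y)
            M_hat += lr * (step if closer_to_y else -step)
            M_hat = project_psd(M_hat)
    return M_hat

if __name__ == "__main__":
    d = 8
    A = np.random.default_rng(1).standard_normal((d, 2))
    M_true = A @ A.T  # low-rank ground-truth feature matrix
    M_est = estimate_feature_matrix(M_true, d)
    err = np.linalg.norm(M_est / np.linalg.norm(M_est)
                         - M_true / np.linalg.norm(M_true))
    print("relative error:", err)
```

In this toy setup the queries are drawn at random; the paper's feedback-complexity analysis instead concerns how few comparisons suffice when the agent can construct activations or only has distributional access.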
Submission history
From: Akash Kumar
[v1] Sat, 8 Feb 2025 01:54:23 UTC (9,229 KB)
[v2] Tue, 11 Feb 2025 06:57:41 UTC (9,229 KB)
[v3] Thu, 5 Jun 2025 18:58:48 UTC (16,861 KB)