ActiveKD integrates active learning with knowledge distillation, using large vision-language models as teachers and selecting categorically diverse unlabeled samples for annotation.
Knowledge distillation (KD) is a widely used framework for training compact,
task-specific models by leveraging the knowledge of teacher models. However,
its application to active learning (AL), which aims to minimize annotation
costs through iterative sample selection, remains underexplored. This gap stems
from the fact that KD typically assumes access to sufficient labeled data,
whereas AL operates in data-scarce scenarios where task-specific teacher models
are often unavailable. In this paper, we introduce ActiveKD, a framework that
integrates AL with KD by leveraging the zero- and few-shot capabilities of
large vision-language models (VLMs). A key aspect of ActiveKD is the structured
prediction bias of VLMs — i.e., their predictions form clusters in the
probability space. We regard this structure as an inductive bias of the teacher
model, capturing generalizable output patterns beneficial to student learning.
To exploit this bias, we propose Probabilistic CoreSet (PCoreSet), a selection
strategy that maximizes coverage in the probability space rather than the
feature space. PCoreSet strategically selects categorically diverse unlabeled
samples, facilitating more efficient transfer of teacher knowledge under
limited annotation budgets. Evaluations on 11 datasets show that PCoreSet
consistently outperforms existing selection methods within the ActiveKD
framework, advancing research at the intersection of AL and KD.
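The abstract leaves the teacher side implicit, so a minimal sketch may help make it concrete. The snippet below assumes a CLIP-style VLM whose zero-shot class probabilities (softmax over image-text similarities) serve as soft targets for a compact student, mixed with cross-entropy on whatever labeled samples active learning has gathered so far. The function names, the 100.0 logit scale, and the `alpha` mixing weight are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def vlm_teacher_probs(vlm, images, text_features):
    """Zero-shot class probabilities from a CLIP-style teacher:
    softmax over scaled image-text cosine similarities (illustrative)."""
    image_features = F.normalize(vlm.encode_image(images), dim=-1)
    return (100.0 * image_features @ text_features.T).softmax(dim=-1)

def kd_loss(student_logits, teacher_probs, labels=None, alpha=0.5):
    """Distillation objective: KL divergence to the teacher distribution,
    optionally mixed with cross-entropy on the small labeled pool."""
    kd = F.kl_div(F.log_softmax(student_logits, dim=-1), teacher_probs,
                  reduction="batchmean")
    if labels is None:
        return kd
    return alpha * F.cross_entropy(student_logits, labels) + (1.0 - alpha) * kd
```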
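Likewise, "maximizing coverage in the probability space" can be read as a farthest-first (k-Center-greedy-style) selection applied to probability vectors instead of feature embeddings. The sketch below is one such reading under that assumption; `pcoreset_select`, the Euclidean distance, and the cold-start handling are hypothetical and not the authors' released implementation.

```python
import torch

def pcoreset_select(probs_unlabeled, probs_labeled, budget):
    """Farthest-first selection over probability vectors (a k-Center-greedy
    analogue in probability space, for illustration only).

    probs_unlabeled: (N, C) predicted class probabilities of the unlabeled pool
    probs_labeled:   (M, C) predicted class probabilities of the labeled set
    budget:          number of samples to query this active-learning round
    """
    n = probs_unlabeled.shape[0]
    if probs_labeled is None or probs_labeled.shape[0] == 0:
        # Cold start: nothing labeled yet, every point is maximally uncovered.
        dists = torch.full((n,), float("inf"),
                           dtype=probs_unlabeled.dtype,
                           device=probs_unlabeled.device)
    else:
        # Distance from each unlabeled point to its nearest covered point.
        dists = torch.cdist(probs_unlabeled, probs_labeled).min(dim=1).values

    selected = []
    for _ in range(budget):
        idx = int(torch.argmax(dists))                    # least-covered sample
        selected.append(idx)
        new_d = torch.cdist(probs_unlabeled,
                            probs_unlabeled[idx:idx + 1]).squeeze(1)
        dists = torch.minimum(dists, new_d)               # update coverage
        dists[idx] = -1.0                                 # never re-pick
    return selected
```

In each active-learning round, the returned indices would be sent to annotators, merged into the labeled pool, and the student re-distilled before the next selection.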