What it's about:
As quantum machine learning (QML) evolves, hybrid quantum-classical models (HQMLs) have become central. But they're hard to interpret: they make decisions via complex transformations that span both classical and quantum domains. This paper introduces QuXAI, a framework designed to make HQMLs more interpretable and trustworthy. At its core is Q-MEDLEY, a novel explainer that attributes global feature importance while respecting the hybrid data flow from classical inputs through quantum encodings to classical learners.
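To make that hybrid data flow concrete, here is a minimal, hypothetical sketch (the names HybridModel and angle_encode are illustrative, not the paper's API, and a simple cosine/sine angle encoding stands in for a real quantum feature map):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def angle_encode(X):
    """Toy stand-in for a quantum feature map: treat each classical feature
    as a rotation angle and return expectation-value-like features
    cos(x) and sin(x)."""
    return np.hstack([np.cos(X), np.sin(X)])

class HybridModel:
    """Minimal hybrid pipeline: classical inputs -> quantum-style encoding
    -> classical learner."""
    def __init__(self):
        self.clf = RandomForestClassifier(n_estimators=100, random_state=0)

    def fit(self, X, y):
        self.clf.fit(angle_encode(X), y)
        return self

    def predict(self, X):
        return self.clf.predict(angle_encode(X))

    def score(self, X, y):
        return self.clf.score(angle_encode(X), y)
```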
Key contributions:
- Q-MEDLEY: An explainer combining Drop-Column and Permutation Importance, tailored to HQMLs that use quantum feature encoding (see the sketch after this list).
- Full pipeline (QuXAI): data preparation → HQML model training → explanation → visualization, all adapted to quantum settings.
- Visual explanations: clear bar-chart visualizations of feature importance help researchers see which inputs matter (illustrated below).
- Validation: explanations are evaluated against classical ground truths from interpretable models (e.g., decision trees) to check explanation fidelity.
- Ablation studies confirm that Q-MEDLEY's interaction-aware and adaptive components boost its performance.
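A rough sketch of the drop-column + permutation combination mentioned above (not the paper's exact Q-MEDLEY aggregation: the alpha blend and all function names are assumptions, and the interaction-aware and adaptive components from the ablations are omitted):

```python
import numpy as np

def drop_column_importance(model_factory, X, y, Xv, yv):
    """Importance of feature j = drop in validation score when the model is
    retrained without column j."""
    base = model_factory().fit(X, y).score(Xv, yv)
    scores = []
    for j in range(X.shape[1]):
        retrained = model_factory().fit(np.delete(X, j, axis=1), y)
        scores.append(base - retrained.score(np.delete(Xv, j, axis=1), yv))
    return np.array(scores)

def permutation_importance(model, Xv, yv, n_repeats=10, seed=0):
    """Importance of feature j = average drop in score when column j is
    shuffled, breaking its link to the target without retraining."""
    rng = np.random.default_rng(seed)
    base = model.score(Xv, yv)
    scores = np.zeros(Xv.shape[1])
    for j in range(Xv.shape[1]):
        for _ in range(n_repeats):
            Xp = Xv.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            scores[j] += base - model.score(Xp, yv)
    return scores / n_repeats

def combined_importance(model_factory, X, y, Xv, yv, alpha=0.5):
    """Blend both signals into one global attribution per classical feature."""
    model = model_factory().fit(X, y)
    dci = drop_column_importance(model_factory, X, y, Xv, yv)
    pi = permutation_importance(model, Xv, yv)
    return alpha * dci + (1 - alpha) * pi
```

Because the hybrid model encodes internally, both methods attribute importance to the original classical features, even though the learner only ever sees the quantum-encoded representation.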
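And a small, illustrative usage example for the bar-chart view and the decision-tree comparison (toy data and plotting choices are mine, not the paper's):

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy data: 5 classical features, 3 of them informative.
X, y = make_classification(n_samples=400, n_features=5, n_informative=3, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

names = [f"x{j}" for j in range(X.shape[1])]
hybrid_imp = combined_importance(HybridModel, X_tr, y_tr, X_val, y_val)
tree_imp = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr).feature_importances_

# Side-by-side bar charts: hybrid-model attributions vs. an interpretable reference.
fig, axes = plt.subplots(1, 2, figsize=(9, 3), sharey=True)
axes[0].barh(names, hybrid_imp)
axes[0].set_title("Hybrid model (combined importance)")
axes[1].barh(names, tree_imp)
axes[1].set_title("Decision tree (classical reference)")
fig.tight_layout()
plt.show()
```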
Why it matters:
HQMLs are promising but opaque. QuXAI is a critical step toward trustworthy, interpretable, and safe quantum AI. Understanding which classical features drive decisions after the quantum transformation is key for debugging, trust, and scientific insight.