🚀 What it’s about:
As quantum machine learning (QML) evolves, hybrid quantum-classical models (HQMLs) have become central. But they are hard to interpret: they make decisions via complex transformations that span both classical and quantum domains. This paper introduces QuXAI, a framework designed to make HQMLs more interpretable and trustworthy. At its core is Q-MEDLEY, a novel explainer that attributes global feature importance while respecting the hybrid data flow from classical inputs through quantum encodings to classical learners.
🧠 Key contributions:
✅ Q-MEDLEY: An explainer that combines Drop-Column and Permutation Importance, tailored to HQMLs that use quantum feature encoding.
🧪 Full pipeline (QuXAI): Data prep → HQML model training → explanation → visualization — all adapted to quantum settings.
📊 Visual Explanations: Clear bar charts of feature importance help researchers see which inputs matter.
🔍 Evaluated against classical ground truths using interpretable models (e.g., decision trees) to validate explanation fidelity.
🧪 Ablation studies confirm that interaction-aware and adaptive components boost Q-MEDLEY’s performance.
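To make the Q-MEDLEY idea concrete, here is a minimal, hypothetical sketch of how Drop-Column and Permutation Importance can be blended into a single global score per feature. This is an illustration of the general technique, not the paper's actual Q-MEDLEY implementation; the function name, the 50/50 averaging, and the use of a plain decision tree are my own assumptions.

```python
# Illustrative sketch only: averaging drop-column importance and
# permutation importance into one global score per feature.
# NOT the paper's Q-MEDLEY implementation.
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def hybrid_importance(model, X_tr, y_tr, X_va, y_va, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    fitted = clone(model).fit(X_tr, y_tr)
    base = fitted.score(X_va, y_va)  # baseline validation accuracy
    scores = []
    for j in range(X_tr.shape[1]):
        # Drop-column: retrain without feature j, measure the accuracy drop.
        m = clone(model).fit(np.delete(X_tr, j, axis=1), y_tr)
        drop_imp = base - m.score(np.delete(X_va, j, axis=1), y_va)
        # Permutation: shuffle feature j in the validation set, measure the drop.
        perm_drops = []
        for _ in range(n_repeats):
            X_p = X_va.copy()
            X_p[:, j] = rng.permutation(X_p[:, j])
            perm_drops.append(base - fitted.score(X_p, y_va))
        # Blend the two signals (equal weights, an arbitrary choice here).
        scores.append(0.5 * drop_imp + 0.5 * float(np.mean(perm_drops)))
    return np.array(scores)

# Toy demo: only feature 0 determines the label.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))
y = (X[:, 0] > 0).astype(int)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)
imp = hybrid_importance(DecisionTreeClassifier(random_state=0),
                        X_tr, y_tr, X_va, y_va)
```

On this toy data the blended score for feature 0 dominates the two noise features, which is the sanity check the paper's classical ground-truth evaluation performs with interpretable models like decision trees.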
📌 Why it matters:
HQMLs are promising but opaque. QuXAI is a critical step toward trustworthy, interpretable, and safe quantum AI. Understanding which classical features drive decisions after quantum transformation is key for debugging, trust, and scientific insight.