Neural Honeytrace: A Robust Plug-and-Play Watermarking Framework against Model Extraction Attacks
by Yixiao Xu and 6 other authors
Abstract: Developing high-performance deep learning models is resource-intensive, leading model owners to utilize Machine Learning as a Service (MLaaS) platforms instead of publicly releasing their models. However, malicious users may exploit query interfaces to execute model extraction attacks, reconstructing the target model’s functionality locally. While prior research has investigated triggerable watermarking techniques for asserting ownership, existing methods face significant challenges: (1) most approaches require additional training, resulting in high overhead and limited flexibility, and (2) they often fail to account for advanced attackers, leaving them vulnerable to adaptive attacks.
In this paper, we propose Neural Honeytrace, a robust plug-and-play watermarking framework against model extraction attacks. We first formulate a watermark transmission model from an information-theoretic perspective, providing an interpretable account of the principles and limitations of existing triggerable watermarking methods. Guided by this model, we further introduce: (1) a similarity-based, training-free watermarking method for plug-and-play, flexible watermarking, and (2) a distribution-based multi-step watermark information transmission strategy for robust watermarking. Comprehensive experiments on four datasets demonstrate that Neural Honeytrace outperforms previous methods in efficiency and resistance to adaptive attacks. Neural Honeytrace reduces the average number of samples required for a worst-case t-Test-based copyright claim from 193,252 to 1,857, with zero training cost. The code is available at this https URL.
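As a rough illustration of the t-Test-based copyright claim mentioned above, the sketch below queries a suspect model on watermark-trigger samples and on clean samples, then tests whether the trigger responses put significantly more probability mass on the watermark target class. This is a minimal sketch under stated assumptions, not the paper's implementation: model, watermark_evidence, trigger_samples, and target_class are hypothetical placeholders, and the test is a standard one-sided Welch t-test from SciPy.

    # Minimal sketch of a t-test-based copyright claim (hypothetical
    # helper names; not the authors' implementation). Intuition: if a
    # suspect model was extracted from a watermarked victim, its outputs
    # on watermark-trigger samples should assign systematically more
    # probability to the watermark target class than on clean samples.
    import numpy as np
    from scipy import stats

    def watermark_evidence(model, samples, target_class):
        """Probability mass the suspect model puts on the watermark
        target class for each sample. model(x) is assumed to return a
        softmax probability vector (hypothetical interface)."""
        return np.array([model(x)[target_class] for x in samples])

    def copyright_claim(model, trigger_samples, clean_samples,
                        target_class, alpha=0.01):
        """One-sided Welch t-test: is the mean watermark evidence on
        trigger samples significantly higher than on clean samples?"""
        e_trig = watermark_evidence(model, trigger_samples, target_class)
        e_clean = watermark_evidence(model, clean_samples, target_class)
        t, p = stats.ttest_ind(e_trig, e_clean,
                               equal_var=False, alternative="greater")
        return p < alpha, t, p

Under this framing, the sample counts quoted in the abstract (193,252 vs. 1,857) correspond to how many queries are needed before the test reaches significance in the worst case.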
Submission history
From: Yixiao Xu
[v1] Thu, 16 Jan 2025 06:59:20 UTC (4,198 KB)
[v2] Fri, 17 Jan 2025 06:50:23 UTC (4,198 KB)
[v3] Wed, 4 Jun 2025 02:14:47 UTC (5,463 KB)