Harmonic Security identifies widespread, unsanctioned use of high-risk GenAI tools including DeepSeek, Moonshot Kimi, Manus, Baidu Chat, and Qwen
LONDON & SAN FRANCISCO, July 17, 2025–(BUSINESS WIRE)–Harmonic Security has today released new research revealing widespread use of Chinese-developed generative AI (GenAI) applications within the workplace. The behavioral analysis, conducted over 30 days across a sample of approximately 14,000 end users in the United States and United Kingdom, finds that 7.95% of employees, nearly one in 12, used at least one Chinese GenAI tool.
Among the 1,059 users who engaged with Chinese GenAI tools, Harmonic Security detected 535 incidents of sensitive data exposure. The majority of exposure occurred via DeepSeek, which accounted for roughly 85% of the total, followed by Moonshot Kimi, Qwen, Baidu Chat, and Manus.
Code and development artifacts, including proprietary code, access keys, and internal logic, represented the largest category of exposed data at 32.8% of the total. This was followed by mergers and acquisitions data (18.2%), personally identifiable information (PII) (17.8%), financial information (14.4%), customer data (12.0%), and legal documents (4.9%). Engineering-heavy organizations were found to be particularly exposed, as developers increasingly turn to GenAI for coding assistance, potentially without realizing the implications of submitting internal source code, API keys, or system architecture to foreign-hosted models.
Alastair Paterson, CEO and co-founder of Harmonic Security, comments: “All data submitted to these platforms should be considered property of the Chinese Communist Party given a total lack of transparency around data retention, input reuse, and model training policies, exposing organizations to potentially serious legal and compliance liabilities. But these apps are extremely powerful, with many outperforming their US counterparts depending on the task. This is why employees will continue to use them, but they’re effectively blind spots for most enterprise security teams.”
Paterson continues: “Blocking alone is rarely effective and often misaligned with business priorities. Even in companies willing to take a hardline stance, users frequently circumvent controls. A more effective approach is to focus on education and train employees on the risks of using unsanctioned GenAI tools, especially Chinese-hosted platforms. We also recommend providing alternatives via approved GenAI tools that meet developer and business needs. Finally, enforce policies that prevent sensitive data, particularly source code, from being uploaded to unauthorized apps. Organizations that avoid blanket blocking and instead implement light-touch guardrails and nudges see up to a 72% reduction in sensitive data exposure, while increasing AI adoption by as much as 300%.”