A massive data leak at AI startup DeepSeek has exposed more than just chat logs and secret keys — it’s pulled the curtain back on a growing risk for South African companies using generative AI tools without clear policies or safeguards in place.
The breach, which involved an unsecured ClickHouse database spilling over a million rows of sensitive backend data, highlights a hard truth: AI systems are only as secure as the teams that deploy them — and right now, legal oversight is struggling to keep up.
Global breach, local impact
AI innovation is outpacing legislation around the world. South Africa may not yet have AI-specific regulations, but local companies are still bound by the Protection of Personal Information Act (POPIA). And if your team is feeding sensitive business, customer, or employee information into tools like ChatGPT, you might already be skating on thin ice.
International watchdogs have responded fast. Irish and Italian regulators have launched formal investigations into DeepSeek’s failure to secure user data — and these aren’t toothless threats. Global precedent shows that non-compliance with data laws, even by third-party tools, can trigger fines and reputational damage.
POPIA and the AI grey zone
Here’s the crux: POPIA doesn’t specifically name AI, but its rules still apply. If an employee pastes personal data into a chatbot — intentionally or not — it could count as a data breach under local law. And because many generative AI tools store, index, or even use inputs to train future models, that info may never truly be private again.
South African businesses urgently need to close this regulatory blind spot. As employees increasingly rely on AI to generate reports, handle queries, or brainstorm ideas, organisations must take proactive steps to protect sensitive information.
What companies should be doing right now
Legal experts from Cliffe Dekker Hofmeyr suggest four key moves to stay compliant and secure:
Draft a dedicated AI usage policy: This should cover which tools are allowed, when data can be shared, and how consent and privacy are handled.
Train your teams continuously: Keep everyone — from interns to execs — updated on the risks of AI and what responsible use looks like.
Have an incident response plan: Know what to do if there’s a leak, and ensure that breaches are reported and addressed quickly.
Audit your AI footprint: Monitor which tools are being used, how, and by whom — and shut down shadow AI use before it becomes a problem.
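For teams wanting to make that last audit step concrete, the sketch below shows one possible starting point: scanning an exported web-proxy or DNS log for known generative-AI domains and tallying hits per user. It is a minimal illustration, not a prescribed setup; the file name, column layout and domain watchlist are assumptions you would adapt to whatever logging your environment already produces.

```python
# Minimal "shadow AI" audit sketch. Assumes proxy or DNS logs can be
# exported as a CSV with at least "user" and "domain" columns; the file
# name and the domain watchlist below are illustrative assumptions.
import csv
from collections import Counter

# Hypothetical watchlist of generative-AI service domains to flag.
AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
    "chat.deepseek.com",
}

def audit_ai_usage(log_path: str) -> Counter:
    """Count requests to watchlisted AI domains, grouped by (user, domain)."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row.get("domain", "").strip().lower()
            if domain in AI_DOMAINS:
                hits[(row.get("user", "unknown"), domain)] += 1
    return hits

if __name__ == "__main__":
    # Assumed export path; point this at your own proxy/DNS log export.
    for (user, domain), count in audit_ai_usage("proxy_log.csv").most_common(10):
        print(f"{user:<20} {domain:<25} {count} requests")
```

The output is only a signal of where unsanctioned tools are in use; it is meant to feed the policy, training and incident-response steps above rather than replace them.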
Employees aren’t off the hook either
Workers should be clear on what’s allowed and what’s not when it comes to AI. That means:
Only using approved tools
Never entering confidential or client information into public AI platforms
Getting management approval before integrating new tools into workflows
Reporting any suspicious behaviour or vulnerabilities immediately
The bottom line
The DeepSeek breach is a cautionary tale. AI isn’t inherently dangerous — but the way we use it can be. If businesses want to unlock AI’s potential without breaking the law or their customers’ trust, governance and guardrails need to catch up. Fast.
POPIA may not have been written with AI in mind, but it still applies. In today’s digital workplace, treating AI with the same level of scrutiny as any cloud service or software platform is not just smart — it’s essential.