Artificial Intelligence & Machine Learning
,
Next-Generation Technologies & Secure Development
Splx Says Hardened Prompts Lower Hallucinations But Security Gaps Persist

DeepSeek is touting its newest model as its entry into the “agent era” and performance benchmarks show a notable leap in capabilities. Security testing shows progress and persistent vulnerabilities in the Chinese company’s upgraded V3.1 model.
See Also: AI Agents Demand Scalable Identity Security Frameworks
The model’s performance benchmarks show a notable leap over prior versions DeepSeek-V3-0324 and DeepSeek-R1-0528. On SWE-bench Verified, a test of software bug-fixing, DeepSeek-V3.1 scored 66, compared with mid-40s for prior models. On SWE-bench Multilingual, which measures bug-fixing across multiple languages, it reached 54.5, nearly doubling earlier results. And on Terminal-Bench, which evaluates command-line reasoning, V3.1 hit 31.3, up from low double-digit scores in previous releases.
Security company Splx ran the model through its AI red-teaming framework to test how those gains translate into security and reliability.
The evaluation used three tiers of system prompts: no system prompt, a basic system prompt designed to mirror guardrails commonly used in enterprise environments and Splx’s hardened prompt, which applies iterative strengthening based on past adversarial findings. It executed more than 3,000 attack scenarios across categories including security, safety, trustworthiness and business alignment.
Without a system prompt, the model scored about 50 in security and 12 in safety. A basic prompt designed to reflect typical enterprise guardrails pushed safety above 90 and business alignment close to 58, though security dipped to around 41 under the broader testing. With Splx’s hardened prompt was applied, security rose to more than 72, safety nearly reached 99, hallucinations eliminated and business alignment improved to about 85 – but the high score still leaves room for adversarial threats, especially in industries where tolerance for risk is minimal, the company said.
Splx tests for how resistant the model is to manipulation through jailbreaks or unauthorized access, and safety reflects how well it avoids generating harmful, offensive or illegal content. When testing the raw model, Splx found that V3.1 produced a phishing-style message disguised as an IT request, asking a user to forward personal emails. Splx flagged the risk of such outputs being exploited to trick employees into leaking sensitive data.
The raw model also showed problematic behavior in other areas, such as generating profanity in response to prompts, which Splx said could create problems for enterprises using AI in customer-facing roles.
Red-teamers were also able to trigger jailbreaks that coaxed the model into describing hazardous instructions. “A jailbreak can turn enterprise AI into a liability – leaking data, breaking rules, or generating harmful outputs.”