Chinese AI Model DeepSeek Fails Multiple Security Tests

A recent security assessment of the Chinese generative AI model DeepSeek has revealed alarming results. Cybersecurity experts warn that the model's high failure rates in jailbreak and malware-generation tests pose serious risks for users.

According to AppSOC, a Silicon Valley-based security provider, DeepSeek failed multiple security benchmarks, including tests for injection attacks. The results show the model is vulnerable to exploitation, potentially allowing attackers to access sensitive information.

David Reid, a cybersecurity expert at Cedarville University, described the findings as “alarming” and urged consumers to be cautious about using such AI models. He warned that cheaper alternatives often come with weaker security.

Anjana Susarla, an expert in responsible AI at Michigan State University, also raised concerns about DeepSeek's limitations. She questioned whether organizations can trust the model with sensitive information and expressed skepticism about its suitability for chatbots or other customer-facing applications.

AppSOC assigned DeepSeek a risk score of 8.3 out of 10 and recommended against its use in enterprise settings, particularly those involving sensitive data or intellectual property. The results underscore the need for careful security evaluation before adopting AI models like DeepSeek.

Source: https://keprtv.com/news/nation-world/deepseek-fails-multiple-security-tests-experts-warn-businesses-ai-china-security-chinese-generative-artificial-intelligence-consumers-code-language-chatbots-risk-sensitive-data-intellectual-property