The recent surge in popularity of the open source AI platform DeepSeek has raised significant security concerns among organizations. The platform’s best-in-class performance and lower development costs have driven widespread adoption, but security experts warn that the risks outweigh the rewards.
DeepSeek’s R1 model has been adopted by millions of users worldwide, including many organizations, forcing security professionals to grapple with questions about its security and privacy settings. Early research has uncovered data exfiltration to entities backed by the government of the People’s Republic of China, prompting US government officials and company executives to restrict its use.
Security experts identify four main risks associated with DeepSeek: platform vulnerabilities, neural network biases, input data risks, and output data risks. Platform vulnerabilities can open the door to unauthorized access, malware, denial-of-service, and ransomware attacks. Neural network biases can perpetuate human biases, leading to inaccurate or even dangerous conclusions. Input data risks center on sensitive information being exfiltrated from organizations through the queries and prompts users submit to the model.
Output data risks are also a concern, because malicious actors can jailbreak the model and disable its safety controls. Researchers have already circumvented DeepSeek’s safeguards, using carefully worded prompts to bypass the internal controls designed to prevent the model from providing instructions for creating dangerous devices.
To mitigate these risks, organizations must carefully consider and address each of these areas. This includes implementing robust security measures, monitoring for potential vulnerabilities, and educating users about safe usage practices.
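As one illustration of what an input-side control might look like, the minimal sketch below redacts obviously sensitive strings (email addresses, API keys, and similar identifiers) from a prompt before it would be forwarded to any third-party AI endpoint, and reports which categories fired so the event can be logged for monitoring. The patterns, function names, and sample prompt are illustrative assumptions for this article, not part of DeepSeek’s API or any specific product.

```python
import re

# A minimal sketch of an input-side guardrail: redact obviously sensitive
# strings from a prompt before it is sent to any third-party AI endpoint.
# The patterns below are illustrative assumptions, not an exhaustive or
# product-specific list.
REDACTION_PATTERNS = {
    "email":      re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key":    re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "ssn":        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ip_address": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace matches of each pattern with a placeholder tag and
    return the cleaned prompt plus the list of categories that fired."""
    findings = []
    for label, pattern in REDACTION_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, findings

if __name__ == "__main__":
    raw = ("Summarize this ticket from jane.doe@example.com, "
           "API key sk-abcdef1234567890abcd, host 10.0.0.12")
    cleaned, hits = redact_prompt(raw)
    print(cleaned)  # prompt with placeholders, safer to forward
    print(hits)     # ['email', 'api_key', 'ip_address'] -> log for monitoring
```

A pattern-based filter like this is only a first line of defense; organizations would typically pair it with network-level egress controls and user education about what should never be pasted into a public AI service.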
Ultimately, the benefits of generative AI like DeepSeek must be weighed against the security concerns. As one expert notes, “The risks — categorized as platform, brain, input data, and output data risks — must be carefully considered, and ultimately mitigated, in order to enjoy the many benefits of generative AI in a manner that is safe and secure for all organizations and users alike.”
Source: https://www.darkreading.com/vulnerabilities-threats/security-threats-open-source-ai-deepseek