Cybersecurity researchers have discovered a way to use large language models (LLMs) to generate new variants of malicious JavaScript code at scale. By rewriting or obfuscating existing malware, LLMs produce code that is significantly harder for detection systems to flag.
While LLM providers enforce safety guardrails, threat actors have found ways around these restrictions, using tools like WormGPT to automate the creation of convincing phishing emails and novel malware. Researchers from Palo Alto Networks Unit 42 found that iteratively rewriting existing malware samples can create over 10,000 new JavaScript variants without altering their functionality.
The adversarial machine learning technique transforms malware using several methods, including renaming variables, splitting strings, and removing unnecessary whitespace. Each pass yields a new variant that behaves identically to the original script but receives a lower malicious-classification score.
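To make the three rewriting steps concrete, here is a minimal sketch that applies simplified versions of them to a harmless JavaScript snippet. The snippet, the identifier mapping, and the regex-based transforms are illustrative assumptions, not Unit 42's actual pipeline, which drives the rewriting with an LLM rather than fixed rules.

```python
import re

# Harmless sample to transform (hypothetical, for illustration only).
SOURCE = 'var greeting  =  "hello world";\nconsole.log( greeting );'

def rename_variables(js: str, mapping: dict) -> str:
    # Replace each identifier with an opaque name (whole-word matches only).
    for old, new in mapping.items():
        js = re.sub(rf"\b{re.escape(old)}\b", new, js)
    return js

def split_strings(js: str) -> str:
    # Break each double-quoted literal into two concatenated halves.
    def splitter(m):
        s = m.group(1)
        mid = len(s) // 2
        return f'"{s[:mid]}" + "{s[mid:]}"'
    return re.sub(r'"([^"]{2,})"', splitter, js)

def collapse_whitespace(js: str) -> str:
    # Collapse whitespace runs outside string literals (naive splitter).
    parts = re.split(r'("(?:[^"\\]|\\.)*")', js)
    return "".join(p if p.startswith('"') else re.sub(r"\s+", " ", p)
                   for p in parts)

variant = collapse_whitespace(
    split_strings(rename_variables(SOURCE, {"greeting": "_0x1"})))
print(variant)  # var _0x1 = "hello" + " world"; console.log( _0x1 );
```

The point of the sketch is that each transform preserves behavior while changing the surface form that a signature or classifier sees; chaining such transforms repeatedly is what lets one sample fan out into thousands of variants.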
The technique also evades detection by other malware analyzers when samples are uploaded to platforms like VirusTotal. Moreover, the rewritten JavaScript looks more natural than output from off-the-shelf obfuscation libraries, making it harder to detect.
Generative AI could sharply increase the scale at which new malicious code variants appear. However, defenders can apply the same tactics to generate training data that improves the robustness of machine learning detection models.
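The defensive flip side can be sketched as simple data augmentation: take known samples, generate semantics-preserving variants, and add them to the training set under the same label so a classifier learns to recognize many surface forms of one sample. The `rewrite` function below is a hypothetical placeholder for any such transform; the sample and label are assumptions for illustration.

```python
import random

def rewrite(sample: str, rng: random.Random) -> str:
    # Placeholder semantics-preserving transform: vary spacing width.
    return sample.replace(" ", " " * rng.randint(1, 3))

def augment(samples, label, n_variants=5, seed=0):
    # Emit each original plus n_variants rewritten copies, all sharing
    # the original's label, as (text, label) training pairs.
    rng = random.Random(seed)
    out = []
    for s in samples:
        out.append((s, label))
        for _ in range(n_variants):
            out.append((rewrite(s, rng), label))
    return out

training = augment(['eval(atob(payload));'], label="malicious")
print(len(training))  # 6 labeled pairs: 1 original + 5 variants
```

In practice the rewriting step would reuse the same variable-renaming and string-splitting transforms the attackers use, so the model's training distribution covers the variants it will face.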
Meanwhile, a group of academics from North Carolina State University has devised a side-channel attack called TPUXtract to steal model configurations from Google Edge Tensor Processing Units (TPUs). The stolen configurations could be exploited for intellectual property theft or follow-on cyber attacks.
A recent study found that AI-driven scoring frameworks like the Exploit Prediction Scoring System (EPSS) are susceptible to manipulation. The researchers showed that an attacker can influence the model's output by artificially inflating social media mentions of a vulnerability and creating a placeholder exploit repository, potentially misguiding organizations that rely on these scores for vulnerability management.
Source: https://thehackernews.com/2024/12/ai-could-generate-10000-malware.html