Our Blog | ActZero

Top 5 Abuses of AI and LLMs | ActZero

Written by Adam Winston | Jul 2, 2024 5:50:06 PM

 

With AI investments soaring, evidenced by the AI 100 raising $28 billion in venture capital deals and Nvidia becoming the world's most valuable company in June, we are undoubtedly in the summer of AI. The rapid growth of AI companies, along with 700,000 open-source models and 150,000 datasets, underscores the urgent need to mitigate the risks these tools introduce.

As we explore the capabilities of AI, it's important to recognize the current top five abuses of large language models (LLM) and AI impacting cybersecurity:

1. Synthetic Media Leads to Deep Fakes

The rise of deepfakes for social engineering now targets elections, executives, and financial companies. In February 2024, a finance worker lost $25 million in a remote meeting, conned by attackers using deepfakes of their co-workers. Executing such an attack is surprisingly easy. Open-source tools like Avatarify allow anyone to use a LinkedIn photo to quickly transform into a favorite user or even a celebrity. While many believe they wouldn't fall for deepfake technology due to its grainy nature and visible inconsistencies, the technology is advancing rapidly. As LLMs improve, it becomes increasingly difficult for users to detect deepfakes with the naked eye. Governments are scrambling to adopt regulations, leaving IT security teams to find novel ways to combat this threat.

Voice cloning adds another layer of complexity, enabling the creation of convincing voice replicas with just a few seconds of audio. This technology is fascinating yet terrifying, as the market has yet to provide a solid fix. While security awareness is improving, tools built into apps like Zoom or Teams for verifying participant authenticity are still lacking.

2. Bypassing Software Restrictions with Jailbreaking

GPT models can have their security filters bypassed through various methods, known as jailbreaking.This demonstrates that the power of a LLM isn't just about what it can do but what it's willing to do, based on a series of security filters applied at the prompt level.  Popular jailbreaks like DAN exploit the model's core directive to respond accurately. If you take a LLM like ChatGPT and input a very complex query, you can strip away many security controls. For example, if you enter a jailbreak prompt and then ask it how to make meth, it might provide a complex recipe. As models become more sophisticated, the risks increase. Bypassing filters will become harder to simulate with larger files, and new black markets like W0rmGPT will pose additional threats.

3. Extortion Through Software for Ransomware

Malware, including ransomware, can be generated as source code with these tools. The ease of creating custom malware increases the threat. With the extensive codebases of major software and the prevalence of vulnerabilities, real-time development of zero-day malware is a significant concern.

4. The Rise of Dark LLMs

What if there were LLMs that didn't require any security controls to start using them? You wouldn't have to jailbreak them or carefully craft queries. These dark LLMs, like W0rmGPT, are on the rise. Improved language models enable more convincing phishing attempts, fake websites, and multimedia scams. With a variety of open-source language models available, attackers can modify them to create their own malware-specific LLMs. Future defenses will require real-time biometric authentication and source verification technologies.

5. Unauthorized Data Transfer with Data Exfiltration

ChatGPT stores prompts for performance measurement, posing a risk of data leakage. Past incidents, like Apple contractors reviewing Siri data and Samsung engineers uploading blueprints to ChatGPT, highlight this vulnerability. As LLMs become integrated into more sensitive applications, the risk of data leakage will grow. Companies like Apple and Microsoft are embedding these copilots into operating systems, allowing access to personal photos, documents, and other private data on devices. This integration introduces the risk of unauthorized access, either through backend vulnerabilities or by attackers breaking security controls.

The rapid development of AI technologies presents significant risks that must be mitigated to protect organizations and individuals. ActZero's MDR solution offers a comprehensive defense against these emerging threats, combining automation with human expertise to provide low noise, high fidelity alerts, and swift responses. As we navigate the summer of AI, robust cybersecurity measures are more crucial than ever.