The Dark Side of AI: Cyberattacks Weaponized by LLMs

The rapid advancement and integration of Artificial Intelligence (AI) into our daily digital lives have been met with both excitement and trepidation. While AI promises to revolutionize industries and improve efficiency, it also presents a new set of challenges, particularly its exploitation by cybercriminals. A recent AI security report from Check Point Software Technologies, a cybersecurity firm, sheds light on this growing threat, revealing how hackers are increasingly leveraging AI tools to amplify the scale, efficiency, and impact of their malicious activities.

The Check Point report, the first of its kind, underscores the urgent need for robust AI safeguards as the technology continues to evolve. It emphasizes that AI threats are no longer hypothetical scenarios but are evolving in real time. As AI tools become more readily accessible, threat actors are exploiting that accessibility in two primary ways: enhancing their own capabilities with AI, and targeting the organizations and individuals adopting AI technologies.

The Lure of Language Models for Cybercriminals

Cybercriminals are diligently monitoring trends in AI adoption. Whenever a new large language model (LLM) is released to the public, these malicious actors are quick to explore its potential for nefarious purposes. ChatGPT and OpenAI’s API are currently favored tools among these criminals, but other models like Google Gemini, Microsoft Copilot, and Anthropic Claude are steadily gaining traction.

The allure of these language models lies in their ability to automate and scale various aspects of cybercrime, from crafting convincing phishing emails to generating malicious code. The report highlights a concerning trend: the development and trading of specialized malicious LLMs tailored specifically for cybercrime, often referred to as "dark models."

The Rise of Dark AI Models

Open-source models such as DeepSeek and Alibaba’s Qwen are becoming increasingly attractive to cybercriminals because of their minimal usage restrictions and free-tier accessibility, providing fertile ground for malicious experimentation and adaptation. More alarming still is the trade in the dark models themselves: LLMs engineered to bypass ethical safeguards and openly marketed as hacking tools.

One notorious example is WormGPT, a model created by jailbreaking ChatGPT. Branded as the "ultimate hacking AI," WormGPT can generate phishing emails, write malware, and craft social engineering scripts without any ethical filters. It is even backed by a Telegram channel offering subscriptions and tutorials, clearly indicating the commercialization of dark AI.

Other dark models include GhostGPT, FraudGPT, and HackerGPT, each designed for a specialized aspect of cybercrime. Some are simply jailbreak wrappers around mainstream tools, while others are modified versions of open-source models. They are typically offered for sale or rent on underground forums and dark web marketplaces, putting them within reach of a wide range of cybercriminals. The ease with which these models can be acquired and deployed presents a significant challenge to cybersecurity professionals and underscores the need for enhanced security measures and vigilance.

Fake AI Platforms and Malware Distribution

The demand for AI tools has also led to the proliferation of fake AI platforms that masquerade as legitimate services but are, in reality, vehicles for malware, data theft, and financial fraud. One such example is HackerGPT Lite, suspected of being a phishing site. Similarly, some websites offering DeepSeek downloads are reportedly distributing malware.

These fake platforms often lure unsuspecting users with promises of advanced AI capabilities or exclusive features. Once a user engages with the platform, they may be tricked into downloading malicious software or providing sensitive information, such as login credentials or financial details. The deceptive nature of these platforms highlights the importance of verifying the authenticity of AI services before engaging with them, and underscores the growing sophistication of cybercriminals in leveraging AI trends for their nefarious purposes.

Real-World Examples of AI-Enabled Cyberattacks

The Check Point report highlights a real-world case involving a malicious Chrome extension posing as ChatGPT that was discovered stealing user credentials. Once installed, it hijacked Facebook session cookies, giving attackers full access to user accounts – a tactic that could easily be scaled across multiple platforms.

This incident underscores the risks associated with seemingly harmless browser extensions and the potential for AI-powered social engineering attacks. Cybercriminals can use AI to create convincing fake websites or applications that mimic legitimate services, making it difficult for users to distinguish between the real thing and a malicious imposter. The use of AI in creating realistic impersonations of trusted entities presents a significant challenge to traditional security measures and emphasizes the need for advanced detection and prevention techniques.

AI’s Impact on the Scale of Cybercrime

"The primary contribution of these AI-driven tools is their ability to scale criminal operations," the Check Point report adds. "AI-generated text enables cybercriminals to overcome language and cultural barriers, significantly enhancing their ability to execute sophisticated real-time and offline communication attacks."

AI allows cybercriminals to automate tasks that were previously time-consuming and labor-intensive. For example, AI can be used to generate thousands of personalized phishing emails in a matter of minutes, increasing the likelihood that someone will fall victim to the scam. This scalability dramatically amplifies the potential impact of cyberattacks, making it more challenging to detect and mitigate threats. Cybercriminals can now launch more frequent and sophisticated attacks with minimal human effort.

Moreover, AI can be used to improve the quality of phishing emails and other social engineering attacks. By analyzing user data and tailoring the message to the individual recipient, cybercriminals can create highly convincing scams that are difficult to detect. This level of personalization enhances the effectiveness of social engineering attacks, as individuals are more likely to trust messages that appear to be relevant to their interests and experiences. The evolving sophistication of AI-powered phishing attacks necessitates a proactive approach to cybersecurity awareness and training.

The Threat Landscape in Kenya

Kenyan authorities are also raising alarms about the rise of AI-enabled cyberattacks. In October 2024, the Communications Authority of Kenya (CA) warned of an increase in AI-enabled cyberattacks – even as overall threats dipped 41.9 percent during the quarter ending September.

"Cybercriminals are increasingly using AI-enabled attacks to enhance the efficiency and magnitude of their operations," said CA Director-General David Mugonyi. "They leverage AI and machine learning to automate the creation of phishing emails and other types of social engineering."

He also noted that attackers are increasingly exploiting system misconfigurations – such as open ports and weak access controls – to gain unauthorized access, steal sensitive data, and deploy malware.
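
Defenders can catch some of these misconfigurations with routine self-audits. The short sketch below is a minimal, illustrative example of checking a host for unexpectedly open ports; the address and allow-list are placeholder values rather than anything drawn from the report.

```python
# Minimal self-audit sketch: flag open TCP ports that are not on an allow-list.
# The host address and expected ports are illustrative placeholders; only scan
# systems you own or are authorized to test.
import socket

HOST = "192.0.2.10"            # documentation-range address; replace with your own host
EXPECTED_OPEN = {22, 80, 443}  # ports you intend to expose
WELL_KNOWN_PORTS = range(1, 1025)

for port in WELL_KNOWN_PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.2)                      # keep each probe quick
        if sock.connect_ex((HOST, port)) == 0:    # 0 means the connection succeeded
            if port not in EXPECTED_OPEN:
                print(f"unexpected open port: {port}")
```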

Kenya is not alone in facing this threat. Countries around the world are grappling with the same challenge, and the growing sophistication of these attacks demands a collaborative effort to develop and implement effective cybersecurity strategies.

The accessibility of AI tools and the increasing sophistication of cyberattacks are making it more difficult for organizations and individuals to protect themselves. The democratization of AI technology presents a double-edged sword: While offering numerous benefits to society, it also empowers malicious actors to launch more sophisticated and impactful cyberattacks.

The Arms Race to Safeguard AI

As the race to embrace AI accelerates, so too does the arms race to safeguard it. For organizations and users alike, vigilance is no longer optional – it’s imperative. The rapid evolution of AI technology requires a corresponding evolution in cybersecurity strategies. Organizations must constantly adapt their security measures to address the emerging threats posed by AI-enabled cyberattacks.

To mitigate the risks of AI-enabled cybercrime, organizations need to adopt a multi-layered security approach that includes:

  • AI-powered threat detection: Deploying AI-based security tools that analyze network traffic, user behavior, and other telemetry to flag suspicious activity and respond to AI-enabled attacks in real time, ideally stopping them before they cause damage (a minimal anomaly-detection sketch follows this list).
  • Employee training: Educating employees to recognize and avoid AI-powered social engineering. Human error remains a significant vulnerability, so training should cover phishing awareness, password security, and safe browsing practices.
  • Strong access controls: Using multi-factor authentication, role-based access control, and regular reviews of user permissions to keep unauthorized users away from sensitive data and systems.
  • Regular security audits: Auditing systems and infrastructure routinely to identify weaknesses and apply corrective measures before attackers can exploit them.
  • Collaboration and information sharing: Exchanging threat intelligence on emerging threats and attack patterns with other organizations and security providers to strengthen collective defense against AI-enabled cybercrime.
  • Ethical AI development and deployment: Building and deploying AI systems responsibly, with safeguards against misuse and compliance with applicable laws and regulations.
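
To make the first of these measures concrete, the sketch below shows one simple way behavioral anomaly detection can work, using scikit-learn's IsolationForest on a handful of made-up login features. It is an illustration only, not a description of any specific vendor's product or of the report's recommendations.

```python
# Minimal sketch of behavioral anomaly detection on login telemetry.
# The feature set and numbers are hypothetical; a real deployment would use
# far richer signals (network flows, process events, geolocation, and so on).
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical baseline of "normal" activity: [login hour, MB transferred, failed logins]
baseline = np.array([
    [9, 12.0, 0], [10, 8.5, 0], [11, 15.2, 1],
    [14, 9.8, 0], [16, 11.4, 0], [17, 7.3, 1],
])

# Fit an unsupervised model to the baseline behavior.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline)

# Score new events; a prediction of -1 marks an outlier worth investigating,
# e.g. a 3 a.m. session moving an unusual amount of data after repeated failures.
new_events = np.array([[10, 10.1, 0], [3, 950.0, 7]])
for event, label in zip(new_events, model.predict(new_events)):
    status = "suspicious" if label == -1 else "normal"
    print(f"event {event.tolist()} -> {status}")
```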

Individuals can also take steps to protect themselves from AI-enabled cybercrime, including:

  • Being wary of unsolicited emails and messages: Treating links and attachments from unknown senders with caution, since phishing emails remain a common attack vector.
  • Verifying the authenticity of websites and applications: Confirming that a service is legitimate before providing personal information, as cybercriminals routinely clone legitimate services to harvest credentials and other sensitive data.
  • Using strong passwords and enabling multi-factor authentication: Protecting accounts with strong, unique passwords and a second authentication factor wherever possible (a brief sketch of how one-time passcodes work appears after this list).
  • Keeping software up to date: Applying software and operating-system updates promptly, since they frequently include patches for known security vulnerabilities.
  • Reporting suspected scams: Alerting the appropriate authorities, which helps law enforcement track down cybercriminals and prevent future attacks.
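
To illustrate how the multi-factor authentication step above typically works under the hood, the following sketch generates and verifies a time-based one-time password with the pyotp library. The secret and code here are generated solely for demonstration.

```python
# Minimal sketch of how time-based one-time passwords (TOTP) work behind most
# authenticator apps, using the pyotp library. The secret is generated on the
# fly purely for illustration; in practice it is provisioned once via a QR code.
import pyotp

secret = pyotp.random_base32()   # shared secret stored by both server and app
totp = pyotp.TOTP(secret)

# The authenticator app and the server derive the same 6-digit code from the
# shared secret and the current 30-second time window.
current_code = totp.now()
print("one-time code:", current_code)

# Server-side check: the code is only valid within the current time window.
print("verified:", totp.verify(current_code))
```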

The fight against AI-enabled cybercrime is an ongoing battle. By staying informed, adopting robust security measures, and working together, organizations and individuals can reduce their risk of becoming victims of these evolving threats. The integration of AI into cybercrime is a rapidly evolving landscape, and constant vigilance and adaptation are essential to stay ahead of the curve.