The cyber threat landscape is dynamic and new attack vectors appear all the time. Traditional reactive approaches like firewalls and antivirus programs are inadequate today to prevent fast and sophisticated AI-powered attacks.
Therefore, modern digital warfare requires enterprises to adopt innovative approaches to counteract a diverse array of cyber threats. The convergence of Artificial Intelligence (AI) into cybersecurity has marked the dawn of a new era in cyber resilience. It can help organizations predict potential threats, identify suspicious patterns, and analyze vast datasets quickly, enabling businesses to bolster their digital infrastructure proactively.
Despite transforming cybersecurity, some AI capabilities are also being misused by threat actors for malicious gains. According to Gartner, 17% of cyber-attacks will involve generative AI by 2027.
“As technology continues to evolve, so do cybercriminals' tactics. Attackers are leveraging AI to craft highly convincing voice or video messages and emails to enable fraud schemes against individuals and businesses alike,” said FBI Special Agent in Charge Robert Tripp. “These sophisticated tactics can result in devastating financial losses, reputational damage, and compromise of sensitive data.”
[fs-toc-h2]How Do Threat Actors Use AI?
This article will explore how threat actors use AI to carry out cyberattacks, including social engineering, phishing, malware, prompt injections, and state-sponsored attacks. It further elaborates on the role of AI in a Security Operation Center (SOC) and how AI-equipped defenders can get peace of mind by effectively defeating notorious AI-powered adversaries.
How Do Threat Actors Use AI?
AI isn’t just a silver bullet in the face of cyber threats. Despite the rapid and exciting advancement of AI technology, its misuse can have grave repercussions. Hostile actors leverage this technology to craft phishing campaigns, conduct targeted reconnaissance, generate deepfakes, deploy malware, and more. World Economic Forum’s Global Risk Report 2024 ranks disinformation and misinformation as the top risk posed by AI.
According to Florin Talpes, CEO of Bitdefender, “In the future, the AI component will go up, getting closer to what we should call a 100% autonomous attack that will happen in a fraction of time… The arrival of quantum computing will speed up the trend I am talking about.” He further adds, “In the cybersecurity attacks, the major part that is constantly used by the attackers is phishing… The extortion will go up a lot based on deepfake waves.”
The emergence of AI-based, automated, adaptive, and targeted attacks is constantly broadening the threat landscape. MITRE has developed the MITRE ATLAS framework to deal with AI-based cyber threats. ATLAS is known as Adversarial Threat Landscape for Artificial-Intelligence Systems. In fact, MITRE ATLAS is an extension of MITRE’s widely used MIRE ATT&CK framework, which is a globally accessible knowledge base of threat actors’ tactics and techniques employed in real-world observation.
AI-Equipped Social Engineering Attacks
Fraudsters are constantly weaponizing AI to carry out social engineering attacks, tricking victims into revealing sensitive information. Hackers use these attacks to defraud individuals and organizations of millions of dollars. According to Dark Reading, 35% of U.S. businesses have faced a deepfake incident in the last 12 months, ranking as the second most common security incident in the country.
One of the worst AI-powered social engineering is deepfake – synthetic media which uses AI to create convincing images and audio and video recordings. Deepfake pictures and videos look almost indistinguishable from the real ones.
Deepfake technology is based on Generative Adversarial Networks (GANs) that comprise two algorithms, known as discriminators and generators.
In addition to deepfake images, video, and voice, video conferencing is also happening, says Mike Weil, a digital forensics leader and managing director at Deloitte Financial Advisory Services. He further says that when employees see the CEO on a video call or hear the CFO’s voice, most of them follow directions without verifying the source. This malicious practice is taking social engineering scams to the next level.
Furthermore, Bitdefender reports that malvertising campaigns are tremendously being launched via Meta’s sponsored ad system. This attack is targeting organizations in France, Germany, Italy, Poland, Romania, and more.
Phishing Scam and AI: A Toxic Pair
Scammers use AI tools like Gemini, Copilot, or ChatGPT to craft phishing emails that are contextually appropriate, grammatically correct, and translated into the victims’ local language. Using prompts, bad guys can mimic a particular tone or writing style and draft phishing emails following the recipient’s behavior or response.
According to the Adversarial Misuse of Generative AI report by Google Threat Intelligence Group (GTIG), more than 30% of Iranian APT groups used Google Gemini, an AI model for responding to prompts, to craft material for phishing campaigns.
In addition, threat actors misuse Natural Language Processing (NLP) and Generative AI (GenAI) to cultivate phishing messages and increase the likelihood of tricking a victim.
“AI, so far, has not been a game changer for offensive actors,” Adam Segal, director of the Digital and Cyberspace Policy Program at the Council on Foreign Relations, told VOA. “It speeds up some things. It gives foreign actors a better ability to craft phishing emails and find some code. But has it dramatically changed the game? No.”
AI-Generated Malware
AI-generated malware can play a devil with the victim organization’s digital infrastructure. It involves machine learning algorithms to autonomously adapt and improve to stay undetected. This malicious AI program can adjust its attack vector and execute decisions in real-time.
AI malware has several characteristics that include real-time adaptation, obfuscation techniques, polymorphism, and impersonation capabilities. Impersonation enables AI malware to mimic existing cybercriminals and known malware spotlights with accuracy.
Polymorphism allows AI malware to automatically alter its code with each infection or replication. Conventional signature-based techniques cannot detect and fix this malware due to its continuous mutation. Obfuscation is achieved through encryption, inserting dead code, and substituting instructions in the codebase. Recent examples of AI-generated malware include Crimson Sandstorm, Emerald Sleet, and Forest Blizzard.
Prompt Inject Attack
Prompt injection attack takes advantage of a Large Language Model (LLM), a key feature within GenAI, to respond to the victim’s natural language instruction. The security risk of the prompt injections can be understood from the fact that it has been listed as the number one security vulnerability on the OWASP Top 10 for LLM apps.
Threat actors disguise malicious inputs as legitimate prompts, manipulating GenAI systems into leaking critical information like Personally Identifiable Information (PII).
GenAI apps that access sensitive data and trigger actions via API integration are vulnerable to prompt injection attacks. Imagine a virtual assistant powered by an LLM that can edit files and compose emails. A cybercriminal, using the right prompt, could manipulate this assistant into sending private documents.
The prompt injection vulnerability exists because both system prompts and user inputs are formatted as natural-language text, making it hard for the LLM to differentiate between them based on data type alone. As a result, it relies on past training and prompts themselves to determine actions. If a hacker crafts input that resembles a system prompt, the LLM may ignore the instructions of developers and follow the hacker's commands.
The first prompt injection attack was discovered by the data scientist – Riley Goodside. He utilized a simple LLM-driven translation application to demonstrate how the attack works. The graphical representation of this attack is given below.
Source: IBM
State-Sponsored AI Attacks
The use of AI for malicious purposes is far beyond exfiltrating organizations’ security parameters. Advanced Persistent Threats (APTs) groups backed by governments misuse AI to compromise national security, acquire military secrets, and undermine the stability of opponents. These attacks are well-funded, organized, and sophisticated.
GTIG’s Adversarial Misuse of Generative AI report revealed that state-sponsored APT groups from more than 20 countries used Google Gemini, an AI model for responding to prompts, to gather information about potential victims, research publicly known vulnerabilities and specific CVEs, and enable operations in the aftermath of a data breach, such as evading defense mechanism of a potential target.
APT43 from Iran used Gemini to craft phishing scams, conduct reconnaissance on security professionals and enterprises, and generate content with security themes. Chinese APT gangs also utilized Gemini to perform data exfiltration, privilege escalation, lateral movement, and defense evasion. In addition, North Korea and Russia were also engaged in using Gemini for their malicious gains.
The report further added that Iran-backed Information Operation (IO) actors employed Gemini to conduct general research and create and manipulate content. They translated existing material like news articles to mix original and borrowed content. IO actors also localize the content to create a human-like translation and make the text look like a native-level English speaker.
APT actors also attempted to bypass Gemini’s safety controls using Jailbreak, a type of prompt injection attack that causes an AI model to behave in a way that they have been trained to avoid. Examples include leaking sensitive information. Google has outlined these threats in its Secure AI Framework risk taxonomy.
Three Microsoft Threat Intelligence experts, namely Sherrod DeGrippo, Fanta Orr, and Jeremy Dallman say that state-sponsored groups from China and Russia are leveraging AI in disinformation campaigns.
{{post-cta}}
[fs-toc-h2]How AI Is Empowering Cybersecurity?
AI-powered cybersecurity promises a paradigm shift in how organizations protect their critical data and information assets. It helps enterprises stay one step ahead of cybercriminals by responding to threats in real-time, unlike the traditional security solutions that are based on static signatures and predetermined rules.
AI-equipped systems can help detect potential threats and anomalies in real-time by utilizing Deep Neural Networks (DNNs), Deep Learning (DL), Natural Language Processing (NLP), and Machine Learning (ML) algorithms. Doing so helps security teams react efficiently and quickly.
AI is a double-edged sword. Despite its dark side, AI-powered cybersecurity can prove to be a cornerstone for organizations’ cyber defense. According to IBM’s Cost of Data Breach Report 2024, AI and automation technologies are transforming cybersecurity. AI can significantly lower the average data breach cost. Businesses that didn’t use AI and automation had average breach costs of $5.72 million, while those extensively using AI and automation had average costs of $3.84 million. These organizations are saving $1.88 million. The following graph shows the cost of a data breach for those using AI and automation vs those not using them in 2023 and 2024.
Source: IBM
[fs-toc-h2]The Role of AI-Powered Security Operation Center (SOC)
Do you ever imagine you have an assistant at your SOC — one who never tires, never takes long breaks, has no days off, and never calls in sick? Your wish can become true if you move forward with the power of AI in your SOC. It will keep you one step ahead of threat actors by proactively hunting even AI-driven threats and attacks with a more intelligent and automated approach.
In the phenomenon of AI vs AI, the defender’s side is strong with the capability to defeat even advanced adversaries. Traditional SOC cannot effectively deal with AI-driven threats and the high volume and complexity of security data.
Dealing with a massive amount of data with conventional SOC is a daunting task. Shailesh Rao, a president of Cortex at Palo Alto Networks, states that your company may encounter terabytes or petabytes of data daily, and the only way you can analyze that effectively is by using the latest advances in AI and ML.
In fact, AI-driven SOCs not only help manage big data but also detect and respond to threats in real-time with great accuracy and unprecedented speed. This technology helps organizations of all sizes secure their systems, networks, and data and keep hackers at bay.
The subsequent sections elaborate on how AI-powered SOC can help organizations strengthen their cybersecurity defense.
[fs-toc-h2]AI Technologies: A Force Multiplier in Your SOC
With the power of AI and automation, your SOC analysts will be freed from performing repetitive, manual and mundane tasks. However, it doesn’t mean that AI will completely replace the need for security professionals in your SOC. It would likely be a mistake to operate a SOC without human intervention or oversight. The organizations having both AI and human analysts can better reduce risks and provide great customer satisfaction.
In addition, SOC analysts can make strategic decisions with the help of AI-driven insights and recommendations. For instance, when a suspicious behavior is identified, AI will provide the context by correlating it with historical data and a known threat intelligence system. Doing so can help SOC analysts understand the threat (e.g., whether it’s real or a false positive) and determine the response accordingly.
In the event of detecting a false positive, your SOC analyst will feed this information into the AI model that will adjust its algorithm to prevent similar false positives again.
Addressing Phishing Campaigns
AI-powered SOC features Deep Learning (DL), a subset of ML, for combating phishing campaigns. As a matter of fact, DL is very effective in phishing detection by automatically extracting relevant features from email content and metadata.
Traditional methods like rule-based filters and blacklists often fail to adapt to evolving attacker tactics, resulting in high false positives and negatives. On the contrary, DL offers the ability to learn complex patterns, enhancing the accuracy and efficiency of identifying phishing emails.
The DL fails fraudsters in their phishing campaigns by involving four models, including:
- Bidirectional Long Short-term Memory (Bi-LSTM)
- Long Short-term Memory (LSTM)
- Recurrent Neural Networks (RNNs)
- Convolutional Neural Networks (CNNs)
Combating Social Engineering
Currently, a modern SOC uses AI in social cybersecurity to tackle security issues on social media platforms, online communities, and other social networks.
AI-powered SOC helps detect and prevent various threats on social media, such as phishing, spamming, account takeovers, and the spread of fake news. ML algorithms process vast amounts of data to identify potential threats and suspicious behaviors.
Social media platforms use AI-driven content moderation systems to automatically detect and remove harmful content, such as graphic violence, harassment, and hate speech. NLP models analyze text-based content, while computer vision techniques assess videos, voice, and images, defeating the menace of deepfake.
AI-driven anomaly detection systems monitor an organization’s network traffic and users' or employees’ activities to spot suspicious or unusual behavior. This aids in detecting threats like account compromises, in real-time.
Additionally, AI can improve privacy controls and safeguard employees' and users’ data on social media platforms, such as Facebook, LinkedIn, Instagram, or Twitter. ML algorithms can analyze behaviors, user preferences, and past activities to offer personalized privacy recommendations and identify potential privacy violations.
Automated Threat Intelligence
AI-powered SOC can help collect and analyze data from multiple sources like network traffic, endpoint logs, and threat intelligence feeds. With automated data collection, AI can quickly identify correlations that SOC analysts might miss, ensuring early and accurate detection of threats.
More importantly, AI-powered SOC features threat hunting, which is a modern threat intelligence concept equipped with AI, ML, and big data analytics. Threat hunters in a SOC use the power of these technologies to proactively and iteratively search for threats before they become realized. Proactive risk detection and rapid response keep the hackers at bay. Moreover, your security teams can automate ethical hacking tasks and learn from existing data sets to find potential threats and vulnerabilities.
Plus, threat hunters can analyze big data, find anomalies, analyze behavior, and detect malicious patterns that can lead to a potential data breach. AI will automate these tasks, allowing threat hunters to concentrate on more strategic processes, such as prioritizing alerts and preparing responses.
Predictive analytics help forecast potential threats based on emerging trends and historical data.
Despite the strength of AI, the role of threat hunters and security analysts in a SOC is indispensable. These professionals will interpret AI-generated data, comprehend the context, and make decisions that AI cannot make.
Advanced Behavioral Analytics
AI-driven behavioral analytics will be vital in modern SOCs. It leverages User and Entity Behavior Analytics (UEBA) technology to enable SOC analysts to monitor user and endpoint behaviors that could potentially indicate threats.
Advanced AI and ML algorithms in UEBA can detect subtle changes indicating lateral movement, insider threats, or APTs by establishing a baseline for network, endpoint, and user behaviors. Detecting threats in this manner strengthens proactive security.
AI-driven SOC continually refines its understanding of normal behavior. Doing so increases its capability to mitigate false positives and enhances the accuracy of threat identification.
AI-driven SOC Platform Security Integration
AI-driven SOC integrates with many other tools to provide your SOC analysts with a centralized view on a single dashboard. Centralized management functions will simplify security operations.
Traditionally, without integration, security teams feel overwhelmed due to operating different tools on multiple systems going back and forth many times a day.
Integrated SOC will give your security teams peace of mind by lowering the burden and allowing them to operate all integrated tools on a single console.
Advantages of AI in a SOC
The modern SOC can reap numerous benefits from the use of AI. One of the main advantages is the timely and swift identification of threats with robust response capabilities. Other benefits include:
- Streamlined operations
- Scalability
- Improved efficiency and accuracy
- Lower operational costs
- Less reliance on human analysts
- Filling cybersecurity skill gaps
[fs-toc-h2]Astro Information Security – Your Best Bet
Astro Information Security offers AI-powered cybersecurity services that will empower your SOC and security teams and transform your security operations with proactive threat detection and unprecedented automation.
The advanced AI-equipped solution will keep your organization protected against AI-driven attacks by leveraging behavior analytics, ML, and other AI methodologies.
Founded by ex-NASA and NSA cybersecurity professionals, Astro Information Security gives you a mature security platform to protect your business around the clock. Ensure nation-state-grade security with our MXDR, offensive security and advisory services.
[fs-toc-h2]The Final Word
It’s evident that traditional security methods are ineffective. While useful in some cases, they fall behind the ever-evolving AI-equipped strategies used by cyber pests. Cybersecurity powered by AI fills this gap by providing proactive and adaptable approaches. Your security teams can uncover new threats and vulnerabilities and even predict future attack vectors by analyzing data from multiple platforms, including social media, the dark web, and other online sources.
AI-powered security defenders and SOC analysts can offer robust protection against various AI-driven threats like social engineering and phishing, malware, prompt injection attacks, and state-sponsored attacks. AI enables SOC analysts to proactively manage and mitigate threats. With faster response, critical incidents are resolved in just minutes.
AI-driven cybersecurity fixes vulnerabilities on time before they become a nightmare. To this end, this game-changer approach integrates Natural Language Processing (NLP), Deep Learning (DL), advanced data analytics, automation, and Machine Learning (ML) algorithms to build adaptive, dynamic, and multilayered cybersecurity defense.
Transitioning to a proactive AI-driven SOC model is a major step forward in cybersecurity defense. Equipped with intelligence, adaptability, machine-driven capabilities, automated threat intelligence, and advanced behavioral analytics, modern SOCs require minimal analyst intervention while maintaining human oversight. Adopting AI technology is crucial for enhancing organizational resilience and marks a key innovation in SOC methodologies.
Get started on your security today
Let us know how we can help you stay on track with your cybersecurity. We’ll get back to you in 24 hours or sooner.
