Artificial intelligence has an undeniable dark side. Bad actors are increasingly leveraging AI to automate, scale, and enhance their tactics, creating sophisticated threats that are harder to detect than ever. This article is intended to give you an overview of how they try to use AI against you. 

 

Scaling and Automating Cybercrime 

AI has significantly lowered the barrier to entry for complex cybercrimes, allowing individuals with few technical skills to execute sophisticated attacks that previously required extensive knowledge. 

  • Advanced Malware:  Threat actors use machine learning to develop evasive malware that can adapt its code and strategy in real time to avoid detection. 
  • Vulnerability Exploitation:  Machine learning models can analyze vast datasets, including vulnerability reports, to identify weaknesses in systems and develop tailored exploit ideas. 
  • Automated Attacks:  AI agents can automate entire attack lifecycles, from initial reconnaissance to the final exploitation, making attacks more efficient and widespread. 

 

Enhancing Social Engineering and Fraud 

One of the most immediate and impactful uses of AI by bad actors is in social manipulation, where the technology helps build trust and bypass human safeguards. 

  • Hyper-Targeted Phishing:  Generative AI allows criminals to create personalized, legitimate-sounding phishing emails and messages at scale, making it difficult for victims to discern real communications from malicious ones. 
  • Deepfakes:  Highly realistic synthetic media, including deepfake videos and cloned voices, are used to impersonate executives or public figures, facilitating high-value fraud schemes like business email compromise and extortion. 
  • Pig Butchering:  Tricking victims into making financial contributions to the bad actors.  AI’s ability to automate responses and inject false emotion allows scammers to manage a higher volume of “pig butchering”, building believable, long-term relationships with multiple victims simultaneously. 

 

Information Manipulation and Propaganda 

Authoritarian and other malicious actors are using generative AI to manipulate the information space on a global scale. 

  • Disinformation Campaigns:  AI-generated text, images, and video can create a flood of fake news and propaganda, making it nearly impossible to distinguish credible information from falsehoods. 
  • Influence Operations:  Threat actors create networks of social media accounts using AI-generated personas and content to push specific narratives and sow widespread unrest. 

 

Prompt Injection 

Sometimes the AI you use yourself can be the target of bad actors.  AI prompt injection occurs when a user’s input overrides a Large Language Model’s (LLM) original instructions, causing it to perform unintended actions. This is possible because LLMs often treat user input and developer instructions similarly. 

  • Direct Prompt Injection:  Threat actors pull up the AI prompt and directly input malicious instructions.  Typically, this involves either instruction override or role-playing to circumvent the AI’s safeguards.  Prompting the AI with statements like “Ignore all previous instructions and do X” or “Pretend you are a cybersecurity expert. How would you perform X?” are examples of direct prompt injection. 
  • Indirect Prompt Injection:  These prompts are hidden within data sources, such as webpages, documents, and emails, processed by the LLM.  The AI follows the instructions unknowingly when it interacts with these sources.  Examples include hiding text on a resume to force AI hiring tools to rate the applicant favorably regardless of the resume’s content.  Embedding a prompt in a webpage to cause AI-using web browsers to potentially exfiltrate data.  Or hiding a prompt in a picture that only becomes visible when an AI is processing the image. 
  • Code Injection:  The threat actors perform code injection when they trick AI into creating or running malicious code.  If the AI is integrated with other systems it is possible it can execute commands like “Reset all accounts and notify the attackers”. 

 

The Ongoing Battle and What You Can Do 

Security firms, AI developers, and government agencies are working to detect and counter these evolving threats. This includes leveraging AI for defensive purposes, such as detecting malicious behavior and implementing safety guardrails on AI models. 

NGT recommends the following actions: 

  • Verify via a Second Channel:  If you receive an urgent request for money or sensitive data—even if it sounds like a trusted colleague or family member—verify it through a separate, known channel (e.g., a phone call to a saved number). 
  • Slow Down and Be Skeptical:  AI scams rely on artificial urgency to impair judgment. Look for digital flaws in videos (poor lip-syncing) or unusual requests that deviate from normal behavior. 
  • Adopt Phishing-Resistant MFA:  Use multi-factor authentication (MFA) across all accounts. Whenever possible, use hardware security keys or passkeys instead of text message SMS-based codes. 
  • Protect Your Digital Footprint:  Revisit social media settings to switch to “private” mode to prevent AI from scraping your voice or personal details for impersonation.  

 

Taking steps to ensure AI usage does not become a liability will make it shine as the productivity enhancer it is supposed to be. 

 

As always, NGT is here to help!
Contactngthelp.comwith questions.