AI fraud detection: why traditional defenses fail against modern attacks
Scams and phishing attacks are at an all-time high, and fraud detection tools are struggling to keep up with how fast threats are evolving. Today’s fraudsters are already using AI tools to launch phishing attacks, create convincing spoofed websites, and automate large-scale scam campaigns, all of which make fraud faster to execute, harder to detect, and easier to scale.
Building fully functional spoofed websites, assembling target lists, and crafting convincing emails used to take weeks of work. Now it can all be done in less than 30 minutes using free AI tools.
To demonstrate just how real this threat is, we recently walked a group of merchants through a live phishing simulation. In under 20 minutes, we stood up a fully functional replica of a major retailer’s website and put together an automated email campaign with branded emails that bypassed spam filters.
This is what risk teams at merchants and banks are up against. Attacks like these are no longer rare or resource-intensive. They are fast, repeatable, and increasingly difficult for customers to recognize.
Every brand is now just one AI prompt away from a large-scale phishing attack.
How AI is scaling modern scams
AI is making phishing scams faster to launch, easier to personalize, and much more scalable. But the biggest shift is around accessibility. Attacks that once required specialized skills and coordinated infrastructure can now be carried out by anyone with intent and an internet connection.
With just a few screenshots and basic prompts, an attacker can generate branded emails, clone a website, and collect credentials in real time. There’s no need to write code, source stolen assets, or buy tools from shady forums. Everything needed to launch a convincing campaign is already available, and most of it is free.

While many AI providers have introduced safeguards to prevent abuse, those protections are easy to bypass. Rephrasing a prompt or switching to a different tool is often enough to get around them. The safeguards are well-intentioned, but they aren’t stopping what’s already happening in the wild.
AI phishing and fraud attack toolkit
The tools fraudsters are using to launch these scams aren’t obscure or specialized. They’re the same platforms your marketing, sales, and product teams use to ship campaigns, automate workflows, and move faster. That’s what makes them hard to flag. The infrastructure behind a phishing attack can look nearly identical to the stack behind a product launch.
Website cloning (V0, Lovable, Replit)
Generates professional websites and full application flows from text prompts. No coding required. Perfect for cloning login pages, checkout screens, and support portals in minutes.
Phishing email (ChatGPT, Claude)
Writes fully branded, grammatically clean phishing emails that look and sound like they came from your own team. These messages are believable at a glance and pass basic spam filters.
Database (Airtable)
Not technically an AI tool, but it serves as a cloud database for storing stolen data, including names, email addresses, device info, and even payment details. Fraudsters combine it with a tool like Zapier or n8n to trigger alerts and workflows once a user’s info has been harvested.
Workflow automation (n8n, Zapier)
Automation platforms that connect apps and orchestrate workflows. Attackers use them to process stolen data, send bulk emails, and manage phishing campaigns from end to end.
Target lists (Clay, Apollo)
Finds verified email addresses and contact data based on criteria such as job title, industry, company, or domain.
Email delivery (SendGrid, Postmark)
These legitimate email delivery services can be used to send emails at scale while avoiding basic spam detection.
Branding and logos (Milled, Reallygoodemails)
These are public search engines for marketing emails. Fraudsters can use them to find real brand email templates, which are then copied and repurposed for phishing.
Voice cloning (PlayAI, ElevenLabs)
Generates deepfake voice messages that impersonate customer service or bank representatives. These tools are increasingly used to bypass the voice verification systems being added by large banks for phone-based account access. They can also be used to socially engineer victims by mimicking the voice of a loved one, an executive, or a public figure.

Why AI fraud attacks are hard to detect
What makes AI fraud so difficult to detect is that it isn’t just one tactic. It’s a sequence. A phishing email leads to a fake website, which leads to stolen credentials, which leads to a well-scripted voice call or a hijacked session. Each step can look harmless on its own, because the entire attack is designed to blend in.
Phishing attacks make the challenge even trickier because they often start outside your systems, which means traditional fraud detection tools can miss the first signs of an attack. You may not know anything is wrong until a customer asks for a refund on an order you have no record of, or there’s a spike in customer service complaints about unauthorized transfers.
Even when the attack reaches your platform, it can be hard to spot. In some cases, scammers are actively coaching victims in real time, walking them through how to bypass risk checks, complete transactions, or skip verification steps.
While this may sound like an obvious red flag from the outside, the victim often doesn’t see it that way. By the time they’re being guided through your flow, they may have already spent hours on the phone with someone they believe is a bank representative. The instructions feel helpful, not suspicious. It sounds like someone doing them a favor so they don’t have to waste time explaining things to a teller.
Attackers are also using advanced bots that most fraud systems aren’t built to detect. Modern automation frameworks like Stagehand are designed to behave like real users. They simulate mouse movement, scrolling, and interaction delays. They can also load third-party scripts to generate clean browser fingerprints. Even the bots themselves have different signatures, which makes them even harder to detect.
To top it off, fraudsters will also hide behind residential proxies, mobile emulators, or spoofed webcam feeds to make the sessions look normal. Unless you know what to look for, nothing stands out.
How to detect AI fraud attacks
While AI scams are designed to look legitimate, they aren’t invisible. The key is to shift your detection strategy away from static rules and surface-level content toward deeper signals of user intent, drawn from device intelligence and behavior biometrics.
Monitor the device environment
Look for signs of manipulated or simulated environments. This includes things like emulators, virtual machines, spoofed sensors, or unsupported device configurations, especially during onboarding, password resets, or payment flows. Strong device intelligence helps flag these setups early, before the attacker has a chance to blend in.
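To make that concrete, here’s a minimal sketch of what those checks might look like in code. The signal names (is_emulator, has_motion_sensors, and so on) are hypothetical placeholders for whatever your device-intelligence provider reports, and the weighting is illustrative rather than a real vendor schema.

```python
# A minimal sketch of device-environment checks during sensitive flows.
# Signal names are hypothetical placeholders, not a real vendor schema.

SENSITIVE_FLOWS = {"onboarding", "password_reset", "payment"}

def device_risk_flags(signals: dict, flow: str) -> list[str]:
    """Return reasons a device environment looks manipulated or simulated."""
    flags = []
    if signals.get("is_emulator") or signals.get("is_virtual_machine"):
        flags.append("emulator_or_vm")
    if signals.get("claims_mobile") and not signals.get("has_motion_sensors"):
        # A real phone reports accelerometer/gyroscope data; emulators often don't
        flags.append("missing_motion_sensors")
    if signals.get("sensor_values_constant"):
        # Spoofed sensors tend to return static or repeating readings
        flags.append("spoofed_sensors")
    if flags and flow in SENSITIVE_FLOWS:
        flags.append(f"high_risk_flow:{flow}")  # weight flags from these flows more heavily
    return flags

# Example: an emulated "phone" with no motion sensors during onboarding
print(device_risk_flags({"is_emulator": True, "claims_mobile": True}, "onboarding"))
```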
Inspect browser behavior for signs of automation
Tools like Stagehand and Puppeteer simulate mouse movement, scroll depth, and click timing. But they often leave behind clues. To detect them, you need to go beyond surface interactions and analyze how sessions behave under the hood.

For example, a session that scrolls too smoothly without adjusting velocity, or loads a full page but never interacts with non-essential elements, is worth flagging. Timing anomalies, passive fingerprint mismatches, and interaction depth are also key indicators.
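As a rough illustration, here’s what those session-level heuristics could look like. The event schema and thresholds below are assumptions for the sake of the example, not a production detection model.

```python
# A heuristic sketch of the session-level checks described above.
# The session schema and thresholds are illustrative assumptions.
import statistics

def automation_indicators(session: dict) -> list[str]:
    indicators = []

    # 1. Scrolling that is "too smooth": near-zero variance in scroll velocity
    velocities = session.get("scroll_velocities", [])
    if len(velocities) >= 10 and statistics.pstdev(velocities) < 1.0:
        indicators.append("uniform_scroll_velocity")

    # 2. Low interaction depth: page fully loaded but no incidental interactions
    if session.get("page_loaded") and session.get("non_essential_interactions", 0) == 0:
        indicators.append("no_incidental_interaction")

    # 3. Timing anomalies: suspiciously constant gaps between clicks
    gaps = session.get("click_gaps_ms", [])
    if len(gaps) >= 5 and max(gaps) - min(gaps) < 20:
        indicators.append("metronomic_click_timing")

    # 4. Passive fingerprint mismatch: User-Agent disagrees with low-level signals
    if session.get("user_agent_platform") != session.get("navigator_platform"):
        indicators.append("fingerprint_mismatch")

    return indicators
```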
Sardine has a machine learning model built to detect this new class of automation. It draws on a library of over 100 known bots and crawlers, and scores sessions in real time to catch patterns that traditional systems overlook.
Use device and behavior signals to catch coached or scripted flows
When victims are being coached in real time, they often behave differently. They may pause between steps, retype or second-guess fields, or show signs of hesitation. Common signs of social engineering include the following (a simple scoring sketch follows the list):
- Remote access or screen-sharing apps running
- An active phone call during the session
- Screenshots being captured mid-flow
- Stressed or irregular typing patterns compared to typical users
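Here’s one simplified way those signals could be combined into a score. The field names are hypothetical stand-ins for behavior-biometrics and device telemetry, and the weights are purely illustrative.

```python
# A simplified scoring sketch for the coaching signals listed above.
# Field names and weights are hypothetical, not a production rule set.

def coaching_score(session: dict, user_baseline: dict) -> int:
    score = 0
    if session.get("remote_access_app_detected"):      # remote access or screen sharing running
        score += 3
    if session.get("active_phone_call"):
        score += 2
    if session.get("screenshots_during_flow", 0) > 0:
        score += 1
    # Typing much slower or more erratic than this user's own baseline
    typing = session.get("typing_cadence_ms")
    baseline = user_baseline.get("typing_cadence_ms")
    if typing and baseline and typing > 2 * baseline:
        score += 2
    if session.get("field_retypes", 0) >= 3:           # second-guessing entries mid-flow
        score += 1
    return score  # e.g. escalate to step-up verification above a tuned threshold
```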
Look at signals across sessions and channels
AI fraud doesn't follow a single path. A phishing email may lead to a login attempt, which leads to a phone call, a password reset, or a change in contact details. These attacks often unfold across multiple channels and stages in the customer journey, not just during payments.
To surface coordinated activity, you need to analyze behavior across sessions, channels, and user flows. That includes onboarding, login, profile updates, support interactions, and even abandoned actions. A session may look fine in isolation, but reveal risk when compared to a recent account opening or earlier behavior on another device.
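As a sketch of the idea, the snippet below groups recent events by account and flags a risky sequence that spans channels. The event names and the 24-hour window are assumptions, not a prescribed rule.

```python
# A minimal sketch of cross-channel correlation: group recent events by account
# and flag a risky ordered sequence. Event names and the window are assumptions.
from datetime import timedelta

RISKY_SEQUENCE = ["login_new_device", "contact_details_changed", "payment_initiated"]

def flag_risky_sequences(events: list[dict], window_hours: int = 24) -> set[str]:
    """events: dicts with 'account_id', 'type', and 'timestamp' (datetime), any channel."""
    flagged = set()
    by_account: dict[str, list[dict]] = {}
    for event in sorted(events, key=lambda e: e["timestamp"]):
        by_account.setdefault(event["account_id"], []).append(event)

    for account_id, history in by_account.items():
        latest = history[-1]["timestamp"]
        types_in_window = [
            e["type"] for e in history
            if latest - e["timestamp"] <= timedelta(hours=window_hours)
        ]
        # Flag if the risky sequence appears, in order, within the window
        it = iter(types_in_window)
        if all(step in it for step in RISKY_SEQUENCE):
            flagged.add(account_id)
    return flagged
```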
Monitor for early signs of brand abuse
AI phishing attacks often begin with impersonation. The earlier you detect this activity, the better chance you have of taking it down or warning customers before damage is done. Here are some tips for proactive brand monitoring (a small domain-check sketch follows the list):
- Set up alerts for newly registered domains that resemble your brand (using tools like DNSTwist or domain-monitoring services).
- Use screenshot search tools or phishing threat feeds to identify cloned sites or spoofed login pages.
- Conduct dark web monitoring to identify if phishing kits impersonating your brand are being sold or circulated.
- Encourage your fraud or trust teams to join Telegram fraud groups and underground forums (not to engage, but to listen). These spaces often reveal what tactics are trending, and which brands are being targeted next. Phishing kits are often shared long before the first customer is hit. Being plugged into those early signals gives you a head start.
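Here’s a tiny, self-contained sketch of the lookalike-domain idea from the first bullet. It generates a handful of common typo permutations and checks whether they resolve in DNS; a dedicated tool like DNSTwist covers far more permutation strategies, so treat this as an illustration only.

```python
# A small sketch of lookalike-domain monitoring: generate common typo
# permutations of a brand name and alert on ones that resolve in DNS.
import socket

def lookalike_candidates(brand: str, tld: str = "com") -> set[str]:
    candidates = set()
    for i in range(len(brand)):
        candidates.add(brand[:i] + brand[i + 1:])          # character omission
        if i < len(brand) - 1:
            swapped = list(brand)
            swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
            candidates.add("".join(swapped))               # adjacent-character swap
    for a, b in [("o", "0"), ("l", "1"), ("i", "1"), ("e", "3")]:
        candidates.add(brand.replace(a, b))                # confusable substitutions
    candidates.discard(brand)
    return {f"{c}.{tld}" for c in candidates if c}

def is_registered(domain: str) -> bool:
    try:
        socket.gethostbyname(domain)
        return True
    except OSError:
        return False

if __name__ == "__main__":
    for domain in sorted(lookalike_candidates("examplebrand")):  # replace with your brand
        if is_registered(domain):
            print(f"ALERT: lookalike domain resolves: {domain}")
```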

Get involved in a data-sharing consortium
AI fraud often spans multiple platforms, making it difficult for any single organization to see the full picture. Joining a fraud consortium can help you spot attacks earlier by surfacing risk insights across banks, fintechs, merchants, and crypto platforms.
Consider joining Sonar, our member-led fraud data consortium. Sonar provides real-time insights on scams, first-party fraud, and counterparty risk, without requiring a complex integration. With a single API request, you can assess an entity's behavior and risk profile based on activity across the broader ecosystem.
Hire Sardine to prevent AI fraud attacks
Sardine was built for this new wave of fraud. We use advanced device intelligence and behavior biometrics to surface the signals AI-driven attacks try to hide, such as social engineering, spoofed environments, and bots that act just enough like real users.
Our machine learning models are trained to catch early signs of these attacks and adapt as tactics evolve. And because Sardine monitors the full customer journey (not just transactions), we can spot shifts in behavior or connections between sessions that others miss.
Just as important, everything runs in real time. When these scams scale, a few minutes can make all the difference.
With Sonar, our member-led fraud consortium, you also get visibility beyond your own walls so you can benefit from risk signals across banks, fintechs, merchants, and crypto platforms.
If AI fraud attacks are hitting your systems, please reach out. We’d love to help.