Machine vs. Machine: Inside the AI Fraud War
AI has given financial fraudsters many new weapons, sparking an arms race between criminals and defenders. We spoke to three fintech leaders to learn what’s really at stake in the AI fraud war.
AI has given fraudsters new tools to automate account takeovers, deepfakes, and synthetic identities. In your view, which of these emerging fraud vectors is most concerning today, and why?
Laurent Charpentier, CEO, Yooz: Synthetic identity fraud is a significant threat because it hides in plain sight. Fraudsters stitch real and fake data together so well that legacy defenses miss it until money is gone. When fighting accounts payable fraud, AI changes the game by detecting subtle anomalies at the document level, like altered invoices or forged fields, before payments move. It’s like having an antivirus engine for finance workflows.
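Charpentier's document-level anomaly detection can be sketched in miniature. This is a hypothetical illustration, not Yooz's actual model: a toy z-score check that flags an invoice whose amount deviates sharply from a vendor's billing history before the payment moves.

```python
from statistics import mean, stdev

def flag_anomalous_invoice(history: list[float], amount: float,
                           z_threshold: float = 3.0) -> bool:
    """Flag an invoice amount that deviates sharply from a vendor's history.

    Toy z-score check for illustration; production systems combine many
    document-level signals (altered fields, duplicate invoice numbers,
    changed bank details), not a single statistic.
    """
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu  # any deviation from a constant history is suspect
    return abs(amount - mu) / sigma > z_threshold

# A vendor that usually bills around $1,000 suddenly submits a $9,500 invoice.
history = [980.0, 1010.0, 995.0, 1005.0, 1000.0]
print(flag_anomalous_invoice(history, 9500.0))  # flagged for review
print(flag_anomalous_invoice(history, 1020.0))  # within normal variation
```

The point of running the check at the document level is timing: the anomaly surfaces while the invoice is still in the approval workflow, before money moves.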
Ruston Miles, Founder & Chief Cybersecurity Advisor, Bluefin Payment Systems: Synthetic identities remain one of the most concerning vectors because they combine authentic and fabricated data to create customers who can bypass traditional controls and operate undetected for long periods. Unlike account takeovers, which leave a digital trail, synthetic IDs often look like legitimate consumers until the fraudster chooses to “bust out.” What’s especially worrying is the convergence of synthetic IDs with deepfakes.
Roman Eloshvili, Founder and CEO of XData Group: Synthetic identities worry me the most. They are a lot harder to spot than deepfakes, and with the power of AI, a criminal can create hundreds of “legitimate” profiles with relative ease. Any one of those can potentially be good enough to pass KYC, and once they get into the system, they can operate for months or longer before being noticed.
How do you think about staying a step ahead of fraudsters in what feels like an accelerating arms race between attackers and defenders?
Roman Eloshvili, Founder and CEO of XData Group: The honest truth is, you can’t stay a step ahead forever; it’s an endless competition where the lead keeps changing. If you want to tip the scales in your favor, the key element is agility: building systems that can adapt as fast as attackers do.
That means continuous retraining, real-time data sharing, and accepting that defenses must evolve weekly, not yearly. Stop aiming for a perfect defense, since no such thing exists anyway, and instead focus on improving the speed of detection and response. Catching fraud quickly after it happens is often far more realistic than trying to prevent 100% of it.
Laurent Charpentier, CEO, Yooz: You win by making fraud prevention strategic, not reactive. AI provides scale and speed by flagging anomalies in real time, while humans bring judgment and context. Layer in audit-ready traceability and you strengthen defenses even further. At Yooz, we see AI as both a shield and a sensor, protecting payments while giving CFOs visibility into risks before they escalate.
Is AI really helping to reduce the volume of flags on legitimate transactions?
Roman Eloshvili, Founder and CEO of XData Group: Yes and no. AI is definitely improving how fast and accurately banks can run checks, especially by learning from past false positives. But even as the models get better, there is still the other side of the equation: human decision-making.
Most compliance teams lean toward a “better safe than sorry” approach, so they prefer systems that over-flag. As a result, false positives remain a problem, and legitimate customers get caught up in them. The tech may be improving, but the customer experience is still clunky in parts.
Ruston Miles, Founder & Chief Cybersecurity Advisor, Bluefin Payment Systems: Yes, AI is improving the precision of fraud detection models, which helps reduce false positives and customer friction. But this only works when AI is paired with strong, layered security and ongoing human oversight. Too often, organizations treat AI outputs as unquestionable, which creates risk when models drift or attackers deliberately try to manipulate them. AI should be seen not as a “harmless helper” but as a powerful tool that requires constant monitoring, transparency, and feedback loops.
So, what are the major AI blind spots in payments, and what should companies be thinking of now that they may not be?
Ruston Miles, Founder & Chief Cybersecurity Advisor, Bluefin Payment Systems: One major blind spot is treating AI as a shortcut rather than a critical system. Many organizations are embedding AI into fraud and compliance workflows without applying the same rigor and controls they would to other sensitive systems. For example, AI is increasingly being given access to raw payments data, but companies aren’t always considering how to audit or explain its decisions. This creates new attack surfaces and accountability gaps.
Laurent Charpentier, CEO, Yooz: AI delivers the most value when it is fed clean, consistent data and paired with human expertise. The risk isn’t that AI will fail outright, but that companies will treat it as a stand-alone solution. Fraudsters adapt quickly, so models must be retrained and checked against real-world context. The opportunity lies in building a system where AI handles the heavy lifting, while people bring oversight and judgment.
What other advice do you have for leaders in payments?
Laurent Charpentier, CEO, Yooz: The real story is not just fraud, it is waste. AI doesn’t just lighten the load, it elevates the work. By eliminating repetitive tasks and surfacing real-time insights, finance teams get the space to focus on strategy, from optimizing vendor terms to improving cash flow. That is the promise of Lean Financial Operations: fighting fraud and inefficiency while turning finance into a growth engine.
Ruston Miles, Founder & Chief Cybersecurity Advisor, Bluefin Payment Systems: Winning the AI arms race isn’t just about better algorithms, it’s about building resilience into the transaction itself. Security-by-design measures like tokenization, encryption, and proxy architectures ensure that even if fraudsters penetrate the perimeter, the data they steal is worthless. That shifts the economics of fraud.
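Miles’s point about tokenization can be illustrated with a minimal sketch. This is a hypothetical toy, not Bluefin’s implementation: the merchant stores only a random token, while the mapping to the real card number lives in a separate vault, so breaching the merchant database yields nothing usable.

```python
import secrets

class TokenVault:
    """Toy token vault mapping random tokens to real card numbers (PANs).

    Illustrative only: real vaults add hardware security modules, strict
    access controls, and often format-preserving tokens.
    """
    def __init__(self):
        self._store = {}  # token -> PAN, kept separate from merchant systems

    def tokenize(self, pan: str) -> str:
        token = secrets.token_hex(8)  # random value with no relation to the PAN
        self._store[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self._store[token]  # only the vault can reverse the mapping

vault = TokenVault()
token = vault.tokenize("4111111111111111")

# The merchant database holds only the token. Stealing it is useless
# without separately breaching the vault, which shifts the economics of fraud.
merchant_record = {"customer": "alice", "card_token": token}
```

Because the token is random rather than derived from the card number, there is nothing to crack offline, which is what makes the stolen data “worthless” in Miles’s framing.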
Roman Eloshvili, Founder and CEO of XData Group: Another big blind spot is fragmentation. Fraudsters don’t stop at one bank or platform; they move stolen funds from place to place. But AI models usually sit in silos, which means there is no shared intelligence to aid the chase. Until companies adjust their approach to this, criminals will keep exploiting the cracks between systems.
Roman Eloshvili, Founder and CEO of XData Group
Ruston Miles, Founder & Chief Cybersecurity Advisor, Bluefin Payment Systems
Laurent Charpentier, CEO, Yooz