The Deepfake Reckoning: Why Crypto’s Next Security Battle Will Be Against Synthetic Humans

Opinion

Crypto platforms must adopt proactive, multi-layered verification architectures that don’t stop at onboarding but continuously validate identity, intent, and transaction integrity throughout the user journey, argues Ilya Brovin, chief growth officer at Sumsub.

By Ilya Brovin | Edited by Cheyenne Ligon

Dec 17, 2025, 2:00 p.m.

Robots (Unsplash/Sumaid pal Singh Bakshi/Modified by CoinDesk)

Generative AI has changed the economics of deception. What used to take professional tools and hours of editing can now be done with a few clicks. A realistic fake face, a cloned voice, or even a full video identity can be generated in minutes and used to pass verification systems that once seemed foolproof.

Over the past year, I’ve seen evidence that deepfake-driven fraud is accelerating at a pace most organizations aren’t prepared for. Deepfake content on digital platforms grew 550% between 2019 and 2024, and is now considered one of the key global risks in today’s digital ecosystem. This isn’t just a technological shift — it’s a structural challenge to how we verify identity, authenticate intent, and maintain trust in digital finance.

Crypto adoption in the U.S. continues to surge, fueled by growing regulatory clarity, strong market performance, and increased institutional participation. The approval of spot Bitcoin ETFs and clearer compliance frameworks have helped legitimize digital assets for both retail and professional investors. As a result, more Americans are treating crypto as a mainstream investment class — but the pace of adoption still outstrips the public’s understanding of risk and security.

Many users still rely on outdated verification methods designed for an era when fraud meant a stolen password, not a synthetic person. As AI generation tools become faster and cheaper, the barrier to entry for fraud has fallen to almost zero, while many defenses haven’t evolved at the same speed.

Deepfakes are being used in everything from fake influencer livestreams that trick users into sending tokens to scammers to AI-generated video IDs that bypass verification checks. We’re seeing an increase in multi-modal attacks, where scammers combine deepfaked video, synthetic voices, and fabricated documents to build entire false identities that hold up under scrutiny.

As journalist and podcaster Dwarkesh Patel noted in his book, “The Scaling Era: An Oral History of AI, 2019-2025,” we are now in the era of scaling fraud. The challenge isn’t just sophistication; it’s scale. When anyone can create a realistic fake with consumer-grade software, the old model of “spotting the fake” no longer works.

Most verification and authentication systems still depend on surface-level cues: eye blinks, head movements, and lighting patterns. But modern generative models replicate these micro-expressions with near-perfect fidelity — and verification attempts can now be automated with agents, making attacks faster, smarter, and harder to detect.

In other words, visual realism can no longer be the benchmark for truth. The next phase of protection must move beyond what’s visible and focus on behavioral and contextual signals that can’t be mimicked. Device patterns, typing rhythms, and micro-latency in responses are becoming the new fingerprints of authenticity. Eventually, this will extend into some form of physical authorization — from digital IDs to implanted identifiers, or biometric methods like iris or palm recognition.
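To make that concrete, here is a minimal, hypothetical sketch of one such behavioral signal: scoring a session’s typing rhythm against a user’s enrolled profile. The function names, the sample timestamps, and the threshold logic are illustrative assumptions, not a description of any particular vendor’s system.

```python
# A minimal sketch of typing-rhythm anomaly scoring. All names, sample
# data, and thresholds are illustrative assumptions, not a production system.
from statistics import mean, stdev

def inter_key_intervals(timestamps_ms: list[float]) -> list[float]:
    """Convert raw key-press timestamps into inter-key intervals (ms)."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def rhythm_anomaly_score(enrolled: list[float], observed: list[float]) -> float:
    """Z-score distance between a user's enrolled typing rhythm and a new
    session. Higher means the session looks less like the enrolled profile."""
    mu, sigma = mean(enrolled), stdev(enrolled)
    if sigma == 0:
        return 0.0
    return abs(mean(observed) - mu) / sigma

# Example: an enrolled profile of natural, slightly irregular keystrokes
# versus a session with suspiciously uniform cadence, a common trait of
# scripted or agent-driven input.
profile = inter_key_intervals([0, 180, 390, 610, 790, 1010])
session = inter_key_intervals([0, 100, 200, 300, 400, 500])
print(f"anomaly score: {rhythm_anomaly_score(profile, session):.2f}")
```

A single signal like this is weak on its own; its value comes from being fused with device and transaction context, as discussed below.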

There will be challenges, especially as we grow more comfortable authorizing autonomous systems to act on our behalf. Can these new signals be mimicked? Technically, yes — and that’s what makes this an ongoing arms race. As defenders develop new layers of behavioral security, attackers will inevitably learn to replicate them, forcing constant evolution on both sides.

As AI researchers, we have to assume that what we see and hear can be fabricated. Our task is to find the traces that fabrication can’t hide.

The next year will mark a turning point for regulation, as trust in the crypto sector remains fragile. With the GENIUS Act now law and other frameworks like the CLARITY Act still under discussion, the real work shifts to closing the gaps that regulation hasn’t yet addressed — from cross-border enforcement to defining what meaningful consumer protection looks like in decentralized systems. Policymakers are beginning to establish digital-asset rules that prioritize accountability and safety, and as additional frameworks take shape, the industry is inching toward a more transparent and resilient ecosystem.

But regulation alone won’t resolve the trust deficit. Crypto platforms must adopt proactive, multi-layered verification architectures that don’t stop at onboarding but continuously validate identity, intent, and transaction integrity throughout the user journey.

Trust will no longer hinge on what looks real but on what can be proven real. This marks a fundamental shift that redefines the infrastructure of finance.

Trust can’t be retrofitted; it has to be built in. Since most fraud happens after onboarding, the next phase depends on moving beyond static identity checks toward continuous, multi-layered prevention. Linking behavioral signals, cross-platform intelligence, and real-time anomaly detection will be key to restoring user confidence.
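As a rough illustration of what continuous, multi-layered prevention can look like in code, the sketch below blends several per-session signals into one risk score and re-evaluates it on each sensitive action. The signal names, weights, and threshold are hypothetical placeholders; a real deployment would calibrate them against labeled fraud data.

```python
# A hedged sketch of continuous, multi-signal risk scoring. Field names,
# weights, and the threshold are hypothetical, chosen only for illustration.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    behavior_anomaly: float  # e.g., typing-rhythm score, normalized 0.0-1.0
    device_mismatch: float   # new device or fingerprint drift, 0.0-1.0
    txn_anomaly: float       # deviation from the user's transaction history
    network_risk: float      # cross-platform intelligence, e.g., shared blocklists

WEIGHTS = {"behavior_anomaly": 0.3, "device_mismatch": 0.2,
           "txn_anomaly": 0.35, "network_risk": 0.15}
STEP_UP_THRESHOLD = 0.6  # illustrative cut-off for re-verification

def risk_score(s: SessionSignals) -> float:
    """Weighted blend of per-session signals into a single 0.0-1.0 score."""
    return sum(weight * getattr(s, name) for name, weight in WEIGHTS.items())

def decide(s: SessionSignals) -> str:
    """Re-evaluated on every sensitive action, not just once at onboarding."""
    return "step_up_verification" if risk_score(s) >= STEP_UP_THRESHOLD else "allow"

# Example: a known device, but odd typing and an out-of-pattern withdrawal.
session = SessionSignals(behavior_anomaly=0.7, device_mismatch=0.1,
                         txn_anomaly=0.9, network_risk=0.5)
print(decide(session), f"(score={risk_score(session):.2f})")
```

The point is the structure rather than the specific numbers: verification becomes a recurring decision throughout the session instead of a one-time gate at onboarding.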

Crypto’s future won’t be defined by how many people use it, but by how many feel safe doing so. Growth now depends on trust, accountability, and protection in a digital economy where the line between real and synthetic keeps blurring.

At some point, our digital and physical identities will need even further convergence to protect ourselves from imitation.

Note: The views expressed in this column are those of the author and do not necessarily reflect those of CoinDesk, Inc. or its owners and affiliates.
