Digital Gold for Predators: Tenable Expert Warns of AI-Powered “Dark Age” for Romance Scams Ahead of Valentine’s Day

by Henrylito Tacio

AI romance scams are rapidly evolving as criminals leverage frontier and open-source artificial intelligence to scale predatory operations within forced-labour compounds. Experts warn that 2026 may mark the beginning of a new “dark age” for online romance fraud.

The warning is grounded in new research and firsthand accounts from inside the specialised forced-labour compounds where these operations are run.

Romance scams have entered a “dark age,” evolving from disorganised individual actors into a multi-billion-dollar criminal enterprise. According to the FTC, investment scams, the primary “endgame” for romance fraud, resulted in $5.7 billion in losses in 2024, a figure experts believe is a conservative estimate.

“2026 marks our entry into a dark age of romance scams,” says Satnam Narang, Senior Staff Research Engineer at Tenable. “The availability of powerful frontier AI models has provided digital gold for scammers. For the price of a cup of coffee, predators can now leverage these tools to generate linguistically perfect, emotionally resonant messages designed to ensnare victims across the globe.”

The Industrialisation of Deception: 4 Key Trends

Narang identifies four critical pillars currently driving this new era of fraud:

  • The AI “Frontier”: Scammers now use LLMs to eliminate traditional “red flags” like broken grammar and inconsistent narratives. By automating the “grooming” phase, they can maintain dozens of highly persuasive, persona-driven conversations simultaneously.
  • The “AI Room”: Sophisticated operations now utilise dedicated “AI Rooms” where deepfake technology enables real-time, face-swapped video calls. This allows a scammer to “prove” their identity visually, effectively dismantling the old advice to “just hop on a video call” to verify a match.
  • The Investment Pivot: Narang emphasises that “romance” is now simply the hook for “pig butchering” schemes. Victims are systematically “fattened” with trust and staged financial success on fraudulent platforms before being “slaughtered” for their life savings.
  • The Open-Source Threat: While frontier models have guardrails, free open-source models like DeepSeek and Qwen allow scammers to operate without ethical restrictions. These models now reach near-parity with paid services, providing a powerful, unrestricted toolkit for malicious use.

Discussing the chilling human cost behind the technology, Narang adds: “These scams are the engine of a multi-billion-dollar industry often built on the backs of trafficked individuals. Inside these compounds, victims are forced to work ‘sales floors’ governed by strict quotas. They even ring bells and gongs to celebrate when a victim’s life savings are stolen. While the technology is new, the psychological manipulation is as old as time; it just happens at a scale we’ve never seen before.”

Consumers are urged not to be swayed by screenshots of earnings or claims of insider expertise. If a match brings up investments, whether aggressively or “coyly”, it is a scam. The moment the conversation turns to money, the response is simple: cut contact, unmatch, and report.

Notes to editor:

  • Technical Mechanics of the “AI Room”: Scam compounds utilise specialised “AI Rooms” to bypass visual verification. These setups use “virtual camera” software to intercept video feeds on platforms like WhatsApp or FaceTime. Real-time face-swapping software (such as DeepFaceLive) maps the scammer’s facial movements onto a high-resolution “target” persona. Technical artifacts or “glitches” in the AI are often masked by the scammer claiming a poor internet connection or using low-light environments.
  • The Role of Open-Source Models (DeepSeek & Qwen): While “Frontier” models (e.g., ChatGPT, Gemini) have built-in safety filters to refuse requests related to fraud, open-source models like DeepSeek and Qwen can be run locally on private servers. This allows scammers to remove all “guardrails,” enabling the generation of unlimited, unrestricted predatory content without fear of account suspension or detection by the AI provider.
  • Statistical Context: The $5.7 billion figure cited for 2024 refers specifically to investment fraud, which is the secondary phase of a romance scam (Pig Butchering). Total losses are estimated to be significantly higher as many victims do not report losses due to the perceived social stigma associated with romance-based deception.
  • Satnam Narang is a Senior Staff Research Engineer at Tenable with over a decade of experience in identifying and exposing social engineering trends. He is a leading expert on the intersection of AI technology and modern cybercrime.