AI Is Driving A Surge In Sextortion Scams On Dating Apps

Image by Phạm Trần Hoàn Thịnh, from Unsplash

Reading time: 2 min

Artificial intelligence is making it easier for scammers to run sextortion schemes on dating apps, raising urgent concerns for teens, students, and law enforcement officials.

In a rush? Here are the quick facts:

  • Sextortion scams are rising sharply, especially among teens and students.
  • Scammers pose as romantic interests using bots and fake profiles.
  • Nearly 8,000 UK sextortion-related blackmail cases were logged in 2023.

The practice of sextortion is not new, but AI lets criminals build more believable fake identities, contact victims faster, and produce artificial sexual content with deepfake technology, as reported by Vice.

The FBI defines sextortion as a scheme in which criminals coerce minors into sharing explicit photos, then demand additional images or money in exchange for not releasing them.

Criminals now pose as romantic interests behind AI-generated profiles, often combining deepfake photos with chatbots. The FBI warns that victims typically believe they are talking to peers offering romance or gifts, when in fact they are being manipulated.

Blackmail scams targeting students have risen sharply, according to the National Crime Agency. BBC data shows that blackmail cases involving sextortion climbed to nearly 8,000 in 2023, up from just 23 in 2014.

BBC reports that Jim Winters, head of economic crime at Nationwide, advised young people not to stay silent: “Blackmail is one of the hardest things to face and it’s happening more often. It’s not easy but if something doesn’t feel right, speak up.”

Scammers also use deepfake technology to superimpose victims’ faces onto pornographic content, which means blackmail can begin even if the victim never shared any explicit material.

The BBC notes that scams can often be spotted through repeated phrases in messages, suspiciously flawless profile pictures, and unnatural dialogue patterns. Experts from the BPS recommend reverse image searches to verify suspicious photos, although these checks become less reliable against AI-generated content.

Experts emphasize the need for open dialogue, education, and updated prevention strategies, as limited research exists on how AI supports these crimes. These efforts are essential to protect the most vulnerable from digital threats.
