A new AI-driven fraud tactic is gaining ground: the use of real people's faces and voices to create highly convincing fake digital identities.
According to the report, scam networks recruit individuals through job ads and social platforms, offering money in exchange for video recordings, selfies and voice samples. Often without fully understanding the consequences, these individuals supply the raw material for AI-generated avatars that are later used in fraudulent activity.
Using this material, cybercriminals build fake profiles capable of interacting with targets through messages, voice notes and even real-time video calls. These identities are then used in romance scams, impersonation schemes and financial fraud, dramatically increasing the credibility of the deception.
Unlike traditional scams, where visual glitches or awkward language could raise suspicion, these techniques enable personalized, visually realistic interactions that are far harder to detect.
The trend highlights how generative AI is not only increasing the scale of fraud, but also raising its sophistication.