The emergence of deepfake scams in video conferencing is setting a new standard in cybercrime, challenging security professionals around the globe. Deepfake scams targeted Singapore’s leaders in 2023, and the Asia-Pacific region witnessed a 1,530% rise in deepfake crimes between 2022 and 2023. The EU and the US are reportedly working on regulating AI technology. Deepfakes remain a pressing concern because the scams are not only persisting but growing more creative.
Deepfake Hazards in Video Conferencing
The threat of deepfakes was starkly illustrated in December 2023 when videos featuring Singapore’s Prime Minister and Deputy Prime Minister were used to promote fraudulent cryptocurrency schemes. Advances in AI have enabled cybercriminals to create videos that accurately replicate people’s faces, voices, and speech patterns, compelling victims to act on fabricated information. This incident serves as a severe warning regarding the misuse of deepfake technology for malicious purposes.
Scammers rely on deepfakes for various crimes, from impersonation and extortion to fraud. Public trust in virtual communications has become a target, heightening concerns among security professionals and law enforcement agencies tasked with countering these threats. The alarming 1,530% rise in deepfake-related crimes in the Asia-Pacific region between 2022 and 2023 underscores the necessity for immediate action.
Deepfake technology means anyone can take your face, your voice and steal your identity for whatever they want.
They don’t need your consent, and can generate entire videos from a single picture.
96% of deepfakes are non-consensual and pornographic – those that aren’t, are…
— ControlAI (@ai_ctrl) December 19, 2023
Global Regulatory Efforts and Challenges
Regulating deepfake technology presents significant challenges. The European Union leads in AI standardization, the United States is drafting legislation, and the United Nations is navigating the complexities of a cybercrime convention. Without a globally unified approach, however, countries such as China, South Korea, and Australia are implementing independent measures, creating an uneven regulatory landscape.
Tech companies are developing tools to detect deepfakes, though effectiveness remains inconsistent given varied levels of commitment across the private sector. Some regions, however, have reported successful mitigation through improved staff awareness and robust authentication technologies, pointing to practical ways of combating the growing deepfake threat.
Scammers are now using AI to create “deep fake” audio and video links, making it sound like celebrities, elected officials, or even friends and family are calling. Review our tips on how to avoid these robocall and robotext scams. https://t.co/mbOilqQYKn
— The FCC (@FCC) March 5, 2024
Deepfake Phishing: The Hidden Threat
Deepfake phishing, integrating social engineering with AI-generated deceptive media, has emerged as a formidable cybercrime tactic. This method saw a shocking 3,000% rise in incidents during 2023 alone. Cybercriminals often target corporate leaders, using synthetic images, videos, and audio to fabricate urgent scenarios requiring immediate action, thus exploiting vulnerabilities in organizational security.
Detection becomes increasingly difficult as AI sophistication grows, producing replicas of a person’s voice and appearance that are nearly indistinguishable from the real individual. Moreover, scammers often exploit the likenesses of public figures such as Elon Musk in social media ads and news articles to lure victims into fraudulent investments. Awareness and alertness are becoming vital for individuals and companies facing this digital menace.