The recent release of a new song by The Beatles has captivated fans worldwide, a release made possible by artificial intelligence (AI), which was used to restore an old recording and improve its audio quality.
However, as excitement surrounds this musical feat, concerns arise over the potential misuse of AI to create deepfake voices and images.
While deepfakes and the tools used to create them are not yet widespread, their potential use in fraudulent activities is a cause for concern, especially given how quickly the technology is advancing.
Capabilities of Voice Deepfakes
OpenAI recently showcased an Audio API model capable of generating human-sounding speech and transcribing voice input into text, representing the closest approximation to real human speech thus far.
While the current iteration of the OpenAI model cannot produce deepfake voices, it serves as an indicator of the rapid development of voice generation technologies.
Although high-quality deepfake voices indistinguishable from real human speech are not yet commonplace, recent months have seen the release of more voice-generation tools, making the technology increasingly accessible.
As these tools become easier to work with, the near future may bring models that combine simplicity of use with high-quality results.
Instances of Fraud and Protection Measures
Although instances of fraud using artificial intelligence are still rare, examples of successful attacks are already known.
Venture capitalist Tim Draper recently warned his Twitter followers about scammers using his voice in fraud schemes, a clear sign of the increasing sophistication of AI-driven fraud.
Protecting oneself from potential threats posed by deepfake voices is currently challenging, as the technology to detect and prevent them is still in its early stages.
However, some precautions can be taken. Individuals are advised to listen carefully during phone calls, paying attention to the quality of the audio and the naturalness of the voice.
Additionally, asking unexpected questions can help reveal artificial voices, as can installing reliable security solutions to avoid suspicious websites, payments, and malware downloads.
Dmitry Anikin, Senior Data Scientist at Kaspersky, advises against overstating the threat of deepfake voices, noting that current technology is unlikely to create voices indistinguishable from real humans.
Nonetheless, individuals should remain vigilant and prepared for the possibility of advanced deepfake fraud becoming a new reality in the near future.