by Larisa Redins, Activist Post:
In a recent experiment, Vice.com writer Joseph Cox used an AI-generated voice to bypass Lloyds Bank security and access his account.
To achieve this, Cox used a free service from ElevenLabs, an AI voice generation company that supplies voices for newsletters, books, and videos.
Cox recorded five minutes of speech and uploaded it to ElevenLabs. After making some adjustments, such as having the AI read a longer body of text for a more natural cadence, the generated audio got past Lloyds' voice security.
“I couldn’t believe it had worked,” Cox wrote in his Vice article. “I had used an AI-powered replica of a voice to break into a bank account. After that, I accessed the account information, including balances and a list of recent transactions and transfers.”
Multiple banks in the United States and Europe use voice authentication to speed up logins over the phone. While some banks claim that a customer's voice is as unique as a fingerprint, this experiment demonstrates that voice-based biometric security is not foolproof.
ElevenLabs did not respond to multiple requests for comment on the hack, Cox says. In an earlier statement, however, the firm's co-founder, Mati Staniszewski, said new safeguards reduce misuse and help authorities identify those who break the law.
Preventing AI voice misuse
Technology to counter this kind of attack is, at least in theory, already on the market.
ID R&D has developed a multi-modal biometric authentication system that passively verifies multiple biometrics during a chat session. The technology is designed to ensure that conversations are authentic, with only genuine people, not bots, doing the talking.
Nuance Communications is also working to combat AI voice misuse.
Brett Beranek, vice president and general manager of security and biometrics at Nuance, has said that the company's AI products can detect fraud from the way people converse during live chat sessions.
The software examines word choice, grammatical accuracy, syntax, and the emojis and acronyms a person uses, then compares those elements with the patterns of legitimate customers as well as those of known fraudsters.
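To make that comparison concrete, here is a minimal, illustrative sketch in Python of the general idea: profile a chat message with a handful of stylistic features and check which stored profile it sits closer to. The feature set, profile vectors, and distance check are hypothetical stand-ins and are not based on Nuance's actual product.

```python
# Illustrative sketch only; the features, profiles, and threshold logic
# below are hypothetical and not drawn from any vendor's real system.
import re
from math import sqrt

ACRONYMS = {"lol", "brb", "imo", "thx"}                  # toy acronym list
EMOJI_PATTERN = re.compile(r"[\U0001F300-\U0001FAFF]")   # rough emoji range

def extract_features(message: str) -> list[float]:
    """Turn one chat message into a small numeric style vector."""
    words = message.split()
    n = max(len(words), 1)
    avg_word_len = sum(len(w) for w in words) / n
    acronym_rate = sum(w.lower().strip(".,!?") in ACRONYMS for w in words) / n
    emoji_rate = len(EMOJI_PATTERN.findall(message)) / n
    punctuation_rate = sum(message.count(c) for c in ".,!?") / n
    return [avg_word_len, acronym_rate, emoji_rate, punctuation_rate]

def distance(a: list[float], b: list[float]) -> float:
    """Euclidean distance between two feature vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def score_session(message: str, customer_profile: list[float],
                  fraud_profile: list[float]) -> str:
    """Flag the message if its style sits closer to the known-fraud pattern."""
    features = extract_features(message)
    if distance(features, fraud_profile) < distance(features, customer_profile):
        return "flag for review"
    return "consistent with the customer"

# Made-up profile vectors standing in for patterns learned from past sessions.
customer = [4.8, 0.01, 0.0, 0.12]
fraudster = [3.2, 0.15, 0.08, 0.30]
print(score_session("thx lol pls move the funds asap 🙂", customer, fraudster))
# With these toy numbers the message lands closer to the fraud profile.
```

A production system would of course learn its profiles from large volumes of past sessions and weigh many more signals, but the underlying step is the same: reduce a conversation to measurable features and compare them against known-good and known-bad patterns.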