Ethereum co-founder Vitalik Buterin discussed Web3 security, highlighting its growing importance in a world where deepfakes are increasingly prevalent.
On February 9, Buterin cited a recent report about a company that lost $25 million after a finance employee was fooled by a convincing deepfake video call.
Web3 Security Highlighted by Deepfake Threats
Deepfakes, fake audio or video generated by AI, are becoming increasingly common. As a result, Buterin said, it is no longer safe to authenticate people simply by seeing or hearing them.
“The fact remains that in 2024, a person’s audio or video stream will no longer be a secure way to authenticate oneself,” Buterin said.
He explained that cryptographic methods, such as signing a message with a private key, are not sufficient on their own: if the key itself is compromised, they defeat the purpose of having multiple signers verify a person’s identity. Instead, asking personalized ‘security questions’ based on shared experiences is an effective way to authenticate someone, he added.
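To illustrate that point, here is a minimal sketch (not from Buterin’s post) using Python’s `cryptography` package: a valid signature proves only that someone holds the private key, so an attacker who has stolen the key passes exactly the same check.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# A signature check verifies possession of a key, not the human behind it.
private_key = Ed25519PrivateKey.generate()   # whoever holds this key "is" the signer
public_key = private_key.public_key()

message = b"Please approve the wallet recovery"
signature = private_key.sign(message)

try:
    # This passes for the legitimate owner AND for anyone who stole the key.
    public_key.verify(signature, message)
    print("Signature valid: key possession proven, personal identity not proven")
except InvalidSignature:
    print("Signature invalid")
```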
Good questions probe the “micro” details of shared experiences: details that are unique, hard for outsiders to guess, and easy for the people involved to remember.
“People often stop engaging with security practices if they become dull and boring, so it’s healthy to make security questions fun,” Buterin suggested.
Security questions should be combined with other techniques, he said, such as pre-agreed code words, multi-channel verification of information, protection against man-in-the-middle attacks, and delays or restrictions on irreversible operations.
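As a purely illustrative sketch of how such layers might be combined (the class, names, and 24-hour delay below are assumptions for illustration, not anything Buterin prescribes), the following Python snippet gates an irreversible operation behind both a pre-agreed code word and a mandatory waiting period:

```python
import hashlib
import hmac
import time
from typing import Optional

DELAY_SECONDS = 24 * 60 * 60  # illustrative 24-hour cooling-off period


class GuardedOperation:
    """Hypothetical wrapper that layers two independent checks."""

    def __init__(self, code_word: str):
        # Store only a hash of the pre-agreed code word, never the plaintext.
        self._code_hash = hashlib.sha256(code_word.encode()).hexdigest()
        self._requested_at: Optional[float] = None

    def request(self, supplied_word: str) -> bool:
        """Layer 1: the requester must know the pre-agreed code word."""
        supplied_hash = hashlib.sha256(supplied_word.encode()).hexdigest()
        if not hmac.compare_digest(self._code_hash, supplied_hash):
            return False
        self._requested_at = time.time()
        return True

    def execute(self) -> bool:
        """Layer 2: the irreversible step runs only after the delay,
        leaving time to cancel if the request turns out to be fraudulent."""
        if self._requested_at is None:
            return False
        if time.time() - self._requested_at < DELAY_SECONDS:
            return False
        # ... perform the irreversible transfer here ...
        return True
```

The hashed code word and the enforced delay are deliberately simple stand-ins; in practice each layer would typically be backed by confirmation over a separate channel.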
Person-to-person security questions differ from the business-to-person kind, such as bank security questions, and must be tailored to the specific people involved.
Buterin concluded that no single technique is perfect, but layering context-specific checks can provide effective Web3 security even in a world where audio and video can be spoofed.
“In the post-deepfake world, we must adapt our strategies to the new reality of what is now easy to fake and what is still difficult to fake. But as long as you do that, it’s still possible to stay secure,” he emphasized.
On February 9, it was reported that deepfake voices, images and other manipulated online content have already had a negative impact on this year’s US elections. The White House said it is exploring ways to verify all communications and prevent various forms of generative AI forgery, manipulation, and abuse.
Last month, the World Economic Forum (WEF) said AI-generated misinformation and deepfakes were the world’s biggest near-term threats. Also in January, MicroStrategy founder Michael Saylor warned about deepfakes that try to scam users out of their bitcoins.