“A reasonably good clone can be created with under a minute of audio, and some are claiming that even a few seconds may be enough.” My mom is wary about answering calls for fear her voice will be cloned for a future virtual kidnapping.
Unless I know who you are, I’m not answering your call. 90% of the time it’s a robot and the other 10% can leave a voicemail.
Isn’t a voicemail worse for detecting deepfakes, since the fake doesn’t have to dynamically listen and respond?
I’ll go one further - unless it’s my doc, my wife, or my boss, I’m neither answering the call nor listening to the voicemail. That’s what easily skimmable voicemail transcription is for…
I don’t love the privacy implications of transcribed voicemail, ofc, but it’s better for my own privacy/threat model than answering the phone to robots, scammers, and the like. It’s also a hell of a lot better for my mental health than listening to them.
Real kidnappers will not be happy about this: as deepfakes become more prevalent, ransom calls will get ignored more and more. Do they have a union that can go on strike to raise awareness about this?
“Improving” is not a word I would use in this case.
This is why code words are important.
As awful as it sounds, this needs to be set up between family members. Agree on a phrase or code word to check and make sure they are who they say they are. This is already common with alarm-system monitoring companies: they’ve got to make sure the intruder isn’t the one answering the phone and saying it’s a false alarm.
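The family code-word idea is just a shared-secret check. A minimal sketch, assuming the check were done in software rather than by a person on the phone (the phrase and function names here are hypothetical; note the constant-time comparison, so a software verifier wouldn’t leak information through response timing):

```python
import hmac

# Hypothetical code phrase agreed on in advance by the family.
AGREED_PHRASE = "purple elephant"

def caller_is_verified(spoken_phrase: str) -> bool:
    """Check a spoken phrase against the agreed shared secret."""
    # Normalize casing/whitespace, then compare in constant time.
    return hmac.compare_digest(
        spoken_phrase.strip().lower(),
        AGREED_PHRASE.strip().lower(),
    )

print(caller_is_verified("Purple Elephant"))  # True
print(caller_is_verified("wrong phrase"))     # False
```

In real life the “verifier” is just a relative asking for the phrase before trusting the voice on the line, which is exactly the alarm-company protocol mentioned above.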
Techbros be like “but what if it can be used to resurrect dead actors?”.
Not even necessarily dead actors. They used AI to bring young Luke Skywalker back in The Book of Boba Fett. And it was not great, but it was serviceable. Now give it 10 years.