By Alison Weiss
Smartphones and now smart speakers outfitted with digital voice assistants are hugely popular, but not every consumer is a fan. A 2017 Capgemini Digital Transformation Institute survey reveals that 56% of respondents do not trust voice assistants with the safety and security of their personal information, and 48% believe that voice assistants are intrusive and ask for too much personal data.
The question is, do people have something to worry about? Recent events suggest there may be legitimate cause for concern. A 2017 court case focused on whether a user’s conversations with Alexa, Amazon’s smart speaker AI interface, are protected by the First Amendment. In addition, in 2017 a security expert demonstrated how to turn an Amazon smart speaker into an eavesdropping device. Despite these incidents, sales of smart speakers continue to grow (projected to reach 43.6 million units in the US in 2018), and some industry observers suggest that consumers appear willing to trade some privacy for convenience.
What is still unknown is how people will react as intelligent voice assistants become an integrated part of conversational commerce; consumers might be uncomfortable when they can no longer easily tell whether the customer service agent they are speaking to is a person or a machine. Taking it one step further, Mark Logan, senior vice president of innovation at Barkley, a Kansas City, Missouri–based advertising agency, says he’s looked at Lyrebird, a speech technology that can reproduce a person’s voice in a copy nearly impossible to distinguish from the original.
“Technologies like Lyrebird and several others in both voice and video have the capacity to produce false voice and video that will soon be indistinguishable from the real thing,” he says. “It’s a little bit scary because these developments will have tremendous implications for fake news.”
Photography by Oleg Laptev, Unsplash