Like it or not, the tech world can’t get enough of the much-hyped realm of AI. At Google’s I/O conference on May 8, that fact was apparent as the brand introduced its latest contribution to the field: Duplex, a years-in-the-making AI platform that exhibits an unprecedented ability to mimic human speech.
In general, people have complicated feelings about artificial intelligence. Some praise AI’s seemingly limitless potential, while others are quick to point out the ethical implications of equipping computers with “human” intelligence. Hollywood and the sci-fi genre have spent decades telling worst-case-scenario stories of futuristic robots becoming self-aware and outsmarting their creators with violent results.
Hollywood hyperbole aside, Google’s introduction of Duplex has made these once-theoretical ethical debates startlingly real. The technology, an AI-based assistant similar to Amazon’s Alexa, sounds distinctly human—correctly using filler words like “um” and “ah” and adapting to the pace of authentic human speech. Google hopes the technology will facilitate better communication between small businesses and customers by managing reservations and appointments and updating irregular business hours during holidays.
What Do People Think?
On the heels of such a groundbreaking announcement, we were curious about initial reactions to Duplex. The technology isn’t available for public use yet, so these opinions are based strictly on the limited information Google has chosen to release about it, including several sound clips of real Duplex conversations.
To find out, we surveyed 1,000 US residents ages 14 and up about how they feel toward this complicated technology.
Here’s how 1,000 survey respondents feel about Google Duplex and its implications for ethical communications in the future.
Murky Thoughts on Ethics
While Google has expressed interest in transparency, the company has yet to decide exactly how it will make sure someone knows they’re speaking with Duplex during a conversation. In the company’s initial press release, principal engineer Yaniv Leviathan and VP of engineering Yossi Matias state that “It’s important to us that users and businesses have a good experience with this service, and transparency is a key part of that. We want to be clear about the intent of the call so businesses understand the context.”
It’s a nice sentiment, but keep in mind it took until late 2016 for the Institute of Electrical and Electronics Engineers to publish a set of AI ethics guidelines, and those guidelines carry little enforcement power. Without strict regulations governing the reach of AI, the responsibility and choice to be transparent with this technology rests solely with Google, a for-profit organization.
Participants’ initial lack of concern could have something to do with the existing prevalence of automated phone systems. Most people have interacted with a computer when contacting a large organization for customer service. The presence of such technology hasn’t caused ethical dilemmas before because these systems have always sounded automated, not human. Duplex is different. Its sophisticated form of speech was designed to imitate human conversation, and without an explicit disclaimer early on in a call, users could spend an entire conversation unaware they are speaking to Duplex.
Once we are confronted with that reality, will general consensus on the ethics of such technology change?
Will We Lose Our Manners?
Beyond ethics, we asked whether participants would speak differently during a phone call if they knew they were talking to a computer rather than a human. A majority of 58% said they would.
Exactly how they would change the way they speak is unclear, but this raises interesting questions about the social ramifications of Duplex and technologies like it. If a lack of social consequences leads us to treat robots that sound just like us rudely, will that change how we interact on the phone, or with each other, in general?
Without much data, we may not know until changes have already taken place. We’re only beginning to see evidence that increased screen time limits children’s ability to recognize emotions and social cues in faces, so similar effects on conversation aren’t out of the question.
While 85% of respondents said they would want to know if they were talking to AI, only 42% of participants believe they would keep conversing the same way regardless of whether they were interacting with a human or a robot. Like so much else, however, we don’t know how that number will shift as this technology becomes part of our daily lives.
People Will Speak for Themselves, for Now
Without any personal experience with Duplex, nearly 43% of participants told us they would allow it to speak on their behalf in the future. The 57% who aren’t comfortable with Duplex speaking for them are in the majority for now, but if this technology becomes part of everyday routine, that number could fall.
The fact that this technology will be marketed toward small businesses as a way to free up valuable time and resources has an ironic ring to it. In our age of online convenience, community relationships are vital to the survival of many small businesses. Duplex stands not only to eliminate relationship-building interactions, like making a dinner reservation with a new restaurant owner, calling a shop associate to check hours, or chatting with a receptionist while setting up an appointment; it also threatens to make us forget those interactions were eliminated at all.
Of course, Duplex is only one more sign of the larger societal shift toward computer dependency that has been taking place for decades. But the technology raises important questions about ethics, transparency, societal values, and even the stability of some careers in the future.
If you’re interested in learning more about data science and its impacts on your life and business, visit our Data Science Solutions page for more information.