Human-like AI is dangerous for society
The voice on the other end of the phone sounded just a little too human.

In May, Google shocked the world with a demo of Duplex, its AI robocall assistant for accomplishing real-world tasks. The system can do things that you, as a busy person, might have little time or patience for, like booking a hair appointment or a restaurant reservation. But with its authentic-sounding “hmms” and “uhs,” the system raised some serious concerns, because the humans who answered the phone calls did not seem to realize they were talking to a piece of software. And indeed, this should worry us. Convincing human-like AI could be deployed for dubious reasons with disastrous consequences.

As more and more people come in contact with autonomous systems like Duplex, the danger is not that these systems will suddenly wake up and take over the world, despite the hysterical portrayals in the media and pop culture. Instead, the real danger is that humans will become mere passive data points in the design of those systems, to disastrous ends.

Artificial intelligence is meant to be a tool for humans, to make our lives easier and find solutions to everyday problems. It is not meant to replace us. And yet, we design it to replicate human-ness with eerie fidelity. We don’t do this with other tools — hammers look like hammers, not people — so why do we do this with AI?

The answer is simple: because it makes great marketing.

When machines accommodate and gesture toward the nuances of our own behavior, we are much more willing to integrate them into our lives. Things that look and sound like us trigger our admirable human capacity for empathy. In the case of Duplex, the closer a voice sounds to human, the more reluctant the receiver of a robocall might be to hang up. But the human-ness of artificial intelligence could easily mask a dubious attempt to sell you something. Indeed, it could become all too easy to commoditize our trust. For example, we might be prone to read friendly intent into a bank chatbot that makes warm and witty banter, even if its purpose is to push students toward taking out unnecessary loans.

There are other concerning examples of AI being anthropomorphized and used as a marketing ploy. Last October, Saudi Arabia made headlines by “granting citizenship” to a talking robot named Sophia. This was a marketing stunt meant to signal the country’s focus on technological innovation. But if we look more closely, this move should be considered especially cruel in a country that only allowed real human women to drive last year, and where women still require a male guardian to make financial and legal decisions. A robot, it seems, can breezily be granted more rights than half of the population of that country, all for a short-term spot in the news cycle.

Perhaps this seems like an overreaction. But I assure you, it is not. Talk of AI and personhood at the level …

Source: The Week – Science


