Artificial Intelligence: Elon Musk wants Silicon Valley to proceed with caution

Some of the scariest monsters in the science fiction bestiary have been sentient machines that find it expedient to do away with pesky humans, from HAL to Ultron to the Terminators. Now that “artificial intelligence” is the most ubiquitous buzzword in the tech world, is it time to start worrying? 

 

Above: Tesla CEO Elon Musk has expressed concern about Artificial Intelligence (Image: CleanTechnica)

So-called AI is steadily expanding its reach beyond web search algorithms, claiming a role in choosing everything from which movies to watch to whom to marry - these days you can hardly buy a watch that doesn’t claim to be powered by AI. Is there a real danger that Cortana or Alexa will evolve into Skynet?

Some techies dislike the term “artificial intelligence,” believing that it anthropomorphizes the technology and causes people to imagine doomsday scenarios. However, that’s what everyone is calling it, and whatever it is, some scientists and IT experts believe that we might want to think twice about handing it the keys, so to speak.

No one has been more vocal about the potential threat than Elon Musk. Speaking at MIT in 2014, he speculated that AI could be humanity’s “biggest existential threat.” Of course, he has never called for an end to research into AI (who would listen, anyway?), but he has suggested that some sort of regulatory oversight is needed.

“With artificial intelligence, we are summoning the demon,” said Musk. “You know all those stories where there’s the guy with the pentagram and the holy water and he’s sure he can control the demon? Doesn’t work out.” This quip has taken on a life of its own in the tech community - some AI engineers now facetiously refer to their work as “summoning.”

 

Above: Could Artificial Intelligence progress too quickly without proper guardrails? (Image: Wonderslab)

A recent article in Vanity Fair presents a lengthy look at what Musk and other tech visionaries have to say about the topic. Several luminaries agree with Musk that caution is needed, notably Stephen Hawking, Bill Gates, Henry Kissinger and Sam Altman, with whom Musk founded OpenAI, a nonprofit research company that aims to ensure that artificial intelligence is developed safely and to humanity’s benefit.

Demis Hassabis, a co-founder of the London laboratory DeepMind, seems to fit the profile of the mad AI scientist Musk is concerned about (in fact, he once designed a video game called Evil Genius). “I think human extinction will probably occur, and technology will likely play a part in this,” said Shane Legg, one of Hassabis’s co-founders.

Before Google acquired DeepMind (along with most of the other interesting AI startups out there), Musk was an investor, mainly because he wanted to keep an eye on what the company was up to. “It gave me more visibility into the rate at which things were improving, and I think they’re really improving at an accelerating rate, far faster than people realize. Mostly because in everyday life you don’t see robots walking around. Maybe your Roomba or something. But Roombas aren’t going to take over the world.”

The lords of Silicon Valley are steadily figuring out how to cede more and more human activities to algorithms and apps, and they seem quite confident that this will make our lives easier, healthier, longer and greener. However, Vanity Fair’s Maureen Dowd writes that “there’s a creepy feeling underneath it all, a sense that we’re the mice in their experiments.”

 

Above: Silicon Valley is racing to advance Artificial Intelligence (Image: Wonderslab)

Many of these tech princes are friends of Elon Musk, including Google founders Larry Page and Sergey Brin. “I’ve had many conversations with Larry about AI and robotics - many, many,” Musk told Ms Dowd. “And some of them have gotten quite heated. You know, I think it’s not just Larry, but there are many futurists who feel a certain inevitability or fatalism about robots, where we’d have some sort of peripheral role.”

One of the fatalists is Ray Kurzweil, who has predicted that we are only a couple of decades away from the “Singularity” - a rapturous event in which humans and computers will merge into one godlike being. However, Kurzweil is not blind to the potential hazards.

“I’m the one who articulated the dangers,” Kurzweil said. “The promise and peril are deeply intertwined. Fire kept us warm and cooked our food and also burned down our houses. There are strategies to control the peril, as there have been with biotechnology guidelines.”

Musk’s most public disagreement about AI was with Facebook founder Mark Zuckerberg, who has called Musk a “naysayer,” and said that his views were “pretty irresponsible.” Musk riposted that Zuckerberg’s “understanding of the subject is limited.”

 

Above: Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn discuss the future of AI with moderator Max Tegmark (YouTube: Future of Life Institute)

“Some people fear-monger about how AI is a huge danger, but that seems far-fetched to me,” says Zuckerberg. “If we slow down progress in deference to unfounded concerns, we stand in the way of real gains,” he told Wired. “We didn’t rush to put rules in place about how airplanes should work before we figured out how they’d fly in the first place.”

Ironically, that sounds very similar to comments Musk has made about vehicle autonomy. He has argued that, because self-driving cars are expected to save lives, deploying them as quickly as possible is a moral imperative. “If, in writing some article that’s negative, you effectively dissuade people from using an autonomous vehicle,” he told journalists in 2016, “you’re killing people.”

What’s needed is for both boosters and doubters to engage in an open discussion of all the benefits and risks - which is of course what Musk and Altman are trying to promote with OpenAI. We also need to keep in mind that we humans often fear the wrong things - the real threat to civilization may not come from killer robots. “An agent that had full control of the internet could have far more effect on the world than an agent that had full control of a sophisticated robot,” says Altman. “Our lives are already so dependent on the internet that an agent that had no body whatsoever but could use the internet really well would be far more powerful.”

Even if the kind of overarching machine intelligence that the “fear-mongers” imagine never comes into being, more mundane technological forces could end up making humans superfluous. In his book Homo Deus, Yuval Noah Harari predicts that we will gradually turn over more and more tasks to machines, until one day there simply isn’t anything left for humans, at least low-skilled ones, to do. At that point, society will have to decide whether to pay to keep the idle masses entertained and/or sedated, to warehouse them in prisons, or...

===

Written by: Charles Morris; Source: Vanity Fair