Experts debate the issue
NIAGARA UNIVERSITY, N.Y. – Humanity has been captivated by artificial intelligence for at least a century. One need only look to our preoccupation with robots in literature, film, TV and video games to see humans grappling with this advancing technology.
However, the experts themselves do not agree on the nature of artificial intelligence. The central question is whether sentient machines could pose a threat to humankind, now that the technology is no longer confined to abstract science fiction.
This past summer, tech industry titans Elon Musk, CEO of Tesla and SpaceX, and Mark Zuckerberg, founder of Facebook, exchanged public barbs in a debate on the risks of artificial intelligence.
It began in July at a U.S. National Governors Association meeting when Musk, after years of similar statements, called for the regulation of A.I. because he believed it posed an existential threat to humanity.
“I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react,” Musk told the room, according to The Atlantic.
Shortly after, Zuckerberg chimed in with his comments on a Facebook Live broadcast. When asked about Musk’s stance he said: “I have pretty strong opinions on this. I am optimistic. And I think people who are naysayers and try to drum up these doomsday scenarios – I just, I don’t understand it. It’s really negative and in some ways I actually think it is pretty irresponsible.”
On Twitter Musk responded, saying, “I’ve talked to Mark about this. His understanding of the subject is limited.”
Musk is also worried about the way Silicon Valley has embraced artificial intelligence, occasionally butting heads on the topic with Demis Hassabis, co-founder of London's A.I. lab DeepMind. Indeed, the field is rapidly evolving, and A.I. is already being used by companies like Facebook for tasks such as targeted advertising. And yet the A.I. capabilities of our dystopian fictions remain, for now, a thing of the future.
Musk has also co-founded OpenAI, a nonprofit working toward safer artificial intelligence.
Renowned scientists have also argued passionately on either side. Stephen Hawking’s position has sparked concern about the advent of A.I. He once said, “The development of full artificial intelligence could spell the end of the human race.”
Others, like astrophysicist Neil deGrasse Tyson, dismiss the idea that A.I. poses a serious threat.
“Seems to me, as long as we don’t program emotions into robots, there’s no reason to fear them taking over the world,” wrote Tyson on Twitter.
In his book “Superintelligence,” philosopher Nick Bostrom argues that A.I. doesn’t need to be evil to spiral out of our control; all it would require is competence, and for its needs and desires to conflict with our own.
As the debate surrounding A.I. captures the attention of scientists and Silicon Valley moguls, it is worth discussing as a species what we want our A.I. to do, think and value. The future of humanity might just depend on it.