Professor Stephen Hawking has pleaded with world leaders to keep technology under control before it destroys humanity.
In an interview with the Times, the physicist said humans need to find ways to identify threats posed by artificial intelligence before problems escalate.
“Since civilisation began, aggression has been useful inasmuch as it has definite survival advantages,” the scientist said.
“It is hard-wired into our genes by Darwinian evolution. Now, however, technology has advanced at such a pace that this aggression may destroy us all by nuclear or biological war. We need to control this inherited instinct by our logic and reason.”
Hawking added that the best solution would be “some form of world government” that could supervise the developing power of AI.
“But that might become a tyranny. All this may sound a bit doom-laden but I am an optimist. I think the human race will rise to meet these challenges,” he added.
Hawking has spoken publicly about the potential dangers of artificial intelligence before.
“The real risk with AI isn’t malice but competence,” he wrote in a Reddit Q&A in 2015.
“A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.
“You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”
And he is not alone. Elon Musk, CEO of the American technology firm Tesla, agrees that AI could pose a threat to human existence.
“I think we should be very careful about artificial intelligence,” Musk said during the 2014 AeroAstro Centennial Symposium.
“If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful.
“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”