Speaking at an event in London, Professor Stephen Hawking told the BBC, “The development of full artificial intelligence could spell the end of the human race.” It is not the first time the famous physicist has warned humanity of an uncertain future in which technology learns to think for itself and adapt to its environment, bringing about our demise.
Earlier in the year, Hawking said that success in creating AI ‘would be the biggest event in human history, [but] unfortunately, it might also be the last.’
He argues that developments in digital personal assistants such as Siri, Google Now and Cortana are merely symptoms of an IT arms race, and that they ‘pale against what the coming decades will bring.’
But Professor Hawking noted that the benefits of this technology could also be significant, with the potential to eradicate war, disease and poverty.
‘Looking further ahead, there are no fundamental limits to what can be achieved. […] There is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains.’
Eric Schmidt, Google’s chief executive, argued that there is no need to fear AI, and that it could even be the making of humanity.
‘These concerns are normal,’ he said onstage during the Financial Times Innovate America event in New York this week. ‘They’re also to some degree misguided.’
However, Elon Musk, the entrepreneur behind SpaceX and Tesla, disagrees, warning of ‘something seriously dangerous happening’ as a result of machines with artificial intelligence. And this ‘something’, he suggests, might begin to happen in as few as five years.
Speaking at the Massachusetts Institute of Technology (MIT) AeroAstro Centennial Symposium in October, Musk described artificial intelligence as our ‘biggest existential threat’, and has previously likened the development of thinking machines to ‘summoning the demon’.
As the nuclear, aerospace, manufacturing and agricultural industries forge ahead with developing autonomous systems, there is growing unease about the future.
How can we prevent robot world domination? How can we ensure AI follows rules and makes ethical decisions?
Researchers at the Universities of Sheffield, Liverpool and the West of England, Bristol have set up a new project to address concerns around these technologies. The £1.4 million project, which will run until 2018, aims to ensure robots meet industrial standards and are developed responsibly, allaying fears that humans may not be able to control them.
Meanwhile, in the field of space exploration…
Toyota introduced the new designs, intended for exploratory missions on the moon alongside a four-wheeled robotic rover, in a presentation titled “Realization of Moon Exploration Using Advanced Robots by 2020.”
What about little green men, the extraterrestrials? Will the first aliens we find be ROBOTS? Intelligent life may have turned to AI by the time we make first contact, claims Dr Susan Schneider of the University of Connecticut.
- Dr Schneider says the first intelligent aliens we find might not be biological. Advanced aliens might be machines.
- Humanity is already heading in this direction, Dr Schneider claims, and an advanced race would likely have already made this evolutionary leap. ‘The next evolutionary step could be we are post-biological,’ she says.
- ‘If you look at our own civilisation, people are becoming more immersed in computers, and we can already see signs of it in our own culture. […] if you need space travel, humans aren’t very durable. But with computers, you don’t have the same threat to worry about.’