Professional future gazer Jason Bradbury confronts the existential reality of Artificial Intelligence.
Without doubt the question I’m asked most in Q&As is ‘What about Skynet?’ It might take different forms, but it is always driven by an innate understanding that our days as the alpha species are numbered. It usually comes after a forty-five-minute exponential rollercoaster that runs from my 1980 Sinclair ZX80 to a prediction of the first supercomputer capable of 10¹⁸ floating-point calculations per second. And why is that unfathomably large number important? Ten quintillion, as it is also expressed, happens to be the number of calculations per second our brains are capable of. That’s parity with the human brain within ten years. And, following the exponential curve, super machine intelligence, Skynet, from then on.
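The arithmetic behind that projection is simple compound doubling. Here is a rough sketch; the 2016 baseline (roughly Sunway TaihuLight’s peak) and the 18-month doubling period are my illustrative assumptions, not figures from the talk:

```python
import math

# Rough projection of when supercomputers reach 10^18 FLOPS,
# assuming performance doubles every 18 months (a Moore's-law-style
# assumption; the baseline figure below is illustrative).
BASELINE_YEAR = 2016
BASELINE_FLOPS = 9.3e16   # roughly the fastest machine of 2016
TARGET_FLOPS = 1e18       # ten quintillion calculations per second
DOUBLING_YEARS = 1.5      # one doubling every 18 months

doublings = math.log2(TARGET_FLOPS / BASELINE_FLOPS)
years = doublings * DOUBLING_YEARS
print(f"~{doublings:.1f} doublings, exascale around {BASELINE_YEAR + years:.0f}")
```

Under those assumptions the curve crosses the exascale line only a handful of years out, which is why the ‘within ten years’ claim is, if anything, conservative.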
Super Intelligence, or Artificial General Intelligence, which more clearly describes the human-like cognitive and communication machine skill-set we’ll all be witness to within a decade or so, will solve a lot of problems: the cracking of the cancer riddle, limitless energy solutions, new models for physics and advances in materials science that will make even graphene seem trivial in comparison. But that’s not what drives the question. What my audience are exhibiting is an instinct for the existential threat posed by the Frankenstein’s monster that thousands of computer scientists around the world are, right now, attempting to engineer.
The Narrow A.I most of us are familiar with, the likes of Siri or the number-crunching that goes into our weather predictions, might offer an awkward user experience in 2016, but even novice future gazers can’t fail to appreciate where it’s leading. To get to the nub of our fear of A.I, consider the example of a child who grows up to be a murderer. Whichever side of the nature/nurture debate you favour, there is precedent for both in machine learning. Would you consider Google good parents of super-intelligent A.I offspring? Or perhaps you think the North Korean government or Iranian cyber command are more likely to bring up a bad-egg A.I that goes on to trigger a nuclear conflict designed to get humans out of the way? All three sets of parents are actively developing A.I, and we shall certainly find out which of their traits are carried through.
‘Couldn’t we just turn it off?’ Almost certainly not. Case in point: the Stuxnet virus, malware designed by the US and Israel that targeted Iranian nuclear centrifuges by hacking their Siemens control systems. The point isn’t how effectively this malicious worm spun up and wrecked centrifuges, but rather how quickly it escaped its intended target and spread to Windows machines around the world. The A.Is that will control our financial markets, critical industrial infrastructure and more will from inception be distributed across thousands of nodes and hundreds of geographical locations.
So, Skynet with no off switch and anti-social tendencies? Yes, it is a distinct possibility. Surely, then, the next question ought to be ‘So, what can we do about it?’ The likes of Elon Musk and Stephen Hawking certainly think it’s a question that requires serious consideration. Perhaps consideration for another blog post.
As well as talking tech on TV, Jason Bradbury lectures in A.I at Lincoln University, has played Jeopardy against IBM’s Watson and has a formidable robot collection.