Artificial intelligence: Menace, hype, or revolution?
Note: This opinion piece was first published on 20th October 2017 and adapted from HWM October 2017 issue.
Everyone has something to say about AI these days. “They’re going to take away all our jobs!” panicked voices cry out stridently. “One day, they’re also going to become smarter than people and kill us all!”
That sounds like the mad ramblings of a conspiracy theorist who watched one Terminator movie too many, until you realize that Tesla and SpaceX CEO Elon Musk sits firmly in that camp. Musk isn’t taking to the streets and yelling about a robot uprising, but he has referred to AI as humanity’s “biggest existential threat”.
He’s also said that efforts to develop smarter intelligences are akin to “summoning the demon”, so there’s little doubt that he views AI as a very real threat. Part of this concern is predicated on ideas set forth by English mathematician I.J. Good, who wrote about self-improving machines that would eventually become exponentially more intelligent than humans in an "intelligence explosion". You may recognize this idea as something that's also been referred to as the "singularity", and while Good never actually used the term, the main danger is clear in both cases – what if humanity loses control of this machine?
In his 1965 paper titled "Speculations Concerning the First Ultraintelligent Machine", Good says:
"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."
That last line is crucial, and it's exactly what Musk and his contemporaries are so fearful of. For instance, Stephen Hawking hasn't been shy about pointing out that the impact of AI depends on who controls it.
More recently, Russian president Vladimir Putin also weighed in on the topic, saying pointedly that “Whoever becomes the leader in this sphere will become the ruler of the world.” Predictably, this prompted Musk to offer up another nugget of apocalyptic wisdom: he took to Twitter to warn that competition for AI superiority at the national level would be the most likely cause of a third world war.
Tools, not rival entities
But we may be getting ahead of ourselves here. When the news cycle is periodically dominated by headlines about super-smart programs like AlphaGo beating the world’s best players at Go – a game long thought too difficult for computers to master – it’s easy to get swept up in the narrative that AI is quickly surpassing us.
In some respects, that is true. AlphaGo is clearly a prodigious Go player. And even in a highly skilled profession like medicine, researchers are working on AI that can help with diagnoses of conditions like diabetic retinopathy, customize treatments, and make sense of patient data.
The problem with this line of reasoning is how amorphous the term AI has become. Self-driving cars? Image recognition in Google Photos? Alexa and Siri? They all get lumped together under the same term.
But all these programs still serve very narrow purposes, and are a long way from what we’d consider general AI – a system capable of performing the range of tasks that humans can, with abilities across multiple domains.
AI is trending heavily toward solving specific tasks, likely because developing an artificial general intelligence is so difficult, and because there’s more immediate utility in training a system to be super good at a narrow spectrum of things. At this point, AI systems are tools rather than budding intelligences. AI researcher Steve Omohundro has written about the potential for advanced intelligences to develop "basic drives", but despite the scary, anthropomorphic implications, that's not quite the direction we're headed in yet.
We’re still a long way off from being able to have an actual conversation with Siri, and despite that slightly creepy incident where Facebook AI chatbots developed their own language that researchers could not understand, there are more pressing concerns than malicious AI.
The first thing that comes to mind is job obsolescence. To be clear, we’re talking about a scenario years into the future, but it’s still a more urgent issue than the hypothetical creation of super-smart AI.
A 2017 paper by researchers at Oxford and Yale, which surveyed AI researchers on how soon they expected machines to outperform humans at all tasks, puts a 50 per cent chance of this happening within just 45 years. And within 120 years, all jobs may be automated, according to the same survey.
Low-skilled jobs will likely be the first to go, so the most troubling prospect is how AI could widen the already yawning gap between the haves and have-nots.
AI in your pocket
However, AI doesn’t just live in huge data centers or some faraway cloud. You're probably already interacting with it on a daily basis, and smartphones are set to play a big part in the diffusion of machine smarts into our lives.
The neural engine on the iPhone X is powered by a dual-core chip design, and can supposedly recognize faces without being confused by things like hats and beards (hiccups in stage demonstrations aside). But Apple is hardly the first to introduce the idea of neural engines on phones.
Over a year ago, Qualcomm launched an SDK for its Zeroth machine intelligence platform, allowing developers to run deep learning programs locally on smartphones. The Zeroth name is no longer in use, but the SDK is now freely available, so more developers can tap the different compute components on Snapdragon chips and better optimize their algorithms.
Qualcomm’s approach is still a software framework for existing hardware, but the company says it is working on baking something into the silicon itself.
That said, ARM already has the hardware ready, in the form of a new and more flexible CPU microarchitecture called DynamIQ. Chipmakers will be able to customize their silicon for machine learning at the hardware level, and even build AI accelerators into the chips themselves.
Chinese smartphone giant Huawei is in on it as well. It unveiled the Kirin 970 chip at IFA 2017, and the chip has since debuted in the Mate 10. The processor boasts an embedded neural processing unit, giving the Mate 10 dedicated hardware for AI-based tasks like image recognition and performance optimization.
The integration of AI tools with hardware is also a central tenet of Google's new Pixel 2 and Pixel 2 XL, and you can be sure that we'll be seeing manufacturers push this angle hard in the coming years.
The point is that AI has many faces – alternately exciting, revolutionary, and sometimes slightly worrying. The term can refer to superintelligent entities, or simply to image recognition in something like Google Lens. And Musk isn't alone in his concerns – prominent minds such as Bill Gates and Steve Wozniak have added their voices to the mix.
But these fears may just be overblown. I'm not saying that we shouldn't explore how to develop AI responsibly, but rather that the voices warning of an intelligence explosion may be a little too loud. AI poses plenty of risks – it could be exploited by governments, widen income inequality, enable new methods of warfare, and yes, maybe even wipe out the human race. But by focusing on that last scenario, we run the risk of ignoring other, more immediate issues.
Koh Wanzi / Former Senior Tech Writer
I care about three things in this world – Game of Thrones, pasta, and corgis. Oh, and I write about tech.