
This computer can think like the human brain

By Koh Wanzi - 12 Oct 2015


Taking Lessons from Nature

Artificial intelligence may one day surpass man, but to do so, it first has to learn to become more like him. (Image Source: IBM Research)

Futurists would have you believe that artificial intelligence could one day transcend our own in an event that you’ve probably heard referred to as the technological singularity. So are our biological intelligences condemned to eventually languish in the shadow of the machine? Not quite.

Developments in machine intelligence are actually subject to two almost paradoxical crosscurrents. While the machine could eventually surpass the human, the burgeoning field of deep learning and artificial neural networks is showing that in order for computers to become smarter, they could first take a few lessons from nature.

At the forefront of it all is a handful of GPU-accelerated technologies that harness the massively parallel architecture of GPUs to process many tasks simultaneously and run compute-intensive deep learning algorithms more efficiently. NVIDIA’s own DIGITS ecosystem gives data scientists and researchers an easy way to train their own algorithms and neural networks, even without any specialized knowledge of GPU programming.

The NVIDIA DIGITS DevBox is powered by four GeForce GTX Titan X GPUs. (Image Source: NVIDIA)
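To give a rough sense of what tools like DIGITS automate, here is a minimal sketch of a single GPU-accelerated training step written by hand. It is purely illustrative: PyTorch is used as a stand-in framework rather than anything from the DIGITS ecosystem, and the tiny model and random data are invented for the example.

```python
# A minimal, hypothetical sketch of one GPU-accelerated training step.
# PyTorch is a stand-in framework here, not the tooling behind DIGITS,
# and the tiny model and random data are made up for illustration.
import torch
import torch.nn as nn

# Use the GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A tiny image classifier standing in for the far larger networks
# researchers would actually train.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
).to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One training step on a random batch. Moving the model and tensors onto
# the GPU is what lets its parallel cores do the heavy matrix arithmetic.
images = torch.randn(32, 3, 64, 64, device=device)
labels = torch.randint(0, 10, (32,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"one training step on {device}, loss = {loss.item():.3f}")
```

The point is simply that placing the model and its data on the GPU hands the heavy matrix arithmetic to thousands of parallel cores, which is the speed-up that DIGITS and hardware like the DevBox are built around.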

Deep learning was also the focus of the Asia South leg of the NVIDIA GPU Technology Conference (GTC) 2015, where Marc Hamilton, NVIDIA’s VP of Solution Architecture, spoke about various real-world applications for deep learning algorithms. One particularly exciting use is in automated driving systems, where self-driving cars could respond to changing road conditions on the fly, instead of being programmed to respond according to a rigid set of parameters.

So if conditions deviate from the norm – perhaps when road markings are obscured by snow or when traffic lights are blocked by tree branches – these cars will still know how to respond correctly. The hilarious report about a particular Google autonomous car’s reaction to a cyclist doing a track stand at a traffic intersection also shows just how much self-driving cars could benefit from the ability to react to new and unexpected situations.

These neural networks do, however, need to be trained on huge amounts of data in order to become smarter. Because they learn by matching new inputs against patterns in what they have already seen, rather than by checking for a fixed list of characteristics, they can respond to scenarios they were never explicitly programmed for. Think of it this way: confronted with a new picture of a cat, a pre-programmed machine would flounder, but a machine powered by deep learning algorithms would recognize patterns it has seen in previous pictures of cats and identify the image correctly.
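The toy sketch below makes that distinction concrete. It is not how any of the systems described here actually work (the features and data are invented, and a simple nearest-pattern lookup stands in for a real neural network), but it shows why matching against similar examples generalizes where an exact, pre-programmed lookup does not.

```python
# A toy, invented example contrasting exact-match lookup with pattern matching.
# A nearest-neighbour search stands in here for what a trained network does.
import numpy as np

# Pretend each image is summarized by two hand-made features:
# [ear pointiness, whisker prominence], both scaled to 0..1.
known_images = np.array([
    [0.9, 0.8],   # cat
    [0.8, 0.9],   # cat
    [0.2, 0.1],   # dog
    [0.1, 0.2],   # dog
])
labels = np.array(["cat", "cat", "dog", "dog"])

new_image = np.array([0.85, 0.75])   # a cat photo the system has never seen

# The "pre-programmed" approach: look for this exact pattern in memory.
# It fails, because the new photo is not identical to anything seen before.
exact_hit = bool((known_images == new_image).all(axis=1).any())
print("exact match found:", exact_hit)                # False

# The pattern-based approach: label the new image like its most similar examples.
distances = np.linalg.norm(known_images - new_image, axis=1)
print("predicted label:", labels[distances.argmin()])  # cat
```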

GPUs are also far from the only way to run these neural network algorithms. Back in 2014, IBM unveiled its latest chip, a piece of silicon inspired by the human brain, as part of DARPA’s SyNAPSE program. IBM wanted to create a chip that could excel at things computers were traditionally bad at, but which humans could do effortlessly – pattern recognition and the processing of images, sound, and sensory data. So while GPUs are capable of handling brain-inspired deep learning algorithms, the SyNAPSE chip attempted to emulate the brain’s architecture, together with its dense network of neurons and synapses, from the hardware up.

Cognitive computing is a field of science that aims to teach machines how to work like a living brain. (Image Source: IBM Research)

This year, IBM showcased a new application for the chip, now dubbed TrueNorth. It integrated 48 of these chips into a single system to mimic a 48-million-neuron rodent brain, creating an exceedingly efficient way of executing neural networks. Because the chip’s architecture mirrors the networks of neurons that deep learning algorithms themselves are modeled on, the two map onto each other very effectively.

It may not look like much, but this is TrueNorth, IBM's brain-inspired computer. (Image Source: Berkeley Lab)

And then there are startups like Nervana, which offers deep learning on its custom hardware as a cloud service. By making deep learning capabilities more accessible, it hopes to spur the democratization of deep learning for an even wider variety of applications. It is not alone in this goal, and outfits like Ersatz Labs, MetaMind, and Skymind all want to enable a broader adoption of the technology.

Nervana CEO and co-founder Naveen Rao believes that medicine could be one of the fields that benefits most from computers’ improved ability to recognize patterns and images. For all the advances in medical science, the interpretation of scans like X-rays and MRIs is still left up to doctors themselves, which means that human error could very well find its way into crucial diagnoses. Rao thinks that deep learning could make it possible to encode decades of expertise in a computer, and help less experienced doctors minimize their own errors.

A more specific application would be in the case of diabetic retinopathy (DR), an eye disease that is a leading cause of vision impairment in diabetics. Detecting DR is a tedious and time-consuming process that requires doctors to closely examine digital photographs of patients’ retinas. Furthermore, the resources and expertise required for accurate diagnoses are often lacking in areas where diabetes is prevalent, which means a reliable method to automate the process is sorely needed.

Diabetic retinopathy is exceedingly time-consuming to diagnose, which is why more efficient and automated methods are needed. (Image Source: Kaggle)

Kaggle, a platform for data prediction competitions, ended up hosting a contest to find quicker and more accurate detection methods. The results were promising: all five of the top entries were more accurate than human doctors, with the doctors’ benchmark defined as the rate of agreement among three clinicians.
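As an aside, here is a hedged sketch of what such an agreement benchmark can look like in code. The grades below are invented, and the competition’s actual scoring may well have used a more sophisticated metric; the sketch only shows the basic idea of measuring a model’s agreement with clinicians against the clinicians’ agreement with one another.

```python
# An illustrative sketch of an agreement-rate benchmark; all grades are invented.
from itertools import combinations

def agreement_rate(grader_a, grader_b):
    """Fraction of patients on which two graders give the same DR grade."""
    matches = sum(a == b for a, b in zip(grader_a, grader_b))
    return matches / len(grader_a)

# Diabetic retinopathy severity grades (0 = none ... 4 = proliferative)
# assigned to ten patients by three clinicians and by a hypothetical model.
clinicians = [
    [0, 1, 2, 0, 3, 4, 1, 0, 2, 2],
    [0, 1, 2, 1, 3, 4, 1, 0, 2, 3],
    [0, 2, 2, 0, 3, 4, 0, 0, 2, 2],
]
model = [0, 1, 2, 0, 3, 4, 1, 0, 2, 2]

# The human baseline: how often the clinicians agree with one another, on average.
pairs = list(combinations(clinicians, 2))
human_baseline = sum(agreement_rate(a, b) for a, b in pairs) / len(pairs)

# The model's score: how often it agrees with each clinician, on average.
model_score = sum(agreement_rate(model, c) for c in clinicians) / len(clinicians)

print(f"clinician-vs-clinician agreement: {human_baseline:.2f}")
print(f"model-vs-clinician agreement:     {model_score:.2f}")
```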

To sum things up, before artificial intelligence can surpass our own, it must first take a leaf from our book (or rather, our craniums). HAL 9000 may be a long way off, but fragments of its precursors are already driving advances in areas as varied as consumer technology and medicine.
