
The amazing things Google is doing with AI

By James Lu - 1 Dec 2017


Google is one of the biggest contributors to AI development today, if not the biggest. The company first publicly discussed AI when it announced the Google Brain project back in 2011, and just a year later Google successfully built its first large neural network, running on 16,000 processors, with the job of identifying cats in over a million YouTube videos.

Today, you can find AI in almost every Google product, from the Smart Reply feature in Gmail, to auto-complete in Google Search, to the next suggested video on YouTube, to the Portrait mode feature on your Pixel 2 XL smartphone, to Google Assistant in your Google Home. In fact, over 18,000 Google employees are now trained in machine learning and AI development, and AI has quickly become the key focus for the company, with CEO Sundar Pichai stating at Google I/O earlier this year his belief that we are “shifting from a mobile first world to an AI first world.”

But Google has also been using AI in other ways you may not even know about. Here are some of the most interesting.


More accurate translations

You may not know this, but in November 2016, Google Translate switched from its old phrase-based translation system to a new one based on neural networks.

Google’s Neural Machine Translation system translates whole sentences at a time, rather than just piece by piece. It uses this broader context to help it figure out the most relevant translation, which it then rearranges and adjusts to be more like a human speaking with proper grammar. Since it’s easier to understand each sentence, translated paragraphs and articles are a lot smoother and easier to read. Additionally, the network learns over time, meaning that every time it translates something, it gets better and better.
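To make the idea concrete, here's a minimal sketch of the encoder-decoder pattern that underlies neural machine translation. This is not Google's actual GNMT system (which is far larger and adds attention and other refinements); it's a toy PyTorch model whose layer sizes and vocabulary numbers are all illustrative. The key point is that the encoder consumes the entire source sentence before the decoder emits a single word, which is what gives the model sentence-level context.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, dim=256):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, dim)
        self.tgt_embed = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Read the WHOLE source sentence first; `context` summarizes it.
        _, context = self.encoder(self.src_embed(src_ids))
        # Generate target words conditioned on that whole-sentence context.
        dec_states, _ = self.decoder(self.tgt_embed(tgt_ids), context)
        return self.out(dec_states)  # per-position scores over target words

model = Seq2Seq(src_vocab=1000, tgt_vocab=1200)
src = torch.randint(0, 1000, (1, 7))   # a 7-token source sentence
tgt = torch.randint(0, 1200, (1, 9))   # shifted target (teacher forcing)
print(model(src, tgt).shape)           # torch.Size([1, 9, 1200])
```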


Become the undisputed best Go player in the world in 40 days

Everyone's familiar with Google's world champion-beating AlphaGo AI, but far more impressive is Google's new AlphaGo Zero AI. Previous versions of AlphaGo learned the game by analyzing thousands of professional games. AlphaGo Zero skipped this step entirely: it was taught only the basic rules of the game, and improved solely by playing matches against itself. After just three days of training, it had surpassed the version of AlphaGo that beat Go legend Lee Se-dol in 2016, and by day 21, it had reached the level of AlphaGo Master, the version of AlphaGo that beat the world's number one ranked player, Ke Jie, earlier this year. By day 40, AlphaGo Zero had surpassed all previous versions of AlphaGo and was able to beat AlphaGo Master 100 games to 0.

AlphaGo Zero is able to do this by using a novel form of reinforcement learning. The system starts off with a neural network that knows nothing about the game of Go. It then plays games against itself, by combining this neural network with a powerful search algorithm. As it plays, the neural network is tuned and updated to predict moves, as well as the eventual winner of the games. This updated neural network is then recombined with the search algorithm to create a new, stronger version of AlphaGo Zero, and the process begins again. In each iteration, the performance of the system improves by a small amount, and the quality of the self-play games increases, leading to more and more accurate neural networks and ever stronger versions of AlphaGo Zero. This technique is more powerful than previous versions of AlphaGo because it is no longer constrained by the limits of human knowledge. Instead, it is able to learn from the strongest player in the world: AlphaGo itself.
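Here's a deliberately tiny sketch of that self-play loop. It swaps Go for the much simpler game of Nim (take 1 or 2 stones; whoever takes the last stone wins) and replaces AlphaGo Zero's deep network and Monte Carlo tree search with a plain lookup table, but the iteration has the same shape: play against yourself, then nudge your value estimates toward the actual outcomes, producing a slightly stronger player each round.

```python
import random

# (stones_left, move) -> estimated chance of winning for the player to move.
values = {}

def pick_move(stones, explore=0.1):
    """Stand-in for network + search: mostly greedy, with some exploration."""
    moves = [m for m in (1, 2) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)
    return max(moves, key=lambda m: values.get((stones, m), 0.5))

def self_play_game():
    """One game of the AI against itself; returns the winner and the moves."""
    stones, player, history = 10, 0, []
    while stones > 0:
        move = pick_move(stones)
        history.append((player, stones, move))
        stones -= move
        player = 1 - player
    return 1 - player, history  # the player who took the last stone wins

# The core loop: self-play, then update the value table toward the games'
# actual winners.
for _ in range(5000):
    winner, history = self_play_game()
    for player, stones, move in history:
        target = 1.0 if player == winner else 0.0
        old = values.get((stones, move), 0.5)
        values[(stones, move)] = old + 0.1 * (target - old)

# From 10 stones, taking 1 (leaving a multiple of 3) is the winning move,
# and the trained table usually learns to prefer it.
print(pick_move(10, explore=0))
```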


Diagnose diseases better than human doctors

According to Google, diabetic retinopathy is the fastest-growing cause of blindness, with nearly 415 million diabetic patients at risk worldwide. If caught early, the disease can be treated; if not, it can lead to irreversible blindness. Unfortunately, doctors capable of detecting the disease are not available in many parts of the world, such as India, where diabetes is prevalent.

However, Google may have a solution. One of the most common ways to detect diabetic eye disease is to have a specialist examine pictures of the back of the eye and rate them for the presence and severity of disease. Severity is determined by the type of lesions present, which are indicative of bleeding and fluid leakage in the eye. Working closely with doctors in both India and the US, Google created a development dataset of 128,000 images, each evaluated by 3 to 7 ophthalmologists from a panel of 54. This dataset was used to train a neural network to detect diabetic retinopathy.
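For a sense of what that training step looks like, here's a minimal sketch of an image classifier for this kind of grading task, written with TensorFlow's Keras API. Google's actual model was a much deeper network, and the directory path, image size, and five-grade labeling scheme here are illustrative assumptions rather than details from the study.

```python
import tensorflow as tf

NUM_GRADES = 5  # retinopathy is commonly rated on a five-point severity scale

# A small stand-in network; Google's model was a far deeper architecture.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(299, 299, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_GRADES, activation="softmax"),  # one score per grade
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical folder of graded eye photos, one subfolder per severity grade.
train = tf.keras.utils.image_dataset_from_directory(
    "fundus_photos/train", image_size=(299, 299))
model.fit(train, epochs=10)
```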

The research results were very promising, with the AI's performance on par with that of trained ophthalmologists. In fact, Google's AI had an F-score (the harmonic mean of precision and sensitivity, with a maximum of 1) of 0.95, which was actually slightly better than the median F-score of the human ophthalmologists (measured at 0.91).
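For reference, here is how that F-score is computed, with made-up precision and sensitivity values chosen only to land near Google's reported number:

```python
# F-score: the harmonic mean of precision and recall (sensitivity).
# These inputs are illustrative, not Google's actual measurements.
precision = 0.96  # fraction of positive calls that were truly diseased
recall = 0.94     # fraction of diseased eyes that were correctly flagged

f_score = 2 * precision * recall / (precision + recall)
print(round(f_score, 2))  # 0.95
```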


Save cute marine mammals

Google's TensorFlow neural network software is at the heart of a new project to help save the endangered (and extremely cute) sea cow. As it turns out, despite being quite large, sea cows are really hard to keep track of. Researchers are using drones to take aerial photos of the ocean, but detecting the marine animals is quite challenging... at least for humans.

[Image: one of the aerial drone shots Google shared with us. Can you spot the sea cow? A second image highlights it.]

Using Google's open-source TensorFlow software, researcher Amanda Hodgson of Murdoch University and her team built a detector to find sea cows in the photos. An early version of the detector could find 80 percent of the sea cows in the aerial drone shots, and the team expects its performance to improve over time. Eventually, the AI could be used not only for sea cows, but also for other endangered species such as humpback whales and dolphins.
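The team's exact pipeline isn't described here, but a common way to build this kind of detector is to slide a fixed-size window across each large aerial photo and score every crop with a binary classifier. The sketch below illustrates that pattern in TensorFlow; the tiny untrained model, the window size, and the threshold are all placeholder assumptions.

```python
import numpy as np
import tensorflow as tf

# Placeholder binary classifier: P(sea cow) for a 64 x 64 crop. Untrained
# here; the real project trained its detector on labeled drone imagery.
classifier = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

photo = np.random.rand(512, 512, 3).astype("float32")  # stand-in drone shot
WIN, STEP, THRESHOLD = 64, 32, 0.9

# Slide the window across the photo and flag high-scoring crops.
hits = []
for y in range(0, photo.shape[0] - WIN + 1, STEP):
    for x in range(0, photo.shape[1] - WIN + 1, STEP):
        crop = photo[y:y + WIN, x:x + WIN][np.newaxis]  # add a batch axis
        if classifier(crop, training=False).numpy()[0, 0] > THRESHOLD:
            hits.append((x, y))

print(f"{len(hits)} candidate windows flagged for human review")
```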


Create an AI that makes other AIs

AutoML is an AI that Google built to help it create other AIs. The idea, in Google's words: "What if we could automate the process of machine learning?"

Currently, most AIs are built around neural networks that simulate the way the human brain works in order to learn. Neural networks can be trained to recognize patterns in information, such as speech, text, and visual images. But training them requires large datasets and human input to make sure the AI is learning correctly.

However, Google has revealed that not only has AutoML successfully created its own neural network AI without any human input, but it's also more powerful and efficient than the top-performing human-designed systems.
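Google hasn't shared AutoML's internals in this kind of detail, but the underlying loop of "machine learning that designs machine learning" can be sketched very simply: propose candidate architectures, train each one briefly, and keep the best. Google's system uses a reinforcement-learning controller to do the proposing; the toy below just uses random search on a synthetic dataset, and every name and number in it is illustrative.

```python
import random
import numpy as np
import tensorflow as tf

# Tiny synthetic classification task standing in for a real benchmark.
x = np.random.rand(512, 8).astype("float32")
y = (x.sum(axis=1) > 4).astype("int32")

def sample_architecture():
    """Propose a random architecture: 1-3 layers of varying width."""
    return [random.choice([8, 16, 32]) for _ in range(random.randint(1, 3))]

def score(arch):
    """Build the candidate, train it briefly, and return its accuracy."""
    model = tf.keras.Sequential(
        [tf.keras.Input(shape=(8,))]
        + [tf.keras.layers.Dense(w, activation="relu") for w in arch]
        + [tf.keras.layers.Dense(1, activation="sigmoid")])
    model.compile("adam", "binary_crossentropy", metrics=["accuracy"])
    model.fit(x, y, epochs=5, verbose=0)
    return model.evaluate(x, y, verbose=0)[1]

# The search loop: the "AI that makes AIs", reduced to its simplest form.
best = max((sample_architecture() for _ in range(10)), key=score)
print("best architecture found:", best)
```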

One slide Google shared compares a number of neural networks built for image recognition - one of the most common uses for AI today. AutoML's image recognition AI managed an incredible 82 percent accuracy, higher than any human-designed AI.

This achievement marks the next big step for AI, in which development is automated as the software becomes too complex for humans to understand.
