Google has built its own machine learning processor
In late 2013, we reported that Google had considered developing its own server processor. Now, the Internet giant has revealed its very own machine learning processor. Known as the Tensor Processing Unit (TPU), this custom-made ASIC (application-specific integrated circuit) is built for machine learning workloads and works hand-in-hand with TensorFlow, Google’s second-generation machine learning infrastructure. According to the company, TPUs have been deployed in its data centers for almost a year.
In the context of Moore’s Law, Google says the TPU’s performance per watt is roughly three chip generations, or about seven years, ahead of comparable machine learning hardware. This is partly due to the TPU’s design priorities: it emphasizes computation speed over numerical precision. That leeway lets the TPU use fewer transistors per operation, so it can “squeeze more operations per second into the silicon”. In turn, “more sophisticated and powerful machine learning models” can be developed and deployed faster, and users get more meaningful answers with shorter wait times.
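Google has not published the TPU’s internal number formats, but the trade-off described above — tolerating reduced precision to do more work per transistor — is the same idea behind quantizing model weights from 32-bit floats to 8-bit integers. The sketch below is purely illustrative (symmetric linear quantization, a common inference technique, not Google’s actual implementation) and shows that the round-trip error from this compression stays small relative to the weights:

```python
# Illustrative only: symmetric int8 quantization, a standard inference trick.
# This is NOT the TPU's actual scheme, which Google has not detailed.

def quantize_int8(values):
    """Map floats onto int8 codes [-127, 127] using one linear scale."""
    scale = max(abs(v) for v in values) / 127.0
    codes = [round(v / scale) for v in values]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate floats from the int8 codes."""
    return [c * scale for c in codes]

# Hypothetical model weights, for demonstration.
weights = [0.91, -0.42, 0.07, -1.23, 0.55]
codes, scale = quantize_int8(weights)
restored = dequantize(codes, scale)

# Worst-case round-trip error is bounded by half a quantization step.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(f"scale = {scale:.5f}, max error = {max_err:.5f}")
```

Each weight now needs only 8 bits instead of 32, and the multiply-accumulate hardware for 8-bit integers is far smaller than for 32-bit floats — which is the “fewer transistors per operation” leverage the article describes.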
The TPU fits onto an accompanying board; together, they slot into an HDD bay in a typical data center server rack. So not only is the TPU optimized for deep learning, it also has a relatively small footprint. According to Google, TPUs already power services like RankBrain, Street View, and AlphaGo, the company’s AI system that recently beat Lee Se-dol, the top Go player of the past decade. Google’s ultimate goal for its AI initiatives is to stay ahead of the competition and share the fruits of its labor with end consumers.
(Source: Google Cloud Platform via The Next Lab)