Google’s Machine Learning Chip Is Well Ahead of CPUs and GPUs.
The goal is to improve cost-performance over GPUs. At Google I/O 2016, the tech giant introduced the Tensor Processing Unit. Google engineers announced that they had been running tensor processing units in their datacenters since 2015, and claimed an order of magnitude better performance per watt for machine learning.
Tensor processing units (or TPUs) are application-specific integrated circuits (ASICs) developed specifically for machine learning. Compared to graphics processing units, they are designed for a higher volume of reduced-precision computation with more I/O operations per second per watt, and they lack hardware for rasterisation and texture mapping. The chip has been designed specifically for Google’s TensorFlow framework, though Google still uses CPUs and GPUs for other machine learning workloads. Other AI accelerator designs are appearing from other vendors, aimed at the embedded and robotics markets.
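The reduced-precision idea can be illustrated in a few lines: weights and activations are mapped to 8-bit integers, multiplied with wide (32-bit) accumulation, and rescaled back to floating point. The sketch below uses NumPy with a simple max-abs per-tensor quantization scheme; the function names and scaling scheme are illustrative assumptions, not a TensorFlow or TPU API.

```python
import numpy as np

def quantize(x, scale):
    """Map float values into the signed 8-bit range (an illustrative max-abs scheme)."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

# Hypothetical float32 weights and activations.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
a = rng.standard_normal((4, 4)).astype(np.float32)

# One scale per tensor; real frameworks use more elaborate schemes.
w_scale = float(np.abs(w).max()) / 127
a_scale = float(np.abs(a).max()) / 127

# 8-bit multiply with 32-bit accumulation, then rescale back to float.
acc = quantize(w, w_scale).astype(np.int32) @ quantize(a, a_scale).astype(np.int32)
approx = acc * (w_scale * a_scale)

# The quantized result stays close to the full-precision matmul.
error = float(np.max(np.abs(approx - w @ a)))
```

Trading a little accuracy for narrow integer arithmetic is what lets an ASIC pack far more multiply-accumulate units into the same silicon and power budget than a general-purpose chip.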
Google has stated that its proprietary tensor processing units were used in the AlphaGo versus Lee Sedol series of man-machine Go games. Google has also used TPUs for Google Street View text processing, and was able to find all the text in the Street View database in less than five days. In Google Photos, an individual TPU can process over 100 million photos a day. It is also used in RankBrain which Google uses to provide search results. The tensor processing unit was announced in 2016 at Google I/O, although the company stated that the TPU had been used inside their datacenter for over a year prior.
TPUs achieve significantly faster speeds and far better energy efficiency than general-purpose graphics processing units, said Johnson. “Energy efficiency is particularly important in a large-scale datacenter scenario, where improving energy efficiency can significantly reduce cost when running at scale.”