Deeper and Cheaper Machine Learning

SUPERCHARGED HARDWARE WILL SPEED UP DEEP LEARNING

Last March, Google's computers roundly beat the world-class Go champion Lee Sedol, marking a milestone in artificial intelligence. The winning computer program, created by researchers at Google DeepMind in London, used an artificial neural network that took advantage of what's known as deep learning. Unknown to the public at the time was that Google had an ace up its sleeve.

The computers Google used to defeat Sedol contained special-purpose hardware: the Tensor Processing Unit. "Everybody is doing deep learning today," says William Dally, who leads the Concurrent VLSI Architecture group at Stanford and is also chief scientist for Nvidia. A quite distinct job for deep-learning hardware, explains Dally, is "inference at the data center." In building hardware for that task, a company called Nervana Systems has been leading the charge, and its ASIC deep-learning accelerator will go into production in early to mid-2017. Dally adds that "the final leg of the tripod for deep learning is inference in embedded devices," such as smartphones and cameras. For those applications, the key will be low-power ASICs.

Over the coming year, deep-learning software will increasingly find its way into applications for smartphones, where it is already used, for example, to detect malware or translate text in images.

Although there is plenty of incentive these days to design hardware to accelerate the operation of deep neural networks, there's also a huge risk: chips designed to run yesterday's neural nets will be outdated by the time they are manufactured.