
Powering the future of AI


Writing in Forbes, venture capitalist Rob Toews asks whether the growing ubiquity of artificial intelligence is driving a re-evaluation of chip architecture. Designing new processors specifically for AI, he argues, is “one of the largest market opportunities in all of hardware today”.

CPUs, which process sequentially, not in parallel, lack the power required for deep learning, which entails the iterative execution of millions or billions of relatively simple multiplication and addition steps. Grounded in linear algebra, deep learning is fundamentally trial-and-error-based: parameters are tweaked, matrices are multiplied, and figures are summed over and over again across the neural network as the model gradually optimizes itself.

This repetitive, computationally intensive workflow has a few important implications for hardware architecture. Parallelization—the ability for a processor to carry out many calculations at the same time, rather than one by one—becomes critical. Relatedly, because deep learning involves the continuous transformation of huge volumes of data, locating the chip’s memory and computational core as close together as possible enables massive speed and efficiency gains by reducing data movement.
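To make that concrete, here is a minimal sketch in Python (the network shape, weights and inputs are all invented for illustration) of why this workload rewards parallel hardware: a forward pass through a neural network is the same multiply-and-add pattern repeated layer after layer, and every output element can be computed independently of the others.

```python
import numpy as np

# A minimal sketch of a neural-network forward pass. The network shape,
# weights and inputs are all hypothetical, chosen only for illustration.
rng = np.random.default_rng(0)
layer_sizes = [784, 512, 512, 10]
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    # Each layer is one matrix multiplication plus one addition.
    # Every element of the output is independent of the others,
    # which is exactly the structure a massively parallel chip exploits.
    for w, b in zip(weights, biases):
        x = np.maximum(x @ w + b, 0.0)  # multiply, add, ReLU
    return x

batch = rng.standard_normal((64, 784))  # 64 input samples at once
print(forward(batch).shape)             # -> (64, 10)
```

Nothing in that loop depends on the previous output element, only on the previous layer, so the arithmetic inside each layer can be spread across thousands of cores at once.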

This is where the GPU, which today powers the vast majority of machine learning workloads, comes into its own. But since the GPU that NVIDIA invented in the 1990s for gaming has been adapted, and adopted, for machine learning, how well-suited is it to what’s coming in the future?

New entrants into the AI processor market

The comedy-sized Cerebras processor – 60 times larger than a typical microprocessor, the first chip in history to house 1.2 trillion transistors, and carrying 18 GB of memory on-chip – is a new entrant to the processor market.

Packing all that computing power onto a single silicon substrate offers tantalizing benefits: dramatically more efficient data movement, memory co-located with processing, massive parallelization. But the engineering challenge is, to understate it, ludicrous. For decades, building a wafer-scale chip has been something of a holy grail in the semiconductor industry, dreamt about but never before achieved.

Another pretender is Groq. This Bay Area startup’s processor, designed by eight of the ten people who designed Google’s TPU, turns conventional wisdom on its head.

Groq is building a chip with a batch size of one, meaning that it processes data samples one at a time. This architecture enables virtually instantaneous inference (critical for time-sensitive applications like autonomous vehicles) while, according to the company, requiring no sacrifice in performance. Groq’s chip is largely software-defined, making it uniquely flexible and future-proof.
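The trade-off Groq is attacking shows up in a toy latency model (sketched below; every number in it is hypothetical): conventional accelerators reach peak throughput by batching many samples together, but the first sample then has to wait for the batch to fill before anything is computed.

```python
# A toy latency model, with entirely hypothetical numbers, of why
# batch size matters for real-time inference. Batching amortizes
# fixed overheads, but a sample must wait for the batch to fill.

FIXED_OVERHEAD_MS = 5.0   # assumed per-launch cost, amortized over a batch
PER_SAMPLE_MS = 0.1       # assumed compute time per sample

def batched_latency_ms(batch_size, arrival_interval_ms=1.0):
    # Worst case for the first arrival: wait for the batch to fill,
    # then wait for the whole batch to be processed.
    wait = (batch_size - 1) * arrival_interval_ms
    compute = FIXED_OVERHEAD_MS + batch_size * PER_SAMPLE_MS
    return wait + compute

print(batched_latency_ms(64))  # ~74 ms before the first result
print(batched_latency_ms(1))   # ~5 ms: a batch of one responds immediately
```

Under these assumed numbers, a batch of 64 keeps the hardware busy but makes a self-driving car wait tens of milliseconds for an answer; a batch of one responds as soon as the data arrives, which is the behaviour Groq is designing for.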

The company recently announced that its chip achieved speeds of one quadrillion (10¹⁵) operations per second. If true, this would make it the fastest single-die chip in history.

Boston-based Lightmatter’s AI microprocessor is powered by beams of light. The founders have raised $33M from GV, Spark Capital and Matrix Partners, persuading investors that the unique properties of light will enable its chip to outperform existing solutions by a factor of ten.

Other pretenders to the processing throne include Horizon Robotics and Cambricon Technologies, both Chinese, both of which have raised more money at higher valuations than any of their competitors.

Google and its Tensor Processing Unit (TPU) have already been mentioned; Amazon has its Inferentia AI chip; and Tesla, Facebook and Alibaba are all running in-house processor design projects. SambaNova Systems in Palo Alto is an established presence. Then there are Graphcore, Wave Computing, Blaize, Mythic and Kneron.

It’s either all going to get very crowded in the next-gen processor waiting room or there will be some spectacular casualties. And then there are Intel and NVIDIA. What do they have up their sleeves? That will perhaps be a question that’s asked on one of the early editions of the Informed Sauce podcast. In the meantime, as Rob Toews concludes in his article:

The race is on to develop the hardware that will power the upcoming era of AI. More innovation is happening in the semiconductor industry today than at any time since Silicon Valley’s earliest days. Untold billions of dollars are in play.

This next generation of chips will shape the contours and trajectory of the field of artificial intelligence in the years ahead. In the words of Yann LeCun: “Hardware capabilities… motivate and limit the types of ideas that AI researchers will imagine and will allow themselves to pursue. The tools at our disposal fashion our thoughts more than we care to admit.”


