The Groq Tensor Streaming Processor Architecture follows a growing trend of software taking control of system functions, as it already has in autonomous cars, networking and other hardware.
The architecture hands hardware control over from the chip to the compiler. The chip integrates software control units at strategic points to optimize data movement and processing.
The units are organized in a way consistent with typical data flow found in machine learning models.
“Determinism enables this software-defined hardware approach. We’re not about abstracting away the details. We’re about controlling the hardware underneath,” said Dennis Abts, chief architect at Groq.
Abts shared the Groq Tensor Streaming Processor Architecture design at this week’s Hot Chips conference. Hardware-software co-design is not new, but the concept saw a revival at the conference, with Intel CEO Pat Gelsinger calling it central to the future of chips in his speech.
Groq is one of many companies designing chips specifically for AI. AI chips have features that determine outcomes based on discovered patterns, probabilities and associations, which is also the foundation for the software-enabled hardware controls in the architecture.
“What we did is try to avoid some of this waste, fraud and abuse that crops up at the system level,” Abts said.
Complexity often creeps in at the system level with tens of thousands of processing units, such as CPUs, GPUs and smartNICs, operating in heterogeneous computing environments with different performance, power and failure profiles.
“As a result, you end up with a lot of performance variation in response time and latency, for example. And that latency variation ultimately slows down an internet-scale application,” Abts said.
Groq reexamined the hardware-software interfaces on the chip to enable deterministic processing. The company had to make design choices, and rethought conventional chip design from the ground up.
“This enables … an ISA that empowers our software stack. We explicitly turn over control to the software, specifically the compiler so that it can reason about the correctness and schedule instructions on the hardware from a first principle standpoint,” Abts said.
At the top, the chip has a static-dynamic interface, which gives the compiler a full view of the system at any given time. That replaces the runtime interfaces found on conventional CPUs.
The static-dynamic interface ensures the hardware is completely controllable by the compiler, without abstracting away the details of the hardware. The compiler has a “miraculous view of what the hardware is doing at any given cycle,” Abts said.
Handing hardware controls over to software frees the hardware to perform other functions. The architecture is unlike traditional systems, which embrace out-of-order execution, speculative execution and other techniques to achieve parallelism and memory concurrency, Abts said.
The system has 220MB of “scratchpad” memory with allocated “tensors,” so the compiler can determine the calculations, where data is going on the chip, and how it moves in each cycle. The chip design makes memory concurrency available throughout the system.
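The idea of compile-time control can be illustrated with a toy sketch. This is not Groq’s actual compiler or memory layout; it only shows what it means for a compiler, rather than runtime hardware, to decide where each tensor lives in scratchpad memory and on which cycle it moves.

```python
# Hypothetical sketch: a compiler pass that statically assigns each tensor
# a scratchpad offset and a fixed transfer cycle, so nothing is arbitrated
# at runtime. Names and the one-transfer-per-cycle model are assumptions.

def build_schedule(tensors):
    """Assign each (name, size_bytes) tensor an offset and a transfer cycle."""
    schedule = []
    offset = 0
    cycle = 0
    for name, size_bytes in tensors:
        schedule.append({"tensor": name, "offset": offset, "cycle": cycle})
        offset += size_bytes   # pack tensors back-to-back in scratchpad
        cycle += 1             # issue one transfer per cycle in this toy model
    return schedule

plan = build_schedule([("weights", 1024), ("activations", 512)])
# every placement and movement is known before the program runs
```

Because the schedule is fixed ahead of time, the compiler can reason about correctness and timing without runtime interfaces, which is the property Abts describes.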
Groq has also disaggregated functional elements typically found in a conventional CPU, such as integer and vector units, and relocated them into separate groups. That’s much like pooling together memory or storage into a single box, with the closeness providing performance advantages. That is particularly advantageous for AI applications.
The chip design differs from conventional CPUs. “It allows us to execute in the same way that a conventional CPU breaks down larger instructions into micro-operations. Similarly, we’re breaking up deep learning operations into their constituent smaller micro-operations and executing those as an ensemble which together accomplish a larger goal,” Abts said.
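The decomposition Abts describes can be sketched in miniature: one large matrix multiply broken into a fixed list of tile-sized micro-operations that together cover the whole computation. The tile size and operation name here are invented for illustration, not Groq’s instruction set.

```python
# Illustrative only: splitting an m x n x k matrix multiply into tile-sized
# micro-operations, executed as an ensemble that accomplishes the full op.

def tile_matmul(m, n, k, tile=2):
    """Emit (op, row_tile, col_tile, k_tile) micro-ops covering the matmul."""
    ops = []
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):
                ops.append(("matmul_tile", i, j, p))
    return ops

micro_ops = tile_matmul(4, 4, 4, tile=2)
# a 4x4x4 matmul with 2-wide tiles yields 2*2*2 = 8 micro-operations
```

Since the list of micro-operations is fully determined by the shapes, a compiler can schedule each one onto a functional unit at a known cycle.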
The chip design includes matrix multiplication units, which Abts called the “workhorse.” These contain storage units for 409,600 “weights,” providing the parallelism required to make AI applications faster.
The chip’s building blocks also include SRAM memory, programmable vector units, 480GB/s networking units and data switches. These are all connected to 144 on-chip instruction control units, which control the dispatch of tasks to associated functional units.
“This allows us to keep the hardware overhead of dispatch very low. Less than 3 percent of the area is used for instruction decode and dispatch,” Abts said.
Groq has also taken a software-defined approach to reducing network congestion.
“The compiler can literally schedule the network links just like it would schedule ALU (arithmetic logic unit) or matrix. This alleviates some of the more conventional [hardware-based] approaches,” Abts said, specifically referencing adaptive routing.
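The claim that “the compiler can literally schedule the network links just like it would schedule ALU or matrix” units can be sketched as a toy scheduler that treats links and ALUs as the same kind of resource. The resource and operation names below are invented; this is a sketch of the scheduling idea, not Groq’s compiler.

```python
# Hedged sketch: network links scheduled as compile-time resources alongside
# ALUs. Each resource runs at most one operation per cycle, fixed up front,
# so no runtime adaptive routing is needed to resolve contention.

def schedule_ops(ops):
    """Greedily give each (resource, op) pair the earliest free cycle on that resource."""
    busy = {}        # resource -> next free cycle
    timetable = []
    for resource, op in ops:
        cycle = busy.get(resource, 0)
        timetable.append((cycle, resource, op))
        busy[resource] = cycle + 1
    return timetable

plan = schedule_ops([
    ("alu0", "add"), ("link_east", "send_tile"),
    ("alu0", "mul"), ("link_east", "send_tile"),
])
# each resource is assigned work on cycles 0 and 1, with no runtime arbitration
```

Because every link transfer lands on a known cycle, congestion is resolved at compile time rather than by adaptive routing in hardware, which matches the trade-off Abts describes.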
“What we’re trying to accomplish is predictable and repeatable performance that provides low latency and high throughput across the entire system,” Abts said.