Overview
The bulk of computation in Large Language Models (LLMs) occurs in fully-connected layers, which can be implemented efficiently as matrix multiplication. The Tensor Unit provides hardware tailored specifically to matrix-multiplication workloads, delivering a large performance boost for AI with little additional power consumption.
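As a sketch of why this matters: a fully-connected layer computes y = xW + b, so a whole batch of inputs reduces to a single matrix multiplication. A minimal NumPy illustration (the shapes and names here are illustrative, not part of this IP):

```python
import numpy as np

# A fully-connected (dense) layer computes y = x @ W + b.
# For a batch of inputs this is one matrix multiplication --
# the workload a tensor unit accelerates in hardware.
batch, d_in, d_out = 4, 8, 16
rng = np.random.default_rng(0)
x = rng.standard_normal((batch, d_in))   # input activations
W = rng.standard_normal((d_in, d_out))   # layer weights
b = rng.standard_normal(d_out)           # bias

y = x @ W + b                            # one matmul per layer
print(y.shape)                           # (4, 16)
```

Because nearly all of an LLM's floating-point operations take this form, speeding up the matmul speeds up the model.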
Tech Specs
Part Number       | RISC-V Tensor Unit
Short Description | RISC-V Tensor Unit
Provider          |