
ARM shows Neoverse V2, plans V3 for 2023

www.eenewseurope.com, Sept. 14, 2022 – 

ARM has disclosed more details of its Neoverse V2 high-performance processor core, codenamed Demeter, while planning the next version for 2023.

One of the first customers for the V2 core is Nvidia, which is using it in the 72-core Grace CPU to be discussed next week.

The architecture of the V2 core has been optimised for specific data-centre workloads, most notably the BERT machine learning model. This involved tuning the flow of the BF16 instructions to boost performance, as well as doubling the core's instruction cache (icache) to 2Mbits. Vector performance has also increased, with four 128-bit wide vector lanes for the scalable vector extensions (SVE2).
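For readers unfamiliar with the BF16 format mentioned above: bfloat16 keeps float32's sign bit and full 8-bit exponent but only 7 mantissa bits, which is why converting to it is (to a first approximation) just dropping the low 16 bits of the float32 encoding. A minimal sketch, not ARM-specific code:

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Truncate an IEEE-754 float32 to the bfloat16 bit pattern.

    bfloat16 keeps float32's sign and 8 exponent bits but only
    7 mantissa bits, so truncation is simply dropping the low
    16 bits (real hardware typically rounds rather than truncates).
    """
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bfloat16_bits_to_float32(bits: int) -> float:
    """Expand a bfloat16 bit pattern back to float32 (exact)."""
    (x,) = struct.unpack("<f", struct.pack("<I", bits << 16))
    return x

# 1.0 in float32 is 0x3F800000, so its bfloat16 pattern is 0x3F80.
one = float32_to_bfloat16_bits(1.0)  # -> 0x3F80
```

Because bfloat16 covers the same exponent range as float32, only precision is lost in the mantissa, which is why machine learning workloads such as BERT favour it over float16 for training.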

"We have added icache coherency, workload specific optimisation eg BERT, larger cache and 48bit physical addressing to the V line for cloud workloads," said Dermot O'Driscoll, vice president of product solutions. "We have been tuning the microarchitecture against specific workloads and we are seeing modelling data that looks really good," said Brian Jeff, senior director of product management at ARM.

ARM points to V1-based designs already in the data centre: Amazon's Graviton3, the Ampere Max and Ultra Max, and Alibaba's 128-core Yitian 710. A series of V2 designs is expected over the next year.

The next-generation core is codenamed Poseidon and, unless there is a dramatic change in the naming scheme, will be the Neoverse V3. ARM is also already talking about the Neoverse N3 core, which has 20 partners working on it and will include the CXL 3.0 standard for memory interconnect.

"V2 is available to our customers and we will provide more details on the next core when appropriate and will talk about solutions when they get close to deployment," said Chris Bergey, senior vice president and general manager of the Infrastructure Line of Business.

This roadmap is being driven by the need to customise for data-centre workloads, combining CPU cores with AI accelerator cores and intelligent interconnect, either on a single chip or as chiplets.

Here the interconnect is key: the V2 and V3 will use the CMN-700 interconnect, based on ARM's AMBA Coherent Hub Interface (CHI). This works with the CXL memory interconnect standards and with the UCIe chiplet protocol, of which ARM is a founding member alongside Intel and AMD.

