

The C860 is a 12-stage superscalar processor with a standard memory management unit, capable of running Linux and other operating systems. It implements a 3-issue, 8-execution-unit deep out-of-order architecture with a single/double-precision floating-point engine, and can optionally be equipped with an AI acceleration engine. It is suited to applications demanding high performance, such as intelligent monitoring, machine vision and edge servers.
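As a rough illustration of what the 3-issue width implies, the sketch below computes the theoretical peak instruction throughput of one core. The clock frequency is an assumed illustrative value, not a figure from this datasheet.

```python
# Back-of-envelope peak throughput for a 3-issue superscalar core.
ISSUE_WIDTH = 3   # instructions issued per cycle, per the description above
CLOCK_GHZ = 2.0   # assumption for illustration only, not a datasheet value

# A 3-issue core can retire at most 3 instructions per cycle (peak IPC),
# so peak throughput scales linearly with clock frequency.
peak_ipc = ISSUE_WIDTH
peak_ginstr_per_s = ISSUE_WIDTH * CLOCK_GHZ

print(f"peak IPC: {peak_ipc}")
print(f"peak throughput at {CLOCK_GHZ} GHz: {peak_ginstr_per_s} Ginstr/s")
```

Real sustained IPC is lower and depends on branch prediction, cache hit rates and instruction mix; the out-of-order machinery exists precisely to close that gap.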


Applications

  • Intelligent Vision;
  • Smart Home Appliances.

Block Diagram


Features

  • Instruction set: T-Head ISA (32-bit/16-bit variable-length instruction set);
  • Multi-core: Isomorphic multi-core, with 1 to 4 optional cores;
  • Pipeline: 12-stage;
  • Microarchitecture: Tri-issue, deep out-of-order;
  • General register: 32 32-bit GPRs; 16 128-bit VGPRs;
  • Cache: Two-level cache hierarchy; I-cache: 32 KB/64 KB (size options); D-cache: 32 KB/64 KB (size options); L2 cache: 128 KB to 2 MB (size options);
  • Cache check: Optional ECC check or parity check;
  • Bus interface: 1 128-bit master interface; 1 128-bit slave interface;
  • Memory protection: On-chip memory management unit with hardware TLB refill;
  • Floating point engine: Supports single and double precision floating point operations;
  • AI vector calculation engine: Dual-line 128-bit operation width, supporting half-precision/single-precision/8-bit/16-bit/32-bit parallel computing;
  • Multi-core coherence: Multiple cores share the L2 cache, with hardware support for cache data coherence;
  • Interrupt controller: Supports a multi-core shared interrupt controller;
  • Debugging: Supports multi-core collaborative debugging;
  • Performance monitoring: Supports a hardware performance monitoring unit;
  • AI acceleration engine: Provides dedicated acceleration instructions to accelerate various typical neural networks;
  • Hybrid branch processing: Hybrid branch prediction covering branch direction, branch target address, function return address and indirect jump address, improving instruction-fetch efficiency;
  • Data prefetching: Multi-channel, multi-mode data prefetching that improves data access bandwidth.
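The feature list above states that the AI vector engine has a dual-line 128-bit operation width supporting 8-bit/16-bit/32-bit integer and half/single-precision floating-point elements. The sketch below works out how many elements that processes in parallel per cycle; the lane arithmetic follows directly from those widths, while the assumption that both pipes handle every element type is ours, not the datasheet's.

```python
# Lane counts for a dual-pipe 128-bit vector engine, as described
# in the C860 feature list above.
VECTOR_WIDTH_BITS = 128
PIPES = 2  # "dual-line" operation width

# Element widths the datasheet lists as supported.
ELEMENT_BITS = {"int8": 8, "int16": 16, "fp16": 16, "int32": 32, "fp32": 32}

def lanes_per_cycle(elem_bits, width_bits=VECTOR_WIDTH_BITS, pipes=PIPES):
    """Parallel elements processed per cycle across all vector pipes,
    assuming each pipe can operate on the full 128-bit width."""
    return (width_bits // elem_bits) * pipes

for name, bits in ELEMENT_BITS.items():
    print(f"{name:>5}: {lanes_per_cycle(bits)} elements/cycle")
```

For example, at 8-bit precision the engine can touch 32 elements per cycle, which is why narrow integer formats are attractive for the neural-network workloads this core targets.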


© 2018 Design And Reuse. All Rights Reserved.