Why eMRAMs are the Future for Advanced-Node SoCs

Bhavana Chaurasia, Rahul Thukral

Jan 23, 2023 / 5 min read

Our intelligent, interconnected, data-driven world demands more computation and capacity. Consider the variety of smart applications we now have. Cars can transport passengers to their destinations using local and remote AI decision making. Robot vacuum cleaners keep our homes tidy. Smart watches can detect a fall and call emergency services. With this growth in computation comes greater demand for memory capacity, along with an absolute need to reduce system-on-chip (SoC) power, especially in battery-operated devices.

As more sources generate data, that data needs to be processed and accessed swiftly—especially for always-on applications. Embedded Flash (eFlash) technology, a traditional memory solution, is nearing its end, as scaling it below 28nm is prohibitively expensive. In response, designers of IoT and edge-device SoCs, along with other AI-enabled chips, are seeking a low-cost, area- and power-efficient alternative to support their growing appetite for memory.

As it turns out, the memory solution ideal for low-power, advanced-node SoCs isn't so new at all. Embedded magnetoresistive random-access memory (eMRAM) emerged about two decades ago but is now seeing renewed adoption thanks to its high capacity, high density, and ability to scale to smaller geometries. In this blog post, we'll take a closer look at how IoT and edge devices are driving shifts away from traditional memory technologies, why eMRAM is taking off now, and how Synopsys is helping to ease the process of designing with eMRAM.


How Is the Memory Landscape Changing?

While memory is ubiquitous in our smart-everything world, the memory technology landscape is changing quickly, with power becoming a key criterion. High-performance computing, cloud, and AI applications need to conserve dynamic power, while mobile, IoT, and edge applications are more concerned with leakage current. Moving to smaller process technologies typically provides power, performance, and area (PPA) benefits; however, at smaller nodes, dynamic and leakage power scale differently. As a result, traditional memory technologies that have long been reliable for many designs, yet consume significant amounts of energy, are proving inadequate for advanced-node SoCs in space-constrained applications such as IoT and edge devices.
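
To see why the two concerns diverge, consider the first-order, textbook power model: dynamic power scales with switching activity, capacitance, supply voltage squared, and frequency, and is only paid while the chip is active, whereas leakage power is paid whenever the supply is on. That continuous cost is why always-on, battery-powered designs fixate on leakage. The sketch below plugs purely illustrative numbers into these formulas; none of them describe a real SoC or memory macro.

```python
# First-order, textbook power model with purely illustrative numbers
# (not figures for any real SoC or memory macro).

def dynamic_power_w(activity: float, cap_f: float, vdd_v: float, freq_hz: float) -> float:
    """Switching power: P_dyn = alpha * C * Vdd^2 * f."""
    return activity * cap_f * vdd_v ** 2 * freq_hz

def leakage_power_w(vdd_v: float, leak_current_a: float) -> float:
    """Static power: P_leak = Vdd * I_leak, paid whenever the supply is on."""
    return vdd_v * leak_current_a

if __name__ == "__main__":
    # Hypothetical memory subsystem: 2 nF switched capacitance, 10% activity,
    # 0.9 V supply, 500 MHz clock, and 5 mA of leakage current.
    print(f"Dynamic: {dynamic_power_w(0.1, 2e-9, 0.9, 500e6) * 1e3:.1f} mW while active")
    print(f"Leakage: {leakage_power_w(0.9, 5e-3) * 1e3:.1f} mW around the clock")
```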

For years, eFlash has been a conventional and prominent source of high-density, on-chip non-volatile memory (NVM). However, eFlash is simply too taxing on the system power budget for small, battery-powered applications. What's more, the cost of enabling Flash technology below 28nm is quite high, limiting the ability of design teams to move to advanced technology nodes.

The semiconductor industry has continued researching different NVM solutions, like spin-transfer torque MRAM (STT-MRAM), phase-change RAM (PCRAM), and resistive RAM (RRAM). One particular type—eMRAM—has emerged as an ideal fit for the demands of many advanced-node SoCs.

How eMRAM Meets the Need for Low-Power Memory

Unlike conventional embedded memories such as SRAM and Flash, which store information as electric charge, eMRAM stores data as magnetic spin. It is a spintronic technology: a stack of ferromagnetic and non-magnetic layers forms a magnetic tunnel junction (MTJ). The MTJ holds its polarization even when power is removed, so the stored data is retained and overall system-level power consumption drops significantly. Compared with options like SRAM, eMRAM offers smaller area, lower leakage, higher capacity, and better radiation immunity. As a result, a single die can carry more memory with eMRAM, or a design using eMRAM can be smaller for the same amount of memory than if it had used SRAM. And against options like PCRAM and RRAM, eMRAM is less sensitive to high temperature, delivers better production yields, and offers longer endurance (the ability to sustain many read/write cycles while retaining data over many years). Major fabs already have 22nm FinFET-based eMRAM in production.


Figure 1. A unified eMRAM solution can lead to lower latency and interface power.

eMRAM has its roots in MRAM technology, which has been around for decades. As processors moved from 28nm down to 22nm and prevailing memory technologies could no longer scale to keep pace, an inflection point emerged and MRAM technology—in the form of eMRAM—was rediscovered. eMRAM offers clear advantages for space- and power-constrained applications like IoT and the edge. Over time, as its speeds improve, eMRAM could broaden its reach to become a universal memory resource. Automotive designs, for instance, rely on MCUs that need embedded memory, traditionally eFlash; at 22nm and below, eMRAM offers a reliable option at automotive temperature grades. Industrial and other high-performance embedded applications could also benefit from moving to eMRAM.

Addressing eMRAM Design Challenges

While eMRAMs present attractive advantages, designers should also be aware of what they'll need to address when designing with this type of memory. For one, magnetic immunity must be accounted for. Memory designers test the MRAM's immunity level, specified in gauss or oersted, and communicate that spec to their chip design customers. Any element near the chip that generates a magnetic field, such as an inductor coil, can impact eMRAM performance, so chip designers need to place those elements far enough away from the eMRAM. Chip packaging with a magnetic shield can also protect the eMRAM in end devices with large magnetic fields, such as refrigerators.
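
For a rough feel for what such a placement check involves, here is a back-of-the-envelope sketch that treats a nearby coil as a magnetic dipole, estimates its stray field at the eMRAM macro, and compares it against an immunity spec with a safety margin. The coil current, coil size, 100 Oe immunity figure, and 2x margin are all hypothetical, and the dipole approximation only holds at distances several times the coil radius; real keep-out rules come from the foundry and the IP vendor.

```python
import math

# In air, 1 oersted corresponds to 1 gauss, and 1 tesla = 10,000 gauss.
TESLA_TO_OERSTED = 1e4

def coil_stray_field_oe(current_a: float, coil_radius_m: float, distance_m: float) -> float:
    """Estimate a small coil's stray field at a given distance, in oersted.

    Uses the on-axis magnetic-dipole approximation B = mu0 * m / (2 * pi * r^3),
    which is only reasonable when distance_m is several times coil_radius_m.
    """
    mu0 = 4e-7 * math.pi                                 # vacuum permeability, T*m/A
    moment = current_a * math.pi * coil_radius_m ** 2    # dipole moment m = I * A
    b_tesla = mu0 * moment / (2 * math.pi * distance_m ** 3)
    return b_tesla * TESLA_TO_OERSTED

def meets_immunity(field_oe: float, immunity_oe: float, margin: float = 2.0) -> bool:
    """True if the stray field stays below the immunity spec with a safety margin."""
    return field_oe * margin <= immunity_oe

if __name__ == "__main__":
    # Hypothetical numbers: 2 A through a 1 mm radius package inductor,
    # evaluated 4 mm from the eMRAM macro, against an illustrative 100 Oe spec.
    field = coil_stray_field_oe(current_a=2.0, coil_radius_m=1e-3, distance_m=4e-3)
    print(f"Estimated stray field at 4 mm: {field:.2f} Oe")
    print("Placement OK?", meets_immunity(field, immunity_oe=100.0))
```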

Read operations are particularly sensitive, and write activity on the device can disturb them. Error-correcting codes (ECC) help lower failure rates by compensating for the process variation that leads to reliability issues.
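
To illustrate the idea behind ECC, here is a minimal Hamming(7,4) encoder/decoder that corrects any single flipped bit in a stored codeword. This is purely illustrative: production eMRAM macros typically use wider codes (for example SECDED or multi-bit correction), and the exact scheme used in a given macro isn't described here.

```python
def hamming74_encode(d):
    """Encode 4 data bits (list of 0/1) into a 7-bit Hamming codeword.

    Bit layout (1-indexed): p1 p2 d1 p3 d2 d3 d4, with parity bits at the
    power-of-two positions 1, 2, and 4.
    """
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(code):
    """Return (data_bits, corrected_position); position 0 means no error seen."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity check over positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity check over positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity check over positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3  # non-zero syndrome = 1-indexed error bit
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]], syndrome

if __name__ == "__main__":
    word = [1, 0, 1, 1]
    stored = hamming74_encode(word)
    stored[5] ^= 1                   # a single bit flips in the array
    data, fixed = hamming74_decode(stored)
    assert data == word
    print(f"Single-bit upset at position {fixed} corrected; data intact")
```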

Magnetic shields and ECC are two of many techniques that help address the challenges of designing with eMRAMs. For long-lasting endurance and reliability of on-chip eMRAM, built-in self-test (BIST), repair, and diagnostic solutions, along with a robust silicon qualification methodology, go a long way. Time to market is another important consideration: for faster turnaround of eMRAM designs, designers can turn to compiler IP that quickly compiles eMRAM hard macros.
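
To give a flavor of what the BIST portion of such a solution does, the sketch below models a classic March C- test in software: it walks the address space writing and reading back patterns in both directions and reports any mismatches for repair or diagnosis. It is a toy model over a plain array, not the configurable, hardware-based algorithms of a commercial memory test solution.

```python
def march_c_minus(read, write, size):
    """Run the March C- element sequence over addresses 0..size-1.

    Elements: up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); up(r0).
    Returns a list of (address, expected, got) tuples for every mismatch.
    """
    failures = []

    def check(addr, expected):
        got = read(addr)
        if got != expected:
            failures.append((addr, expected, got))

    up, down = range(size), range(size - 1, -1, -1)
    for a in up:   write(a, 0)
    for a in up:   check(a, 0); write(a, 1)
    for a in up:   check(a, 1); write(a, 0)
    for a in down: check(a, 0); write(a, 1)
    for a in down: check(a, 1); write(a, 0)
    for a in up:   check(a, 0)
    return failures

if __name__ == "__main__":
    mem = [0] * 16
    # Inject a stuck-at-1 fault at address 5 so the test has something to flag.
    def read(a): return 1 if a == 5 else mem[a]
    def write(a, v): mem[a] = v
    print(march_c_minus(read, write, len(mem)))
```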

Achieving Faster Turnaround Time of Reliable, Low-Power Memory Designs

As a longtime developer of memory solutions, Synopsys provides a variety of offerings to help accelerate the development of high-quality eMRAM, including:

  • Synopsys eMRAM Compiler IP, which provides a configurable memory IP solution, with options to optimize instance size and enable features such as an ECC scheme. The IP is designed to deliver just-in-time compilation of eMRAM hard macros within a few minutes, reducing turnaround time and accelerating time to market.
  • Synopsys Self-Test and Repair (STAR) Memory System™, which provides a full suite of test, repair, and diagnostic capabilities for eMRAM, optimizing test time without sacrificing test coverage. Configurable memory BIST and repair algorithms mitigate MRAM defects.
  • Synopsys STAR ECC Compiler IP, which improves in-field reliability by enabling multi-bit detection and correction. The IP can also be used to maximize manufacturing yield, compensating for the stochastic nature of eMRAM technology.
  • Synopsys Silicon Lifecycle Management Family, which provides insight into the silicon to allow tweaks to performance levels or margins for better operation.

Indeed, memory will continue to be integral to every electronic device or system we use. By delivering high capacity, low power, and process technology scaling, eMRAM is poised to take on even bigger roles in next-generation, high-performing embedded applications. Could eMRAM make your next product better? To learn more about Synopsys eMRAM IP, contact your Synopsys sales representative.
