Cadence Supports NVMe

Last year, a new standard was overlaid on PCI Express (PCIe) to rethink the way non-volatile memory (NVM) is accessed. Until now, solid-state drive (SSD) access methods have been modeled on the mechanisms and limitations of “spinning media” – hard drives. As solid-state memories proliferate in roles that hard drives used to dominate, those mechanisms and limitations no longer fit.

The new standard that accomplishes this is called NVM Express (NVMe), and it uses the basics of PCIe to handle moving the data around, since that’s often how these memory subsystems are connected to the CPU subsystem. But the higher layer adapts PCIe to a specific NVM context.

The standard sets up submission and completion queues – up to 64K of them, each able to hold up to 64K entries; each submission queue entry is a 64-byte command (a sketch of that format follows the list below). Features include:

  • End-to-end data protection
  • No uncacheable memory-mapped I/O register reads in either the submission or completion path
  • No more than one memory-mapped I/O write to submit a command
  • Queue priority and arbitration
  • Ability to do a 4K-byte read in a single 64-byte command
  • A small basic command set (Read, Write, Write Uncorrectable, Flush, Compare, Dataset Management)
  • Support for interrupt aggregation (including message-signaled interrupts)
  • Multiple namespaces – a device can be decoupled from a “volume”
  • Support for I/O virtualization (like SR-IOV)
  • Error reporting and management
  • Ability to support low-power modes
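
To make that 64-byte command format concrete, here is a minimal C sketch of a submission queue entry and of building the 4-KB read mentioned above, assuming 512-byte logical blocks and a 4-KB-aligned data buffer (so a single PRP entry is enough). The struct and function names are illustrative only – they come from neither the NVMe specification’s headers nor Cadence’s IP.

    #include <stdint.h>
    #include <string.h>

    /* Illustrative 64-byte NVMe submission queue entry. */
    struct nvme_sqe {
        uint8_t  opcode;      /* command dword 0, bits 7:0 */
        uint8_t  flags;       /* fused operation, PRP/SGL selection */
        uint16_t cid;         /* command identifier */
        uint32_t nsid;        /* namespace ID */
        uint32_t rsvd[2];
        uint64_t mptr;        /* metadata pointer */
        uint64_t prp1;        /* data pointer: PRP entry 1 */
        uint64_t prp2;        /* data pointer: PRP entry 2 */
        uint32_t cdw10;       /* command-specific dwords 10..15 */
        uint32_t cdw11;
        uint32_t cdw12;
        uint32_t cdw13;
        uint32_t cdw14;
        uint32_t cdw15;
    };

    /* Build a Read (opcode 0x02) of 4 KB – eight 512-byte blocks – starting at lba.
     * The block count in CDW12 is zero-based, so eight blocks is encoded as 7.
     * Assumes buf_phys points at a 4-KB-aligned buffer, so PRP1 alone suffices. */
    static void build_4k_read(struct nvme_sqe *sqe, uint16_t cid,
                              uint32_t nsid, uint64_t lba, uint64_t buf_phys)
    {
        memset(sqe, 0, sizeof(*sqe));
        sqe->opcode = 0x02;                  /* NVM Read */
        sqe->cid    = cid;
        sqe->nsid   = nsid;
        sqe->prp1   = buf_phys;
        sqe->cdw10  = (uint32_t)lba;         /* starting LBA, low dword */
        sqe->cdw11  = (uint32_t)(lba >> 32); /* starting LBA, high dword */
        sqe->cdw12  = 8 - 1;                 /* number of logical blocks, zero-based */
    }

The host writes entries like this into a submission queue in memory and then rings that queue’s tail doorbell once – which is exactly the “no more than one memory-mapped I/O write to submit a command” property in the list above.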

There are register sets for the following (sketched in code after this list):

  • Declaring what a particular controller supports
  • Device failure status
  • Configuring an admin queue for managing I/O queues
  • Doorbell registers for submission and completion queues
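
For a rough sense of where those register sets live in the controller’s memory-mapped (BAR0) space, here is a hedged C sketch using the offsets and doorbell-stride rule defined by the NVMe specification. It illustrates the standard’s register map only; it says nothing about how Cadence exposes these registers in their IP.

    #include <stdint.h>

    /* Offsets into the controller's memory-mapped register space (BAR0),
     * per the NVMe specification; shown here purely for illustration. */
    enum {
        NVME_REG_CAP  = 0x00,   /* capabilities: what this controller supports */
        NVME_REG_CSTS = 0x1C,   /* controller status, including the fatal-error bit */
        NVME_REG_AQA  = 0x24,   /* admin queue attributes (admin SQ/CQ sizes) */
        NVME_REG_ASQ  = 0x28,   /* admin submission queue base address */
        NVME_REG_ACQ  = 0x30,   /* admin completion queue base address */
        NVME_REG_DBS  = 0x1000, /* first doorbell register */
    };

    /* Doorbell stride comes from CAP.DSTRD (bits 35:32): stride = 4 << DSTRD bytes. */
    static inline uint32_t doorbell_stride(uint64_t cap)
    {
        return 4u << ((cap >> 32) & 0xFu);
    }

    /* Tail doorbell for submission queue qid; the matching completion queue's
     * head doorbell sits one stride further along. */
    static inline uint32_t sq_tail_doorbell(uint64_t cap, uint16_t qid)
    {
        return NVME_REG_DBS + (2u * qid) * doorbell_stride(cap);
    }

    static inline uint32_t cq_head_doorbell(uint64_t cap, uint16_t qid)
    {
        return NVME_REG_DBS + (2u * qid + 1u) * doorbell_stride(cap);
    }

The admin queue configured through AQA/ASQ/ACQ is then used to create and delete the I/O queues themselves.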

Cadence just announced their NVMe IP offering, which is based on their existing PCIe IP; the NVMe layer is new, along with the firmware needed to support it. They’ve optimized the underlying PCIe implementation for this particular context, making the overall implementation smaller. They’ve merged the APIs up to the top level so that there is one interface regardless of which layer might be accessed by any given operation. They’ve also coordinated their DMAs for smoother operation and less contention.

They’ve hardware-accelerated the basic commands; the command set itself can be extended through the firmware.
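
That extensibility has a hook in the protocol itself: I/O command opcodes 80h–FFh are reserved for vendor-specific commands. As a minimal sketch – reusing the illustrative nvme_sqe struct above, with a made-up opcode and a made-up argument – a vendor-defined command is just another 64-byte entry:

    /* Hypothetical vendor-specific I/O command (NVMe reserves I/O opcodes
     * 0x80-0xFF for vendor use). Reuses struct nvme_sqe and the headers from
     * the earlier sketch; the opcode and field meanings here are invented. */
    #define MY_VENDOR_OPCODE 0x85

    static void build_vendor_cmd(struct nvme_sqe *sqe, uint16_t cid, uint32_t nsid,
                                 uint64_t buf_phys, uint32_t vendor_arg)
    {
        memset(sqe, 0, sizeof(*sqe));
        sqe->opcode = MY_VENDOR_OPCODE;
        sqe->cid    = cid;
        sqe->nsid   = nsid;
        sqe->prp1   = buf_phys;      /* data buffer, if the command moves data */
        sqe->cdw12  = vendor_arg;    /* meaning of CDW10-15 is up to the vendor */
    }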

The PCIe PHY is hard IP; the rest is RTL and firmware. They’ve got a tool that configures the IP from an XML description, which then drives their implementation tools.

You can find out more about Cadence’s NVMe IP in their announcement.
