
d-Matrix Launches New Chiplet Connectivity Platform to Address Exploding Compute Demand for Generative AI

New Jayhawk platform capitalizes on innovative energy-efficient chiplet interconnects to improve performance and reduce data center energy consumption

SANTA CLARA, Calif. — Today, d-Matrix, a leader in high-efficiency AI compute and inference processors, announced Jayhawk, the industry’s first Open Domain-Specific Architecture (ODSA) Bunch of Wires (BoW) based chiplet platform for energy-efficient die-to-die connectivity over organic substrates. Building on the Nighthawk chiplet platform launched in 2021, the second-generation Jayhawk silicon platform extends d-Matrix’s scale-out, chiplet-based inference compute platform. d-Matrix customers will be able to use these platforms to run Generative AI applications and Large Language Model transformer workloads with a 10-20X improvement in performance.

Large transformer models are creating new demands for AI inference at the same time that memory and energy requirements are hitting physical limits. d-Matrix provides one of the first Digital In-Memory Compute (DIMC) based inference compute platforms to come to market, transforming the economics of complex transformers and Generative AI with a scalable platform built to handle the immense data and power requirements of AI inference. Improving performance can make energy-hungry data centers more efficient while reducing latency for end users in AI applications.

“With the announcement of our 2nd generation chiplet platform, Jayhawk, and a track record of execution, we are establishing our leadership in the chiplet ecosystem,” said Sid Sheth, CEO of d-Matrix. “The d-Matrix team has made great progress towards building the world’s first in-memory computing platform with a chiplet-based architecture targeted for power hungry and latency sensitive demands of generative AI.”

d-Matrix’s novel compute platform uses an ingenious combination of an in-memory compute-based IC architecture, sophisticated tools that integrate with leading ANN models, and chiplets in a block grid formation to support scalability and efficiency for demanding ML workloads. By using a modular chiplet-based approach, data center customers can refresh compute platforms on a much faster cadence using a pre-validated chiplet architecture. To enable this, d-Matrix plans to build chiplets based on both BoW and UCIe-based interconnects, enabling a truly heterogeneous computing platform that can accommodate third-party chiplets.

“d-Matrix has moved quickly to seize the chiplet opportunity, which should give them a first-mover advantage,” said Karl Freund, Founder and Principal Analyst at Cambrian-AI Research. “Anyone looking to add an AI accelerator to their SoC design would do well to investigate this new approach for efficient AI.”

The Jayhawk chiplet platform features:

  • 3 mm, 15 mm, and 25 mm trace lengths on organic substrate

  • 16 Gbps/wire high-bandwidth throughput

  • 6-nm TSMC process technology

  • <0.5 pJ/bit energy efficiency
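
The published throughput and energy figures allow a rough back-of-envelope power estimate. The sketch below treats 0.5 pJ/bit as an upper bound; the 64-wire bundle width is a hypothetical example for illustration, not a published d-Matrix specification.

```python
# Back-of-envelope link power from the published Jayhawk figures.
GBPS_PER_WIRE = 16   # 16 Gbps/wire throughput
PJ_PER_BIT = 0.5     # <0.5 pJ/bit energy efficiency (treated as an upper bound)

def link_power_mw(wires: int) -> float:
    """Upper-bound link power in milliwatts for `wires` wires at full rate."""
    bits_per_sec = wires * GBPS_PER_WIRE * 1e9
    watts = bits_per_sec * PJ_PER_BIT * 1e-12  # energy/bit * bits/s
    return watts * 1e3  # W -> mW

# One wire at full rate dissipates at most ~8 mW:
print(link_power_mw(1))   # -> 8.0
# A hypothetical 64-wire bundle moves 1024 Gbps (128 GB/s) in under ~512 mW:
print(link_power_mw(64))  # -> 512.0
```

At these rates, even a wide die-to-die bundle stays well under a watt, which is the efficiency argument the platform makes for scale-out chiplet interconnects.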

Jayhawk is currently available for demos and evaluation. d-Matrix will be showcasing the Jayhawk platform at the Chiplet Summit, Jan 24-26, in San Jose, CA.

About d-Matrix

d-Matrix is building a new way of doing datacenter AI inferencing at scale using in-memory computing (IMC) techniques with chiplet-level scale-out interconnects. Founded in 2019, d-Matrix has attacked the physics of memory-compute integration using innovative circuit techniques, ML tools, software, and algorithms, solving the memory-compute integration problem, which is the final frontier in AI compute efficiency. Learn more at dmatrix.ai.
