Deci and Intel Collaborate to Accelerate Journey Towards More Scalable AI

With the power of Deci’s Automated Neural Architecture Construction (AutoNAC) technology, developers are better equipped to build, optimize, and deploy more powerful deep learning models on Intel chips

Tel Aviv, November 10, 2022—Deci, the deep learning company harnessing Artificial Intelligence (AI) to build AI, today announced a new strategic collaboration with Intel to accelerate the journey towards more scalable AI. By combining Deci’s proprietary Automated Neural Architecture Construction (AutoNAC) technology with Intel chip architectures, the two companies will further optimize deep learning inference, enabling developers everywhere to build, optimize, and deploy more accurate, fast, and efficient models for the edge, data center, and cloud.

As the Deci-Intel collaboration continues, Deci recently joined the Intel Disruptor Program, which provides technical enablement and go-to-market activities for participants. Deci was also one of the first companies to join Intel Ignite, an accelerator program designed to support innovative startups in advancing new technologies in disruptive markets.

Deci is now working with Intel to demonstrate AutoNAC’s performance on 4th Gen Intel Xeon Scalable processors, codenamed Sapphire Rapids. Together, Deci and Intel are taking significant steps towards enabling breakthrough deep learning inference on CPUs—a break from tradition, as GPUs have generally been the default choice for AI tasks.

“As a result of our collaboration with Intel, we’ve seen exciting achievements in such a short period – deep learning at scale on CPUs is more feasible than ever before,” said Yonatan Geifman, CEO and Co-Founder of Deci. “We expect that our joint activities will only further propel AI accessibility, dramatically optimizing deep learning inference for any task in any environment.”

Deci and Intel first announced their broader strategic business and technology collaboration in 2021, following several groundbreaking submissions at MLPerf. In 2022, Deci announced the results for both its Computer Vision (CV) and Natural Language Processing (NLP) models submitted to the MLPerf v2.0 Datacenter Open division. On several Intel CPUs, Deci’s AutoNAC generated models that delivered breakthrough accuracy and throughput performance: for its CV submission, Deci delivered a +1.74% improvement in accuracy and a 4x improvement in throughput, while for its NLP submission, Deci improved accuracy by +1.03% and throughput by 5x. This continued Deci’s 2021 MLPerf results, in which, on several Intel CPUs, Deci reduced the submitted models’ latency by a factor of up to 11.8x and increased throughput by up to 11x, all while preserving the models’ accuracy within 1%.
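For context on how speedup figures like those above are typically derived, the sketch below shows the standard arithmetic: latency speedup is the ratio of baseline to optimized per-query latency, and throughput gain is the ratio of optimized to baseline queries per second. This is an illustrative example, not Deci's benchmark code; all measurements in it are hypothetical values chosen to reproduce the reported 11.8x and 11x factors.

```python
# Illustrative sketch (not Deci's benchmark code): how latency and
# throughput speedup factors like those reported at MLPerf are computed
# from raw measurements. All numbers below are hypothetical.

def latency_speedup(baseline_ms: float, optimized_ms: float) -> float:
    """Speedup factor: how many times faster the optimized model answers one query."""
    return baseline_ms / optimized_ms

def throughput_gain(baseline_qps: float, optimized_qps: float) -> float:
    """Throughput improvement factor, measured in queries per second."""
    return optimized_qps / baseline_qps

# Hypothetical measurements: baseline model vs. an optimized model
base_latency_ms, opt_latency_ms = 118.0, 10.0   # per-query latency
base_qps, opt_qps = 90.0, 990.0                 # sustained throughput

print(f"latency speedup: {latency_speedup(base_latency_ms, opt_latency_ms):.1f}x")
print(f"throughput gain: {throughput_gain(base_qps, opt_qps):.1f}x")
```

Note that the two ratios need not match: latency is measured per query, while throughput also reflects batching and parallelism, which is why MLPerf reports them separately.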

About Deci

Deci enables deep learning to live up to its true potential by using AI to build better AI. With the company’s deep learning development platform, AI developers can build, optimize, and deploy faster and more accurate models for any environment, including cloud, edge, and mobile, allowing them to revolutionize industries with innovative products. The platform is powered by Deci’s proprietary Automated Neural Architecture Construction (AutoNAC) technology, which automatically generates and optimizes deep learning model architectures and allows teams to accelerate inference performance, enable new use cases on limited hardware, shorten development cycles, and reduce computing costs. Founded by Yonatan Geifman, Jonathan Elial, and Professor Ran El-Yaniv, Deci’s team of deep learning engineers and scientists is dedicated to eliminating production-related bottlenecks across the AI lifecycle.
