

Cadence is a pivotal leader in electronic design, building upon more than 30 years of computational software expertise. The company applies its underlying Intelligent System Design strategy to deliver software, hardware and IP that turn design concepts into reality.

 
 

Cadence Blog – Latest Posts

Webinar: Self-Propulsion CFD Simulations: Get Them Right and Fast
Cadence helps the maritime industry achieve ship contract speed while respecting the new environmental regulations implemented in 2023. To accomplish this, self-propulsion simulations are crucial and must provide accurate predictions of propeller thrust, torque, rotation rate, power, and the vessel's dynamic position. Based on many years of experience working closely with our customers, Cadence offers many dedicated features for achieving highly accurate simulations with minimum turnaround time. In this webinar, we explore some of the main topics:
- The benefits of using an actuator disc versus a real propeller
- Complex CFD simulations for self-propulsion in various sea conditions
- Fully automated setup and analysis

Through a live demonstration, we will show you how to set up the simulation in the dedicated wizard within the Fine Marine interface.

Webinar Details
Date: Thursday, February 23, 2023
Time: 8am PST | 10am CST | 5pm CET

Don't Delay - Register Today
This essential webinar for everyone doing marine CFD is certain to fill up, so register today.
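The summary above mentions predicting propeller thrust, torque, rotation rate, and power together. As a reminder of how these quantities relate, here is a sketch of the standard open-water propeller definitions (textbook relations, not Fine Marine specifics; the example numbers are made up):

```python
import math

def delivered_power(torque, n):
    """Delivered power P_D = 2*pi*n*Q, with n in rev/s and Q in N*m (result in W)."""
    return 2.0 * math.pi * n * torque

def thrust_coefficient(thrust, rho, n, d):
    """Non-dimensional thrust coefficient K_T = T / (rho * n^2 * D^4)."""
    return thrust / (rho * n**2 * d**4)

def torque_coefficient(torque, rho, n, d):
    """Non-dimensional torque coefficient K_Q = Q / (rho * n^2 * D^5)."""
    return torque / (rho * n**2 * d**5)

# Illustrative numbers: seawater, a 4 m propeller turning at 2 rev/s.
p_d = delivered_power(torque=8.0e5, n=2.0)
k_t = thrust_coefficient(thrust=3.0e5, rho=1025.0, n=2.0, d=4.0)
print(f"P_D = {p_d / 1e6:.2f} MW, K_T = {k_t:.3f}")
```

A self-propulsion simulation typically iterates the propeller rotation rate until the computed thrust balances the hull resistance at the target speed; coefficients like these are how the resulting operating point is usually reported.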
Feb 6, 2023
Cadence Unified AI/ML Solution Helps Renesas Accelerate its Verification Curve by More than Half
With the surge in usage requirements and increasing customer demands, hardware design is quickly becoming more complex. The rapid change in market trends, with a greater focus on technologies such as electric vehicles, drives the demand for efficient power management and high-performance processing. Verification throughput continues to be a bottleneck: as SoC designs increase in size, so do the complexities, and adding more CPU cores and running more tests in parallel does not scale sufficiently. All this adds to the strain on verification engineers verifying such complex designs.

Verification is never complete; it is over when you run out of time. The goal is to make the verification process converge before you run out of time. Everyone wants to see key metrics converge to target goals, and to do so within stringent cost and time constraints. Imagine sitting in a cockpit, feeding inputs to the black box, and waiting for the magic to happen (press a button, and your job is done). The question of the hour is how artificial intelligence and machine learning (AI/ML) can help us run regressions faster, save debug time, meet verification/coverage goals, and manage resources and money. In other words, how can we use AI/ML to increase efficiency in verification?

Renesas, a pioneer in semiconductor design, was facing similar challenges. Market pressure and strict tapeout schedules pushed them to look for a technology and methodology to optimize simulation regressions and accelerate the design verification process throughout product development. They wanted to reduce risk, find as many bugs as possible early, be able to debug fast, and meet the demands of their end users. Renesas started exploring the Cadence Xcelium Machine Learning App, which uses machine learning to optimize simulation regressions into a well-condensed regression.
This optimized regression was then used to reproduce almost the same coverage as the original regression and to find design bugs quickly by simulating the corner-case scenarios possible with the existing randomized testbench. Renesas achieved excellent results, saving 66% of their complete random verification regression cycle, a considerable saving of resources, cost, and time. The Xcelium ML App helped them achieve 2.2X compression with 100% coverage score regain. Furthermore, when using the ML regression on the first derivative, Renesas achieved a 3.6X reduction with 100% coverage score regain. The ML regression required 1,168 runs, roughly one-third of the original regression's 3,774 runs. This helped them get ahead of the curve 30% faster and meet time-to-market demands.

In addition to saving resources and time and accelerating coverage closure with the Xcelium ML App, they evaluated Cadence's Verisium AI-Driven Verification Platform, including three Verisium apps, boosting verification productivity by up to 6X and saving ~27 work hours. Renesas evaluated the following apps.

Verisium AutoTriage, an ML-based automated failure triage, automatically groups tests failing due to the same underlying bug. Renesas saw a 70% reduction of effort in triage, translating to a 3.3X efficiency improvement.

Verisium SemanticDiff helped Renesas identify causes of failure much more efficiently than conventional diff tools. The results from SemanticDiff focus on context and thus provide a coherent analysis of differences; otherwise, it is cumbersome for an engineer to walk through the output of diff commands line by line. Using this app, the user achieved a massive reduction in debug time and a remarkable efficiency improvement.
Verisium WaveMiner helped identify difference points efficiently: the user can visualize the differences in waveforms between PASS and FAIL cases, making it convenient to compare passing and failing waveforms and source code. Renesas saw an 89%–97% reduction in debug time, translating to a 9X efficiency improvement.

Together, Cadence's Verisium Platform and Xcelium ML App provide a suite of applications leveraging AI/ML to optimize verification workloads, boost coverage, and accelerate root-cause analysis of design bugs on complex SoCs. Renesas leveraged the AI platform and increased their verification productivity by up to 10X.
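The run-count numbers quoted above are easy to sanity-check. A quick bit of arithmetic, using only the figures from the post, confirms the "roughly one-third" claim (note that the separate 2.2X and 3.6X compression figures refer to specific regressions, not this ratio):

```python
# Run counts quoted in the post.
original_runs = 3774
ml_runs = 1168

compression = original_runs / ml_runs      # how many times fewer simulations
saved = 1 - ml_runs / original_runs        # fraction of runs avoided

print(f"{compression:.2f}x fewer runs, {saved:.0%} of runs avoided")
```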
Feb 6, 2023
DATE 2023 in Antwerp Preview
Design, Automation and Test in Europe (DATE) is coming up in April. It will be in person, in Antwerp, from 17th to 19th April at the Flanders Meeting and Convention Centre Antwerp (FMCCA). The detailed program is now available. A word of warning: don't print it sight-unseen; it is 170 pages long. Note that all times given in this post are Central European Time (CET), the local time in Antwerp.

I'm the publicity/press chair again this year (along with UK-based Alec Vogt of Synopsys). I asked General Chair Ian O'Connor (who has a Scottish first name, an Irish last name, and is Professor for Heterogeneous and Nanoelectronics Systems Design in the Department of Electronic, Electrical and Control Engineering at the École Centrale de Lyon in France) why people should attend DATE this year. He told me:

After three years of online editions due to COVID-19, DATE needed to put interaction, as well as reinforcing and rebuilding links in the community, at the heart of the event, and so the 2023 edition has a substantially reworked format. The main thing that attendees will notice is that the majority of regular papers will be presented in a new format of technical sessions centering on live interactions: about 30 minutes of paper pitches, then an hour of interactive presentations around individual stations for presenting the work (the more creative the better!). Also new this year are two "Late Breaking Results" sessions with breakthrough approaches, research directions, and results, and two "Unplugged" sessions around the "Digital-X" theme; these will feature direct exchanges to formulate timely challenges as problems and find inspiration for solution approaches. We also have new Special Day themes on Human-AI Interaction and Personalized Medicine, and more concentrated formats for workshops and embedded tutorials, among many other novelties. Overall, we have condensed DATE to three days, and we want to make them count!
We hope that in this way the community can actually do what DATE is for: meeting, discussing, and exchanging on the latest progress in design and design automation.

Keynotes
Of course there are keynotes.

Monday
At 9am, Edith Beigné of Meta Reality Labs US will present Building the Metaverse: Augmented Reality Applications and Integrated Circuit Challenges. She is followed at 9.45am by Dirk Elias of Bosch, Germany, who will present The Cyber-Physical Metaverse — Where Digital Twins and Humans Come Together. Over lunch at 1pm is the Distinguished Lecturer Lunchtime Keynote, where Jan Rabaey of UC Berkeley and imec will present The Future of Design. Then, at 5.30pm (still on Monday), is the Career Fair Keynote, where Dragomir Milojevic of imec will present Design Methodologies and Tools for Technology-Aware Design of 3D Integrated Circuits.

Tuesday
Over lunch at 1pm, Catherine Pelachaud of CNRS-ISIR, Sorbonne Université, will present Interacting with Socially Interactive Agent.

Wednesday
Over lunch at 1pm, Metin Sitti of the Max Planck Institute for Intelligent Systems in Germany will present Mobile Microrobotics.

Workshops
There are six workshops during DATE 2023. Note that the numbers of the workshops are not in chronological order.
- W01 Eco-ES: Eco-design and circular economy of Electronic Systems, Monday 2pm to 6pm
- W02 3D Integration: Heterogeneous 3D Architectures and Sensors, Wednesday 2pm to 6pm
- W03 Workshop on Nano Security: From Nano-Electronics to Secure Systems, Tuesday 8.30am to 12.30pm
- W04 3rd Workshop on Open-Source Design Automation (OSDA 2023), Monday 2pm to 6pm
- W05 Hyperdimensional Computing and Vector Symbolic Architectures for Automation and Design in Technology and Systems, Wednesday 8.30am to 12.30pm
- W06 Can Autonomy be Safe?, Tuesday 2pm to 6pm
Young People Programs
As usual, there are various events during DATE targeted at young people, largely meaning PhD students or students who recently graduated from a PhD program. One of the organizers is Cadence's Anton Klotz, who runs the European node of the Cadence Academic Network. I talked to him to get some more color:

This year the Young People Programme (the organizers are insisting on British English, so it is Programme) is again full of various events: career fairs for academic and industry career paths, a keynote from Professor Dragomir Milojevic on job opportunities arising from More Moore and More-than-Moore progress, and a career panel with young professionals from sponsoring companies. Additionally, there will be a Student Teams fair, where teams designing systems like racing cars, hyperloop pods, or hydrogen-powered planes can present their work, apply for sponsorship, and attend a workshop on a topic useful for the design of such systems.

I'm not sure if I qualify as an "old person", but I certainly don't qualify as a "young person". I got my PhD over 40 years ago! But if you are a PhD student, either studying or recently graduated, then don't miss these events.

Register
To attend, you must register by March 31st. After that, you can show up and register onsite, but there is an additional €50 due. If you register by 16th February, the fees are €669 for paper authors and IEEE/ACM members, €459 for students, and €769 for normal admission. After 16th February, the fees go up to €779 for authors and IEEE/ACM members, €549 for students, and €879 for everyone else. There are also single-day tickets available.
Included in your registration are:
- Access to all sessions, Embedded Tutorials, and Workshops (the last two must additionally be selected in the online registration form)
- Entrance to the DATE Party (Tuesday evening, 18 April 2023), except for the student rate (students can purchase one party ticket at a 50% reduced price online)
- Refreshments during coffee breaks and a light lunch during lunch breaks

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.
Feb 6, 2023
Sunday Brunch Video for 5th February 2023
https://youtu.be/BgbCT-QcNBo

Made at DesignCon (camera: Steve Brown)
Monday: The Chiplet Summit
Tuesday: Scaling to One Million Cores on AWS
Wednesday: Where Is My Flying Car?
Thursday: Design in a System Context
Friday: Chiplet Summit: Challenges of Chiplet-Based Designs
Featured Post: Opra Turbines: Gas Dispersion Analysis and Explosion Protection of a Gas Turbine
Feb 5, 2023
Chiplet Summit: Challenges of Chiplet-Based Designs
I wrote the first post, The Chiplet Summit, from the recent Chiplet Summit in San Jose. If you have not seen that, you should probably read it first. A leitmotiv of the conference was: Moore's Law is dead. All we have left is packaging. As I said in the final summary paragraph of my earlier post:

The situation today is that single-company multi-chiplet designs are shipping in volume, tentative steps are being made with some chiplets to build ecosystems of partners around them, and the dream of a chiplet store is sufficiently far off as to remain a dream for the time being.

Today, I want to look at the technical issues that will require solutions to be able to do chiplet-based designs with chiplets from multiple companies that did not pre-plan making those specific chiplets work together. The analogy is how you can buy chips from different manufacturers and put them together on a PCB and have a working system, even though the companies that designed the chips never planned that specific system.

2.5D vs 3D
First, a clarification. We are talking about putting chiplet-based designs together on an interposer (either silicon or organic). We are not talking about true 3D design, where multiple die are stacked on top of each other. Designs like this are shipping (for example, Sony's image sensors have a three-die stack with logic, memory, and the sensor itself). However, stacking multiple die typically requires through-silicon vias (TSVs), so the die need to be very carefully designed so that everything aligns. I think it will be a very long time, if ever, before you can expect die from different vendors to stack in true 3D fashion. For now, any true 3D die stacks will be designed by a single company partitioning a large design into multiple die. There are also major thermal challenges, not just challenges aligning all the TSVs.
See my post, Design Enablement of 2D/3D Thermal Analysis and 3-Die Stack, for some preliminary work that Cadence is doing jointly with imec on just these topics. So for the rest of this post (and for the foreseeable future), "chiplet-based designs" means 2.5D designs.

Interchange Formats
If you are going to do a chiplet-based design, your design tools need to be able to read in something that describes the important aspects of the chiplets. There are two important standardization efforts.

First, TSMC announced 3Dblox at OIP last October. For details, see my post, TSMC OIP: 3DFabric Alliance and 3Dblox. 3Dblox is an open standard which (quoting from that post):
- provides generic language constructs capable of representing all current and future 3D-IC structures
- modularizes the 3D-IC structures such that EDA tools and design flows can be simpler and more efficient
- ensures standardized EDA tools and design flows are compliant with TSMC 3DFabric technology

I won't say more about 3Dblox here; read the earlier post for a deeper dive into some of the details. Cadence's tool portfolio supports 3Dblox.

The second standard is called CDXML, which stands for Chip Data Exchange Markup Language. This standard was developed by OCP, the Open Compute Project Foundation. On the first day of the Chiplet Summit, it was announced that JEDEC is working with OCP on this standard, and it will be incorporated into JEP30, JEDEC's Part Model Guidelines. At least for now, then, we have two open standards: 3Dblox and CDXML.

Communication Standards
I covered communication standards in my first post from the summit. But the summary is that there are two viable standards right now: Bundle-of-Wires (BoW), which is being used for designs in progress today, and UCIe (Universal Chiplet Interconnect Express), where IP is coming to market. See my post, UCIe PHY and Controller—To Die For.
However, the UCIe standard was regarded by participants at the summit as "not yet completely ready," although with Intel, AMD, Arm, Google, Meta, Qualcomm, and more behind it, it is expected to be the eventual winner.

Known Good Die
Packaging multiple chiplets into a single package is different from just using a single die. If you are using a single die, there is a tradeoff between the cost of a package and the cost of doing a good job of wafer sort. Testers are expensive, so doing "too good" a job of testing the die before the wafer is diced wastes money. Of course, packages cost money too, so you don't want to waste too many of them. But when you do waste a package because the die was bad, you are not wasting a die, since it was already bad.

The economics are completely different with multiple die in a package. If a die escapes wafer sort and is bad, then when it is packaged with all the other die, you are not just wasting a single bad die (and the cost of the package), you are wasting all the good die in the same package too. Plus, the cost of a package for multiple chiplets is a lot higher than a package for a single chiplet. There is thus a major premium on testing each die thoroughly before it enters the assembly process. These die are known as KGD, for Known Good Die. For more details, see my post Known Good Die.

There are some things that can be done to optimize the packaging process, such as planning to be able to test a package with only some of the die already inserted. This allows the cheap die to be put in early and then tested, and then the expensive die (like a CPU or GPU in the most advanced node) to be put in at the end. This avoids the problem of a very expensive part being sacrificed due to the failure of a very cheap part.

Testing
Testing multi-chiplet designs (and even true 3D designs) is covered by IEEE 1838-2019, IEEE Standard for Test Access Architecture for Three-Dimensional Stacked Integrated Circuits.
For more details, see my post, IEEE 1838: Taking Test into the Third Dimension. This covers how to test designs when all you have access to is the package pins.

Security
There are a lot of issues around security. You probably know that the modern way to handle security is with a hardware root of trust. For an example, see my post OpenTitan: Secure Boot with a Silicon Root of Trust or Putting the Bad Guys in an Arm Lock.

With a chiplet-based system, the first thing you need to decide is whether you trust all the chiplets, or whether there is a possibility that a bad guy has somehow compromised one or more of the chiplets that you are acquiring from quasi-strangers. The next thing you need to decide is whether to have one chiplet in charge of security (containing the secure enclave with the keys, etc.) which then validates that all the other chiplets are secure. If a lot of the chiplets contain microprocessors that need to be booted, then this can be handled centrally, or perhaps each chiplet has to handle its own secure boot.

As Scott Best of Rambus Security pointed out, a 5nm design is so complex it is close to impossible to design, never mind reverse-engineer. But a chiplet-based design is easier: when you break it down into chiplets, the SiP is only as good as the least secure chiplet. To make it worse, while it is pretty much impossible to monitor a lot of internal signals on a 5nm chip, monitoring the signals on the interposer in a multi-chiplet design is much more feasible. In practice, this means that communication between the chiplets of anything security-related needs to be encrypted. Of course, since these chiplets were never designed specifically to work together, this is not simple. The usual way to handle this is with some form of challenge-response, but this needs to be designed into each chiplet. In practice, some sort of security standard for chiplets will need to be developed.
Oh, and don't forget about side channels, such as Differential Power Analysis (DPA). If you don't know what that is, see my post, EDPS Cyber Security Workshop: "Anything Beats Attacking the Crypto Directly." Or glitching the power supply or the clock; see my post, Black Hat: Glitching Microcontrollers.

Finger Pointing
What happens when there is a failure? How do you find which chiplet is responsible, given that you probably don't understand all the internal details of all the chiplets you purchased? Although some people regard this as a big problem, I'm not sure it is all that different from working out which IP block is responsible for an SoC failure, or even which chip on a board is responsible for a board-level failure. One approach is to anticipate that this might happen and have ways of enabling and disabling various aspects of the system. In a microprocessor, these are known as "chicken bits" (Google insists I meant "chicken bites" and provided me with lots of recipes).

Special Markets
Two almost random things came up during the summit that I don't really have anywhere else to put.

Supercomputers, the very highest end of HPC, have almost always used COTS parts, "commercial off-the-shelf" parts like Intel/AMD CPUs, NVIDIA GPUs, FPGAs, and so on. As Lawrence Berkeley Laboratory's John Shalf said: "We know we cannot afford to spin our own chips from scratch." So for him, chiplets are an opportunity. They can use commercial chiplets (COTC?) and integrate them tightly into systems.

The second random thing is automotive. The automotive industry is negative on chiplets, since the mechanical issues associated with all the vibration in a car can lead to reliability problems. And remember, cars are expected to work for twenty years. On the other hand, autonomous driving is going to hit up against the reticle limit like everything else, so the industry may as well "bite the bullet" since they will need to use chiplets anyway.
As it happens, the leader of the Cadence Academic Network in Europe, Anton Klotz, was at an automotive conference this week and received a similar message:
- The volume of autonomous driving chips is not sufficient to justify the costs. By combining chiplets from different vendors on one interposer, the total costs are expected to go down.
- Chiplets are much more energy efficient than PCBs; therefore, this kind of integration is needed to increase the range of e-cars while still offering maximum performance.

Read More
Cadence obviously has a whole portfolio of products to design chiplets. But it also has products focused on doing chiplet-based designs. See my posts:
- Brian Jackson Introduces a Mystery Product at IMAPS (Shh, It's OrbitIO)
- Introducing the Integrity 3D-IC Platform for Multi-Chiplet Design
- Using Clarity 3D Solver to Analyze 3D Packaging
- EDPS: When Chips Become 3D Systems and the Challenges of 3DHI
- John Park's Webinar on Chiplets

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.
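The known-good-die economics described earlier in this post can be made concrete with a toy cost model. All of the numbers below are illustrative assumptions, not figures from the summit:

```python
def expected_scrap_cost(n_dies, die_cost, package_cost, escape_rate):
    """Expected cost scrapped per assembled package, assuming each die
    independently escapes wafer sort as a bad die with probability
    escape_rate. A scrapped package loses the package itself plus the
    good dies assembled alongside the (roughly one) bad escapee; the
    bad die itself was a loss regardless of packaging."""
    p_scrap = 1 - (1 - escape_rate) ** n_dies
    return p_scrap * (package_cost + (n_dies - 1) * die_cost)

# Single die: only the cheap package is at risk if a bad die escapes sort.
single = expected_scrap_cost(1, die_cost=50, package_cost=5, escape_rate=0.02)

# Eight chiplets: one escapee scraps seven good dies plus a pricier package.
multi = expected_scrap_cost(8, die_cost=50, package_cost=40, escape_rate=0.02)

print(f"single-die waste: ${single:.2f} per package, 8-chiplet waste: ${multi:.2f}")
```

Even with the same 2% per-die escape rate, the expected loss per package is hundreds of times larger in the multi-chiplet case, which is why there is such a premium on thorough sort and on partially assembled test insertions.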
Feb 3, 2023
Xvisio Is Helping Build the World a Metaverse
Xvisio's slogan is "Innovating machine perception capability beyond human capacity." A cross-border technology startup, their expertise lies in visual simultaneous localization and mapping (vSLAM)-based core spatial technology for robotics and interaction. With the help of Cadence, they're providing building blocks for the metaverse.

Xvisio offers two main product lines. The first, SeerSense, is a sensor fusion camera module system that can be used for robotics, AGV navigation, and even factory automation. The second, SeerLens, is an AR and VR headset that provides all the sensor fusion tasks required for AR devices. It's great for both work and entertainment, and it serves as a building block for anyone who wants to build their own AR device.

For their AR devices, Xvisio has been using Cadence's Tensilica Vision Q7 and Q8 DSPs as well as Xtensa SDKs. The wide adoption of Tensilica DSPs among chip vendors, along with the user-friendly development environment, is what led Xvisio to choose them in the first place. Because the efficient, unique, and high-quality Tensilica tools helped them effectively deploy their vSLAM technology, they're looking forward to using them for future development.

"Designed with Cadence" is a series of videos that showcases creative products and technologies that are accelerating industry innovation using Cadence tools and solutions. Learn more about how Xvisio is helping build the world a metaverse with Cadence. For more Designed with Cadence videos, check out the Cadence website and YouTube channel.
Feb 2, 2023
Design in a System Context
If you ask pretty much anyone what Cadence does, the first thing they are likely to mention is providing EDA tools for designing chips. But we actually have a much broader portfolio than that, in particular at the higher level where you are designing systems. Of course, systems contain chips. But the focus of this post is not the chip design tools but the tools and IP focused on putting systems together and verifying that they will work correctly.

I'm not going to focus on it today, but another big trend in building the most advanced systems is to build them out of chiplets. At last year's HOT CHIPS, I would guess that well over half of all the designs presented were based on chiplets. Recently, in San Jose, there was the First Annual Chiplet Summit, which I covered in several posts (with some more to come). See: The Chiplet Summit and Chiplet Summit: Challenges of Chiplet-Based Designs.

Another sign of that trend is that Intel Foundry, which has access to Intel's advanced packaging (EMIB and Foveros), is positioning itself as "ushering in the era of the system foundry." Patrick Gelsinger, Intel's CEO, said: "IFS will usher in the era of the systems foundry, marking a paradigm shift as the focus moves from system-on-a-chip to system in a package."

As the level of abstraction moves up from chip to system to end market, it becomes much more differentiated. In many ways, all chips are the same and require the same primary set of design tools. But a smartphone has very different requirements from an automotive chip in many dimensions: cost, reliability, lifetime, temperature range, and so on. This means that the portfolio of important tools and IP is more dependent on the application. So let's look at the Cadence portfolio for several different market segments: datacenter, automotive, mobile, and industrial internet-of-things (IIoT). Obviously, there are lots of tools and IP that are applicable to any market.
Almost every chip will use Virtuoso at some point, for either analog design or chip finishing. Every design will use Xcelium for SystemVerilog simulation. And so on. The focus of today's post is on the tools that are especially applicable to designs in different segments but less applicable to a generic design. That's actually an oxymoron, since nobody designs a generic chip, but you know what I mean.

Datacenter
Datacenter designs are the heart of the market segment usually called HPC, for High-Performance Computing. I see the market as split into several segments:
- x86: Using Intel and AMD parts to build x86-based processors
- Arm: Building Arm-based datacenter chips such as AWS's Graviton series or Ampere's Altra designs
- RISC-V: Less mature, but coming up fast in everyone's rearview mirror. Using the highest performance out-of-order RISC-V processors to build chips/chiplets for datacenters (for example, Ventana, Esperanto, or Tenstorrent)
- Accelerators, especially for training neural networks.
NVIDIA is dominant today, but VCs have funded an extraordinary number of AI startups (many of which will not make it, since there are way more than the market can support, but that's a topic for another day).

Cadence has a lot of IP that is foundational in this market:
- Arm ServerReady/SystemVIP
- PCIe 6.0 and derivatives like CCIX and CXL
- UCIe 1.0
- A whole portfolio of memory interfaces that I'll just call xDDRy, as shorthand for interfaces like DDR5, LPDDR4, GDDR6, and so on
- Ethernet

I don't think we have any design tools that are specifically for datacenter designs, but we do have some tools that are applicable to high-performance large designs:
- Integrity 3D-IC, for chiplet-based designs
- Cerebrus Intelligent Chip Explorer, for getting the physical design of these chips completed faster and with higher quality (PPA)
- The Dynamic Duo, Palladium Z2 and Protium X2, to get complex hardware debugged and software developed
- Helium Virtual and Hybrid Studio, which supports all of x86, Arm, and RISC-V, to enable software development based on a virtual platform
- Xcelium ML and Jasper ML, adding machine learning to verification to enable the design to be verified in a shorter time
- Clarity 3D Solver, for analyzing high-performance signals through packages, PCBs, connectors, backplanes, and more
- Celsius Thermal Solver: high-performance chips generate a lot of heat, and you need to make sure it doesn't cause thermal issues
- 6SigmaDCX, Future Facilities' product for thermal modeling at the entire datacenter level, along with physical/thermal models of pretty much anything you might want to put in a datacenter

Automotive
For automotive, we do have tools specifically for the market segment.
Well, technically, for any safety-critical application, but automotive towers above all the others:
- Midas Safety Platform, for Functional Safety (FuSa) and compliance with ISO 26262
- Xcelium Fault Simulation, to verify that whatever goes wrong will get noticed
- Features in the digital full flow for safety, such as creating safety islands isolated from other parts of the design
- Fidelity CFD, for the design of fluid flow in and around the vehicle
- Tensilica processors, from HiFi (for infotainment) to Vision (for processing camera data) and ConnX (for radar and lidar processing)
- Tensilica processors. Yes, again, because I want to emphasize compliance with ASIL-D
- The Dynamic Duo, Palladium Z2 and Protium X2, for developing the central decision-making chips and the associated software, which I've seen estimated as 100M lines of code for autonomy
- Automotive Ethernet with TSN (time-sensitive networking)

Mobile
Mobile has some special requirements, most obviously radios. The chips are not the biggest or the fastest, but they are very constrained. They have to live in small packages in small enclosures (phones), which makes thermal issues especially important. If we expand our horizon from the smartphone itself to include things like earbuds, the physical constraints are even more severe.
- Celsius Thermal Solver (along with Voltus), for analyzing all the thermal issues
- AWR RF Design, for designing radios and antennas
- Virtuoso, for analog design of the chips other than the application processor (AP), such as power management, networking, sensors, and so forth
- Tensilica HiFi, for high-quality sound processing (audio) and voice-activated functionality

Industrial IoT at the Edge
Industrial IoT is very varied, so there is nothing I can think of that is essential to every IIoT system (apart from the basic chip-design tools that every chip requires).
But a few IP blocks and tools worth calling out are:
- Tensilica Floating Point DSP, for low-power AI and machine learning
- Tensilica Vision processors, for camera-based applications
- Tensilica AI Platform, for decision making
- Xcelium Fault Simulation, for the most safety-critical functions (such as keeping human operators safe from robots they are operating in close quarters)
- Ethernet PHY and controller IP (or perhaps AWR RF Design for applications that use wireless connectivity)
- Cadence's cooperation with Dassault Systèmes, for collaboration on systems involving both chips and complex mechanical requirements, complex bills of materials (BoMs), and digital twins

Learn More
I should put a huge list of links here to all the tools mentioned above. But in my experience, you can find any of them easily by searching for "Cadence" plus the name of the tool. For example, searching for "Cadence Celsius" will immediately take you to the Celsius Thermal Solver product page as the first result (and it is not a sponsored link, just what the SEO pros call "organic search").

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.
Feb 2, 2023
On-Demand Webinar - How to Efficiently Mesh Complex Tip Blades with High Accuracy
Register only once to get access to all Cadence on-demand webinars.

Tip clearance flow is the flow that passes through the small clearance gaps between the rotating and stationary components of a turbomachine. The size and shape of these gaps are directly linked to the turbomachine's aerodynamic performance in terms of efficiency, operating range, vibrations, noise, etc. Accurately modeling and predicting this flow enables engineers to test many design alternatives and arrive at optimal solutions, and computational fluid dynamics (CFD) simulations are crucial for doing so. In this on-demand webinar, watch how we mesh a turbine blade with complex tip geometry in the Cadence Fidelity CFD Platform, capturing all its intricate geometric details for both the fluid and solid domains. We complete the analysis with a conjugate heat transfer (CHT) simulation using Fidelity CFD's pressure-based solver, which can simultaneously tackle the solution of the momentum, continuity, and energy equations.

[Figures: blade tip surface grid with ASM; blade tip CHT vs. adiabatic static temperature]

Learn how Fidelity CFD can simulate complex turbomachinery configurations while maintaining high-accuracy flow results and a quick turnaround time for a short engineering product cycle.
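For context, here is the standard textbook formulation of the problem a CHT solver of this kind addresses (this sketch is not taken from the webinar itself): the incompressible continuity, momentum, and energy equations in the fluid are coupled to transient heat conduction in the solid blade.

```latex
% Fluid domain: continuity, momentum, and energy (incompressible, constant properties)
\nabla \cdot \mathbf{u} = 0
\qquad
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
  = -\nabla p + \mu\,\nabla^2 \mathbf{u}
\qquad
\rho c_p\left(\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T\right)
  = \nabla\cdot\left(k_f \nabla T\right)

% Solid domain: transient heat conduction
\rho_s c_s \frac{\partial T_s}{\partial t} = \nabla\cdot\left(k_s \nabla T_s\right)
```

At the fluid-solid interface, the conjugate coupling enforces continuity of temperature, \(T = T_s\), and of heat flux, \(k_f\,\partial T/\partial n = k_s\,\partial T_s/\partial n\). This two-way thermal coupling is what distinguishes a CHT run from the adiabatic-wall case it is compared against.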
Feb 2, 2023

 

Chalk Talks Featuring Cadence

Faster, More Predictable Path to Multi-Chiplet Design Closure
The challenges of 3D IC design are greater than those of standard chip design, but they are not insurmountable. In this episode of Chalk Talk, Amelia Dalton chats with Vinay Patwardhan from Cadence Design Systems about the variety of challenges faced by 3D IC designers today and how Cadence's integrated, high-capacity Integrity 3D IC Platform, with its 3D design planning and implementation cockpit, flow manager, and co-design capabilities, can help you with your next 3D IC design.
Mar 3, 2022
40,993 views
Enabling Digital Transformation in Electronic Design with Cadence Cloud
With increasing design sizes, the complexity of advanced nodes, and faster time-to-market requirements, design teams are looking for scalability, simplicity, flexibility, and agility. In today's Chalk Talk, Amelia Dalton chats with Mahesh Turaga about the details of Cadence's end-to-end cloud portfolio, how you can extend your on-prem environment with the push of a button using Cadence's new hybrid cloud, and how Cadence's cloud solutions can help you from design creation to systems design and more.
Mar 3, 2022
41,063 views
Machine-Learning Optimized Chip Design -- Cadence Design Systems
New applications and technology are driving demand for even more compute and functionality in the devices we use every day. System on chip (SoC) designs are quickly migrating to new process nodes, and rapidly growing in size and complexity. In this episode of Chalk Talk, Amelia Dalton chats with Rod Metcalfe about how machine learning combined with distributed computing offers new capabilities to automate and scale RTL to GDS chip implementation flows, enabling design teams to support more, and increasingly complex, SoC projects.
Oct 14, 2021
42,872 views
Cloud Computing for Electronic Design (Are We There Yet?)
When your project is at crunch time, a shortage of server capacity can bring your schedule to a crawl. But, the rest of the year, having a bunch of extra servers sitting around idle can be extremely expensive. Cloud-based EDA lets you have exactly the compute resources you need, when you need them. In this episode of Chalk Talk, Amelia Dalton chats with Craig Johnson of Cadence Design Systems about Cadence’s cloud-based EDA solutions.
May 8, 2020
53,906 views

 

Featured Content from Cadence

featured video
How to Harness the Massive Amounts of Design Data Generated with Every Project
Long gone are the days when engineers imported text-based reports into spreadsheets and sorted the columns to extract useful information. Introducing the Cadence Joint Enterprise Data and AI (JedAI) platform, created from the ground up for EDA data such as waveforms, workflows, RTL netlists, and more. Using Cadence JedAI, engineering teams can visualize data and trends and implement practical design strategies across the entire SoC design for improved productivity and quality of results.
Nov 11, 2022
24,706 views
featured video
Get Ready to Accelerate Productivity and Time to Market with Overnight Chip-Level Signoff Closure
Is design closure taking too long? Introducing the Cadence Certus Closure Solution, a revolutionary new tool that optimizes and delivers overnight design closure for both the chip and subsystem. Learn how the solution supports the largest chip design projects with unlimited capacity while substantially improving productivity by up to 10X.
Oct 21, 2022
24,910 views
featured video
Butterfly Network Puts 3D Ultrasound on a Chip with Cadence
About two-thirds of the world's population lacks access to medical imaging, whether in developing nations or in first-world countries with underserved communities. Driven by a vision of improving the standard of healthcare around the world, Butterfly Network designs and makes the first 3D ultrasound imaging system and whole-body imager small enough to be carried in your pocket.
Oct 19, 2022
22,878 views
featured video
Leverage Big Data and AI to Optimize Verification and Productivity Across an Entire SoC
Optimize your verification workload, boost coverage, and accelerate root-cause analysis to reduce silicon bugs and accelerate your time to market with Cadence Verisium AI-Driven Verification. Learn how this generational shift from single-run, single-engine algorithms to algorithms that leverage big data and AI across multiple runs of multiple engines transforms an entire SoC verification campaign.
Oct 17, 2022
25,024 views