
Cadence is a pivotal leader in electronic design, building upon more than 30 years of computational software expertise. The company applies its underlying Intelligent System Design strategy to deliver software, hardware and IP that turn design concepts into reality.


Cadence Blog – Latest Posts

Scaling to One Million Cores on AWS
At CadenceLIVE Europe last year, Ludwig Nordstrom of AWS presented "Scaling to 1 Million+ Cores to Reduce Time to Results, with up to 90% Discount on Compute Costs." I think there are currently two trends in EDA infrastructure that cut across almost all design tools: adding AI and machine learning (ML) to tools, and switching from running on a single enormous server to running massively parallel in the cloud on fairly normal configurations. There is actually a play in AWS for AI/ML, too, for those design tools that can take advantage of GPUs, since AWS has instances with attached NVIDIA GPUs. But this presentation was more about scaling, with some practical advice about how to keep costs under control when you scale massively.

He started off by addressing why you might use the cloud in general, and AWS in particular, for EDA. In the front-end part of the design cycle, there are lots of jobs. For example, verification requires millions of simulation runs, many of which are quite short. The most extreme example is library characterization, where each standard cell crossed with each process corner is its own job. Each job is independent, apart from competing for the same resources, so it is comparatively straightforward to scale to enormous numbers of machines. In the back end, on the other hand, there are huge jobs. But under the hood, many (most) Cadence tools have been rearchitected to take advantage of large numbers of machines. For example, timing signoff of a large design using Tempus can scale. In fact, as Ludwig said, and I would agree with him, the cloud is becoming the standard signoff platform. It is also the standard physical verification platform for Pegasus. Flying horses in the cloud, or something like that.

One thing about the cloud is that compute hours are fungible. This means that running a job for 10 hours on 1,000 CPUs has the same cost as running on 10,000 CPUs for just one hour. For the front end (lots of small EDA jobs), this also scales.
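To make the fungibility arithmetic concrete, here is a minimal sketch, assuming a flat, hypothetical per-CPU-hour price (real AWS pricing varies by instance type, region, and pricing model):

```python
# Compute-hour fungibility: total cost depends on CPU-hours consumed,
# not on how they are split between machine count and wall-clock time.
ON_DEMAND_PRICE_PER_CPU_HOUR = 0.05  # hypothetical price, USD

def job_cost(cpus: int, hours: float,
             price: float = ON_DEMAND_PRICE_PER_CPU_HOUR) -> float:
    """Cost of running `cpus` CPUs for `hours` at a flat hourly price."""
    return cpus * hours * price

wide = job_cost(cpus=10_000, hours=1)    # massively parallel, done in an hour
narrow = job_cost(cpus=1_000, hours=10)  # same work spread over ten hours
assert wide == narrow  # same CPU-hours, same bill
print(wide)  # 500.0
```

The appeal for front-end workloads is exactly this: if the jobs parallelize, finishing ten times sooner costs nothing extra.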
The back end scales too, but not quite so nicely; Tempus or Pegasus, for example, doesn't scale completely linearly from 1 to 10,000 CPUs.

AWS has various levels of service, and they come at very different prices. At the most expensive end is "on-demand," where you pay for compute capacity by the second with no long-term commitments. At the other extreme are "spot instances," which come with savings of up to 90% off the on-demand prices. The instances are just the same, but the downside is that your jobs may be pre-empted at short notice. From AWS's point of view, this is a way of renting out spare capacity at a big discount, but with the ability to get the capacity back again if a higher-paying customer shows up. EDA workloads are set up to handle server failure, and being pre-empted and kicked off a server looks almost exactly the same. Spot instances come from spot pools: each instance family, in each instance size, in each availability zone, in each region, is a separate spot pool with a separate price. It might sound as if spot instances are unworkable, with capacity coming and going all the time, but, in fact, less than 5% of spot instances were interrupted in the previous three months. Of course, you can only use spot instances for workloads that can handle interruptions. Short-running jobs can simply be restarted if pre-empted. Long-running jobs require checkpoint-and-retry strategies, or something similar.

AWS also makes available Cyclone. This is not an AWS service; it is an open-source, community-supported solution. Cyclone is a high-performance HPC scheduler. It integrates with AWS Batch, EC2, and Slurm across AWS regions and on-prem to create superclusters. Cyclone lets customers leverage the 25 AWS regions along with on-prem capabilities to scale their compute clusters, bringing the benefits of global scale by diversifying across all spot pools globally.
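The checkpoint-and-retry strategy that long-running spot jobs need can be sketched as follows. The work-unit model and the deterministic pre-emption schedule are stand-ins for a real scheduler and real spot reclamation notices:

```python
# Sketch of a checkpoint-and-retry loop for a long job on spot instances.
# Work is divided into units; progress is checkpointed (e.g., to object
# storage) so a reclaimed instance only loses the current unit of work.

def run_job(total_units: int, preempt_after: list[int]) -> int:
    """Run `total_units` work units, checkpointing after each batch.

    `preempt_after` lists how many units each successive spot instance
    survives before being reclaimed; the job resumes from the last
    checkpoint on a fresh instance. Returns the number of attempts."""
    checkpoint = 0
    attempts = 0
    schedule = iter(preempt_after)
    while checkpoint < total_units:
        attempts += 1
        # Units this instance completes before being reclaimed (or finishing).
        budget = next(schedule, total_units)
        checkpoint = min(checkpoint + budget, total_units)  # persist progress
    return attempts

# Three pre-emptions before a final instance finishes the job:
assert run_job(total_units=100, preempt_after=[30, 30, 30]) == 4
```

The key design point is that pre-emption costs only the work since the last checkpoint, which is why a sub-5% interruption rate is an acceptable trade for a 90% discount.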
It is smart enough to prioritize regions with lower spot costs and will leverage the available capacity across all regions without having to retry jobs. Global scale lets you use the instance types that work best for your jobs and still get the scale you need.

Ludwig had an example from the Max Planck Institute, which provisioned 4,000+ EC2 instances to run 20,000 jobs, with up to 7 hours of runtime each, on spot for drug discovery, using Cyclone configured for six AWS regions. The result:

"Using more than 4,000 instances, 140,000 cores, and 3,000 GPUs around the globe, our simulation ensemble that would normally take weeks to complete on a typical on-premises cluster consisting of several hundred nodes, finished in about two days in the cloud."

So a few weeks become a few days. What's not to like?

Learn More

There are lots of Breakfast Bytes posts about scaling into the cloud. Here are just a few:

CadenceLIVE: Pegasus on AWS, Let Physical Verification Fly
CadenceLIVE: Characterizing Libraries with Liberate and CloudBurst
Barefoot in a CloudBurst: Tempus on 2000+ CPUs
Climbing Annapurna to the Clouds
AWS: Amazon's Own Experience with EDA in the Cloud
Liberate Trio on AWS/Graviton2 Instances

Sign up for Sunday Brunch, the weekly Breakfast Bytes email
Jan 31, 2023
Voltus Voice: Voltus-Innovus Integration Avoids Potential Power-Signoff Issues
Voltus™ IC Power Integrity Solution is a power integrity analysis and signoff solution that is integrated with Cadence's full suite of design implementation and signoff tools to deliver the industry's fastest design closure flow. The aim of this blog series is to broadcast the voices of different experts on how design engineers can effectively use the diverse Voltus technologies to achieve high performance, accuracy, and capacity for next-gen chip designs.

The age-old proverb "A stitch in time saves nine" highlights how preparation now yields significant savings later, or, in our field, power savings through digital design implementation. The more robust a power grid is in the early design stage, the less prone it is to crosstalk and IR drop later at the signoff stage. But how can this be achieved, and how can one design and optimize a power grid without actual floorplanning and routing? The Cadence Intelligent System Design strategy makes it possible by providing our customers a seamless design flow from place-and-route implementation all the way to silicon signoff.

The 'Chip-2-System Power Signoff' video series helps you understand how Voltus integrates with a wide breadth of key Cadence products to achieve faster system-level power integrity analysis and closure. The previous video in this series talked about Voltus integration with Sigrity technologies for robust co-analysis of chip, package, and board. In this video, you will be introduced to an integrated signoff closure flow involving a tight integration between Voltus and Innovus. Voltus, Cadence's power and rail signoff tool, couples with Innovus, the physical implementation tool, to ensure power analysis is consistent and convergent throughout the design flow.
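As a back-of-envelope illustration of why a more robust grid means less IR drop (this is plain Ohm's law, not any actual Voltus computation), adding a reinforcement stripe places resistances in parallel, lowering the effective grid resistance seen by a hotspot. The current and resistance values below are invented for illustration:

```python
# Static IR drop, back of envelope: V_drop = I * R_effective.
# Reinforcement stripes add parallel current paths, reducing R_effective.

def parallel(*resistances: float) -> float:
    """Effective resistance of resistors in parallel."""
    return 1.0 / sum(1.0 / r for r in resistances)

current = 0.5    # amps drawn by a hotspot (illustrative)
stripe_r = 2.0   # ohms along one power-stripe path (illustrative)

drop_before = current * stripe_r                      # single stripe
drop_after = current * parallel(stripe_r, stripe_r)   # reinforcement added

print(drop_before, drop_after)  # 1.0 0.5
assert drop_after < drop_before
```

The same intuition explains why fixing the grid early is cheap: adding a stripe in floorplanning is trivial, while finding room for one after routing is not.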
This integration provides design engineers with a unique solution called Innovus Power Integrity (also known as IR-Aware Full Flow) that connects the place-and-route implementation and IR drop analysis signoff steps. This approach pulls potential power signoff issues forward into the design implementation stage, allowing early feedback and avoiding various difficult and costly design fixes or changes at the signoff stage, yielding a clean power integrity signoff.

Several IR-drop-centric optimization technologies integrated in the Innovus Power Integrity solution are:

Power-Grid Reinforcement: Adds reinforcement PG stripes between existing PG stripes to fix the IR drop violations reported from Voltus
Early Rail Analysis on P&R Design: Analyzes power-grid integrity early in the floorplanning stage, after placement, as well as post-routing
IR Drop Aware Placement: Spreads high power density hotspots and applies padding to high current density instances
IR-Aware Clock Tree Synthesis: The SkewClock IR fixing method reduces peak current due to simultaneous register clocking

This Cadence solution for power integrity analysis empowers our customers to catch and fix power issues early. Do watch this second video of the 'Chip-2-System Power Signoff' video series to explore the solutions that the Innovus-Voltus integration offers at the early stages of a design lifecycle.

Related Resources

RAK: IR Aware Placement Using Innovus - Voltus
Video: Chip-2-System Power Signoff – Part 2: Voltus-Innovus Integration - YouTube
AppNote: Fixing IR Drop Violations Using IR Aware Full Flow
Blogs: Voltus Voice: Accelerate Power Signoff and Design Closure with this IR Aware Placement Technology; Voltus Voice: Accelerate Power Signoff and Design Closure with Targeted Local PG Addition

Tips and Resources

All Online Training courses are available for self-enrollment on the Cadence Learning and Support system.
If you are embarking on your learning journey with Cadence, here are some tips and resources to get you started:

To find information on how to get an account on the Cadence Learning and Support portal, see the Support page.
To view the recommended course flows as well as the complete learning plan, check out the Learning Map.
To explore other Training Bytes videos, go to Video Library > Learning > Training Bytes (Videos).
If you have questions about courses, schedules, online, public, or live onsite training, reach out to us at Cadence Training.
You can become Cadence Certified once you complete the course(s), and share your knowledge and certifications on social media channels. Go straight to the course exam at the Learning and Support Portal.

For more information on Cadence digital design and signoff products and services, visit the Cadence website.

About Voltus Voice

"Voltus Voice" showcases our product capabilities and features, and how they empower engineers to meet time-to-market goals for the complex, advanced-node SoCs designed for next-gen smart devices. In this monthly blog series, we also share important news, developments, and updates from the Voltus space. Subscribe to receive email notifications about our latest Digital Design blog posts.
Jan 31, 2023
The Chiplet Summit
Last week was the Chiplet Design Summit in San Jose. Actually, the organizers called it the First Annual Chiplet Design Summit. Since everything was oversubscribed — not enough chairs in the keynote ballroom, not enough box lunches — this doesn't seem all that arrogant. And in fact, the date for next year's summit has already been announced: it will be January 23rd to 25th, 2024, still at the DoubleTree (although I wouldn't be surprised if it gets moved to a bigger venue). I will cover some of the presentations in more detail in future blog posts, but today I will focus on a few themes that ran through many of the presentations. The conference was structured as a day of tutorials on Tuesday, then keynotes on Wednesday morning, with the rest of the conference consisting of several parallel technical tracks.

One obvious question is whether it was worth attending and, in particular, whether you should plan to attend next January. I thought it was excellent, especially the tutorial day, so "yes" would be my answer. If you have anything to do with system-on-chip (SoC) integration, where everything is on a single die, then you will get involved in chiplets in the future. That is not to say that there will be no monolithic integration anymore, but it is clear that for the most advanced nodes (3nm, 2nm, etc.), only the part of the design that will benefit from the most advanced process will be designed in that process, and everything else will be put onto chiplets in older nodes, often called N-1 or N-2 nodes (so for 3nm, N-1 is 5nm, and N-2 is 7nm).

A decade or two ago, every presentation about EDA and design started off with a generic Moore's Law graph (and often a generic "design gap" slide).
Well, Moore's Law may be dead or dying, but Gordon Moore is still the man to quote, even at a chiplet conference, because of what he wrote in the same Electronics Magazine article where he used four data points to predict that the number of transistors on a chip would double every couple of years. He actually said it would last for about ten years, but it lasted over 50. His other quote from that same article is:

"It may prove to be more economical to build large systems out of smaller functions, which are separately packaged and interconnected."

Well, after 50 years, that day has arrived. Yole Group forecasts the chiplet-based semiconductor market to be over $205B by 2032. Samsung Foundry estimated that over 50% of advanced node designs are chiplet-based.

The Story So Far

The biggest chips are now so large that they either exceed the maximum reticle size for manufacture, or they are so big that they simply will not yield well. One example pointed out during the summit was that four 10x10mm die yield 30% more good die than a single 20x20mm die. Pioneers in using chiplets to address this have often presented at HOT CHIPS in recent years, and Cisco's "Suds" Sudhakar revealed that Cisco has been working on chiplets for over a decade; it just didn't talk about it in public. The most public early interposer-based design was Xilinx's, which split a large FPGA into four smaller die on a silicon interposer. I think this was a proof of concept as much as anything. It happened before Breakfast Bytes started, but you can still read my 2013 post on the topic at Semiwiki.

The very latest news on this topic was the keynote by Lisa Su, AMD's CEO, at CES in January. She announced (and showed) the Instinct MI300.
As I said in the update post linked below:

"Make no mistake, the Instinct MI300 is a game-changing design - the data center APU blends a total of 13 chiplets, many of them 3D-stacked, to create a chip with twenty-four Zen 4 CPU cores fused with a CDNA 3 graphics engine and 8 stacks of HBM3. Overall the chip weighs in with 146 billion transistors, making it the largest chip AMD has pressed into production."

See my posts:

Xilinx and TSMC: Volume Production of 3D Parts (Semiwiki)
HOT CHIPS: Chipletifying Designs
HOT CHIPS Day 1: Hot Chiplets - Breakfast Bytes
HOT CHIPS Day 2: AI...and More Hot Chiplets
HOT CHIPS: Two Big Beasts
Linley: Chiplets for Infrastructure Silicon
Linley: Enabling Heterogeneous Integration of Chiplets Through Advanced Packaging with AMD/Xilinx
3D Heterogeneous Integration (3DHI)
CES 2023: AMD, Stellantis, Cadence, and More
January 2023 Update: Automotive Security, Chiplets...and Roman Emperors!

Companies like AMD and Intel have done fairly complex multi-chiplet designs. NVIDIA and Apple have both created designs where two big die are joined together using an interconnect bridge to make an even bigger design: Grace-Hopper in the case of NVIDIA, and Apple's M1 Ultra, which consists of two M1 Maxes. The thing that is common to all these chiplet-based designs is that they are done within a single company. The chiplets are designed to work together to build up a system, in many cases using proprietary interfaces. There is no technical sense, let alone commercial sense, in which, for example, someone other than AMD could use one of its chiplets.

One of the themes that ran through the summit is that everyone wants to be able to go to the chiplet store with their supermarket cart, pick whatever chiplets they want off the shelf, and then put together a system-in-package (SiP) that relies on them all working together. On the other hand, anyone who put forward any sort of timetable for this said "five to ten years."
The one big exception to this is HBM, high-bandwidth memory. Nobody builds their own HBM, but there is a market (the various generations of HBM have been standardized by JEDEC). One of the panel sessions on the last afternoon of the summit was How to Make Chiplets a Viable Market. It was standing room only, which never happens for a panel session. I will cover it in a separate post next week.

The intermediate case is that someone with a key chiplet, such as a processor, creates an ecosystem around it. In the panel session I just mentioned, Ventana said that it was doing just this, since its datacenter processor is available as a chiplet. A processor cannot stand on its own (it cannot boot an operating system, for a start), so it has to be surrounded by other chiplets to create a full system.

So the situation today is that single-company multi-chiplet designs are shipping in volume, tentative steps are being made with some chiplets to build ecosystems of partners around them, and the dream of a chiplet store is sufficiently far off as to remain a dream for the time being.

Why Chiplets?

The diagram above, from Denis Dutoit of CEA-List in Grenoble, shows one of the big motivations for using chiplets at the most advanced nodes. The straight diagonal line shows Moore's Law on the assumption that it applies equally to logic, memory, and analog. The line that flattens out shows how scaling really works. Analog doesn't scale much, if at all, and memory scales much more slowly than logic. Indeed, it is unclear if 3nm memory is going to be any smaller than 5nm memory, which is the ultimate lack of scaling. When scaling operates like that, moving analog and large memories into the latest process node gains little in area and costs a lot more. The obvious response is, "Well, don't do that," and the way to not do that is to put the memory and analog on separate chiplets manufactured in less advanced processes (so potentially much cheaper).
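Part of the economics is yield. The summit example quoted earlier — four 10x10mm die out-yielding a single 20x20mm die — falls out of a simple Poisson yield model, Y = exp(-D*A). The defect density below is an illustrative assumption, not the number behind the quoted 30% figure:

```python
import math

# Poisson yield model: probability a die is defect-free is exp(-D * A),
# where D is defect density (defects per mm^2) and A is die area (mm^2).

def die_yield(defect_density: float, area_mm2: float) -> float:
    return math.exp(-defect_density * area_mm2)

D = 0.001  # defects per mm^2 (illustrative assumption)
big = die_yield(D, 400.0)    # one 20x20 mm die
small = die_yield(D, 100.0)  # one 10x10 mm die

# Good silicon per unit wafer area: four small die vs. one big die.
# The ratio is exp(D * 300), so it grows fast with defect density.
advantage = small / big
print(round(advantage, 3))  # 1.35
assert small > big
```

The intuition: a single defect kills a whole die, so cutting the design into quarters means a defect only kills a quarter of the silicon.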
For example, AMD's famous Zen 2 SiP has a varying number of processor chiplets (in 7nm, I believe) and an I/O die built in 12nm FD-SOI. Another reason for putting I/O onto a separate chiplet when designing in a very advanced node is to avoid having test chips for the SerDes (Ethernet, PCIe, etc.) on the critical path. If you put a SerDes on the most advanced node, you have to build a test chip and characterize the silicon before the real chip can tape out. It is much easier to use a SerDes that already exists and has seen silicon in an older node, or even, like AMD, in a completely different process technology.

Chiplet Connectivity

There are a number of interconnect standards (as well as some proprietary ones). For chiplet-based designs that are in progress, it seems that most of them use the Open Compute Project's (OCP's) Bunch of Wires, or BoW. The other standard, which has a lot of heavyweights behind it, is UCIe. For more details, see my post when it was announced, Universal Chiplet Interconnect Express (UCIe), and then when we announced our product, UCIe PHY and Controller—To Die For. There is a product page for the PHY and controller, which contains a summary of capabilities:

"The UCIe physical layer includes the link initialization, training, power management states, lane mapping, lane reversal, and scrambling. The UCIe controller includes the die-to-die adapter layer and the protocol layer. The adapter layer ensures reliable transfer through link state management and parameter negotiation of the protocol and flit formats. The UCIe architecture supports multiple standard protocols such as PCIe, CXL, and streaming raw mode."

Some aspects of the UCIe standard are still in development, but I would say the received wisdom at the summit was that "UCIe will win once it is finished," given all the companies that are behind it.

Chiplet Marketplace

I will cover this in a separate post next week.

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.
Jan 30, 2023
Limitless Export Formats with Fidelity Pointwise CAE Plugins
Fidelity Pointwise generates two-dimensional (2D) and three-dimensional (3D) grids that are crucial for most of the computer-aided engineering (CAE) analysis process. Grids are exported to analysis software to simulate a design; therefore, the grid and boundary condition details must be formatted so that the simulation software can read them. Each CAE program has its own unique file format. While some programs might use cell faces as their primary data type, others use cell volumes or nodes. Some put all their data into a single file, and others want separate files for nodes, faces, cells, and boundary conditions. Fidelity Pointwise currently exports 47 different CAE file formats and a variety of neutral file formats. Those 47 CAE formats are not the only ones; there are plenty of other formats out there!

Need for CAE Plugins

CAE plugins allow an exporter to be added to Fidelity Pointwise that writes grid and boundary condition information in user-specific formats. The complete CAE exporter is placed in Fidelity Pointwise's plugin folder and is automatically loaded when Fidelity Pointwise starts up. The CAE software name appears as a selection in the CAE, Select Solver panel (as shown in Figure 1). The available boundary condition types appear in the CAE, Set Boundary Conditions panel. When the grid is exported using the File, Export, CAE menu, it is written in the native format defined by the selected plugin.

Figure 1. The CAE menu provides options and tools for exporting your grid.

Plugins are useful if a proprietary file format is not supposed to be shared with the general public: share the plugins only with the users who are allowed to access them. Plugins are also useful for developers with their own CAE program and format, who can write their native files using Fidelity Pointwise.

Create a CAE Plugin

Developing a plugin is not for everyone. It requires some programming skills and a development environment on your computer.
An overview of the plugin development process is given in the Fidelity Pointwise User Manual. If Fidelity Pointwise is already installed, the user manual can be accessed through the Help > User Manual menu. This manual provides an overview of the plugin development process, information about where to download the Pointwise CAE Plugin SDK (Software Development Kit), and a walkthrough of an example plugin setup. To create a CAE plugin, refer to the Another Fine Mesh blogs below:

Create a CAE Plugin - Part I
Create a CAE Plugin - Part II

Interoperability

One of the best things about Fidelity Pointwise is that it fits well into any company's analysis process. Pointwise reads and writes a variety of geometry and grid formats, so you can import files from other design and analysis tools and export them to whatever software is used for the next step in the process. Fidelity Pointwise CAE plugins are an important part of this versatility, since they make it easier to support a limitless number of CAE export formats. It's just another way in which Fidelity Pointwise works better for all your design and analysis needs. If you want to generate meshes using Fidelity Pointwise and export them to a CFD solver using Fidelity Pointwise CAE Plugins, request a demo today!

Watch the video below to learn how to export your grid to supported non-native formats:
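To illustrate the format-variety problem a plugin exporter solves, here is a toy Python sketch (not the actual Pointwise CAE Plugin SDK, which is used from the plugin's own development environment): the same tiny grid is written once as a single file and once split into separate node and cell files, which is exactly the kind of layout difference between solvers:

```python
# Toy illustration of two export layouts for the same grid: a single
# JSON file versus separate plain-text node and cell files. The file
# formats here are invented for illustration only.
import json
import os
import tempfile

grid = {
    "nodes": [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
    "cells": [(0, 1, 2)],  # one triangle, by node index
}

def export_single_file(grid: dict, path: str) -> None:
    """Everything in one file, for solvers that want it that way."""
    with open(path, "w") as f:
        json.dump(grid, f)

def export_split_files(grid: dict, node_path: str, cell_path: str) -> None:
    """Separate node and cell files, for solvers that want them apart."""
    with open(node_path, "w") as f:
        for x, y in grid["nodes"]:
            f.write(f"{x} {y}\n")
    with open(cell_path, "w") as f:
        for cell in grid["cells"]:
            f.write(" ".join(map(str, cell)) + "\n")

d = tempfile.mkdtemp()
export_single_file(grid, os.path.join(d, "grid.json"))
export_split_files(grid, os.path.join(d, "grid.nodes"), os.path.join(d, "grid.cells"))
print(sorted(os.listdir(d)))  # ['grid.cells', 'grid.json', 'grid.nodes']
```

A real plugin does the same job against the solver's actual specification, with boundary conditions handled alongside the connectivity.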
Jan 30, 2023
Sunday Brunch Video for 29th January 2023

Made at Steamers, Los Gatos (camera Carey)

Monday: Design Enablement of 2D/3D Thermal Analysis and 3-Die Stack
Tuesday: IEDM: TSMC N3 Details
Wednesday: Technology and the American Trucking Industry
Thursday: HPSC: RISC-V in Space
Friday: January 2023 Update: Automotive Security, Chiplets...and Roman Emperors!

Featured Post: FMEDA-driven SoC Design of Safety-Critical Semiconductors
Jan 29, 2023
January 2023 Update: Automotive Security, Chiplets...and Roman Emperors!
Wow, it's already the last Friday in January, so it's time for one of my monthly update posts, where I cover anything that doesn't justify its own full post or that is an update to something I wrote about earlier.

Automotive Security

I have written about automotive security quite a bit. Here are a few posts:

IEEE Computer Society: Automotive Cybersecurity
Automotive Security: A Hacker's Eye View
Have You Heard of ISO 21434? You Will

Firstly, don't confuse automotive security with automotive safety, things like functional safety (FuSa) and ISO 26262. You need security to have safety, but security is its own thing. In a modern connected car, there are two places for security vulnerabilities. One is in the car itself. The other is back at base in the automotive manufacturer's (OEM in the jargon) datacenters, which the cars are connected to. Well, it turns out automotive manufacturers are not very good at security in either place. The title of this blog post by Sam Curry pretty much says it all: Web Hackers vs. The Auto Industry: Critical Vulnerabilities in Ferrari, BMW, Rolls Royce, Porsche, and More. I think they chose those brands for the title because it makes for a more dramatic title than using Kia and Acura, but lots of mainstream brands are on the list too. He opens with an anecdote about why they decided to pentest automotive security:

"While we were visiting the University of Maryland, we came across a fleet of electric scooters scattered across the campus and couldn't resist poking at the scooter's mobile app. To our surprise, our actions caused the horns and headlights on all of the scooters to turn on and stay on for 15 minutes straight."

That sort of thing is like a red rag to a security researcher bull:

"[We] became super interested in trying to [find] more ways to make more things honk. We brainstormed for a while, and then realized that nearly every automobile manufactured in the last 5 years had nearly identical functionality."
"If an attacker were able to find vulnerabilities in the API endpoints that vehicle telematics systems used, they could honk the horn, flash the lights, remotely track, lock/unlock, and start/stop vehicles, completely remotely. At this point, we started a group chat and all began to work with the goal of finding vulnerabilities affecting the automotive industry. Over the next few months, we found as many car-related vulnerabilities as we could. The following writeup details our work exploring the security of telematic systems, automotive APIs, and the infrastructure that supports it."

Most of the rest of the piece is a detailed description of the security vulnerabilities they found. The ones listed in the blog post title are not even the most severe, and lots of more mainstream manufacturers than Ferrari and Rolls Royce were vulnerable. To give you an idea of how serious these issues are, here's just one of the entries in the post:

Kia, Honda, Infiniti, Nissan, Acura
Fully remote lock, unlock, engine start, engine stop, precision locate, flash headlights, and honk vehicles using only the VIN number
Fully remote account takeover and PII disclosure via VIN number (name, phone number, email address, physical address)
Ability to lock users out of remotely managing their vehicle, change ownership
For Kia specifically, we could remotely access the 360-view camera and view live images from the car

The VIN is the "vehicle identification number." At least here in the US, it is usually (always?) on a little embossed plate just behind the windscreen, visible to anyone from outside the vehicle.

Also, the airline industry is just as bad. I won't go into the details, but the title of this post says it all: how to completely own an airline in 3 easy steps and grab the TSA nofly list along the way. By the way, the mainstream press has been reporting that the nofly list was kept in an Excel .csv file.
I think it is much more likely that the nofly list is kept in a database that was not breached, but for some reason, someone dumped the list into a .csv file to do some analysis in Excel, and it was that file that was compromised.

AMD's 146B Transistor Processor

At CES, the evening-before-the-first-day keynote was by Lisa Su, CEO of AMD. I wrote it up in my post CES 2023: AMD, Stellantis, Cadence, and More. One of the products she announced, holding up the chip, was the AMD Instinct MI300 Data Center APU. Paul Alcorn of Tom's Hardware got some time with AMD and managed to take some (not entirely successful) photos of it. Here's a succinct description of the design:

"Make no mistake, the Instinct MI300 is a game-changing design - the data center APU blends a total of 13 chiplets, many of them 3D-stacked, to create a chip with twenty-four Zen 4 CPU cores fused with a CDNA 3 graphics engine and 8 stacks of HBM3. Overall the chip weighs in with 146 billion transistors, making it the largest chip AMD has pressed into production."

There's lots more in the article. See AMD Instinct MI300 Data Center APU Pictured Up Close: 13 Chiplets, 146 Billion Transistors. It's not quite up to Ponte Vecchio's level, with 47 chiplets, but that is only (!) 100 billion transistors. 3D heterogeneous integration is clearly the way of the future. See any number of previous posts.

While on the subject of chiplets, this week was the Chiplet Summit in San Jose. I went along, and I will write up some posts on the topic in February.

Robots

I dropped Boston Dynamics' latest video into my preview of DesignCon, since it is giving one of the keynotes. A day later, it came out with a new video. Perhaps more interesting to us engineers is the second video, about how they made the first one.

Energy

As you know, I'm very critical of journalists in how they cover many things, especially energy. My pet peeve is when articles confuse kW (a flow) and kWh (a quantity).
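The distinction is a single multiplication: power (kW, a rate) times time (hours) gives energy (kWh, an amount). A quick sketch, with an illustrative home-charger rating:

```python
# kW is a rate (power); kWh is an amount (energy): energy = power * time.
charger_power_kw = 7.4  # illustrative home EV charger rating
charging_hours = 8.0    # overnight charge

energy_kwh = charger_power_kw * charging_hours
print(energy_kwh)  # 59.2 kWh delivered by a 7.4 kW flow over 8 hours
```

Reporting a battery in kW, or a power station in kWh, is exactly the income-versus-wealth confusion.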
My most recent rant on the topic was in my post Moss Landing. Journalists writing on economic topics often make the same mistake when they confuse income (a flow) and wealth (a quantity). If you have a lot of wealth, you are rich. If you have a lot of income, you can become rich.

If you read the mainstream press, you might assume that we are not far from a net-zero world and will have no problem getting there by 2050. I don't believe it, and you can read more details about why in my post Earth Day: What Will It Take to Get to Carbon Neutrality by 2050? OurWorldInData recently published a graph showing the trends from 1960 to the present day, and it is quite sobering. See How have the world's energy sources changed over the last two centuries? If you go to that website, the graph is actually interactive; below is just a download of the entire graph as a single image.

Western countries may be shutting down coal-fired power stations (although Germany is reopening coal mines since it is desperate for energy), but China and India are not. In fact, it is probably still the case that your electric car is being charged with fossil-fuel-generated electricity. Yes, your Tesla runs on coal. But no matter where the energy comes from, using less of it is good. For a start, it saves money for the user, especially important if the user is you. If we are talking about electronics, Cadence is at the forefront of tools to analyze and optimize energy and power (which is just energy over time).

The Bathtub Curve for Roman Emperors

Do you know what a bathtub curve is? If not, then read my post, Automotive Reliability: The Bathtub Curve. But it applies to many things. New products often have some problems with how they were put together. Then the product is fine for years. Then things start to wear out. This applies to lots of products, such as cars, or electronics at both the level of things like smartphones and at the level of individual transistors (yes, transistors wear out).
And even human beings. It applies to Roman emperors, too. It turns out 62% of Roman emperors suffered a violent death...and it follows a bathtub curve. Here's a paper on the subject: Statistical reliability analysis for a most dangerous occupation: Roman emperor.

"Popular culture associates the lives of Roman emperors with luxury, cruelty, and debauchery, sometimes rightfully so. One missing attribute in this list is, surprisingly, that this mighty office was most dangerous for its holder. Of the 69 rulers of the unified Roman Empire, from Augustus (d. 14 CE) to Theodosius (d. 395 CE), 62% suffered violent death ... This work adopts the statistical tools of survival data analysis to an unlikely population, Roman emperors, and it examines a particular event in their rule, not unlike the focus of reliability engineering, but instead of their time-to-failure, their time-to-violent-death. ... Nonparametric and parametric results show that: (i) emperors faced a significantly high risk of violent death in the first year of their rule, which is reminiscent of infant mortality in reliability engineering; (ii) their risk of violent death further increased after 12 years, which is reminiscent of wear-out period in reliability engineering; (iii) their failure rate displayed a bathtub-like curve, similar to that of a host of mechanical engineering items and electronic components."

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.
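The bathtub shape the paper describes can be sketched as a piecewise-constant hazard function. The breakpoints (year 1, year 12) come from the abstract quoted above; the rate values themselves are invented for illustration:

```python
# Piecewise-constant sketch of a bathtub hazard curve: high early
# ("infant mortality"), low and flat in the middle, rising again at
# wear-out. Rates are illustrative, not fitted to the paper's data.

def hazard(years_in_office: float) -> float:
    """Instantaneous risk of violent death per year (illustrative)."""
    if years_in_office < 1:      # early phase: infant mortality
        return 0.30
    elif years_in_office < 12:   # stable middle of the reign
        return 0.05
    else:                        # wear-out phase
        return 0.20

# The defining bathtub property: both ends sit above the middle.
assert hazard(0.5) > hazard(6)
assert hazard(15) > hazard(6)
```

The same three-regime shape is what reliability engineers fit to field-failure data for cars, smartphones, and transistors.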
Jan 27, 2023
Training Insights Webinar: Solve Tricky SVA Problems with Jasper Visualize and WaveEdit – Recording Now Available
Are you experienced in using SVA? It's been around for a long time, and it's tempting to think there's nothing new to learn. Have you ever come across situations where SVA can't solve what appears to be a simple problem? What if you wanted to code an assertion that a signal rises at any time, but once it has risen, it stays high forever? What if you wanted to cover the same thing? Would the same properties work? What if you wanted to code an assumption that, eventually, no new requests are ever made on an input again? Are SVA local variables useful for anything? Do they do anything that you couldn't code in another way? How would you know? Sure, you can refer to the SystemVerilog Language Reference Manual IEEE-1800, but how do you know your interpretation is correct? How would you know if the test case you created covers all scenarios? The good news is that you don't need to guess! All these questions can be answered by the Jasper Visualize Interactive Debug Environment, with close to zero effort. Just like your own SVA questions. You'll also have full confidence that your test case is correct per the LRM and has been exhaustively checked using formal techniques. Watch the recording of the webinar with Cadence's Sr. Principal Education Application Engineer Mike Avery, who demonstrates how Jasper Visualize and WaveEdit provide an easy-to-use vehicle so you can get the answers to your own questions, quickly and easily. You can access the video by using your Cadence Support account (email ID and password). If you don't have a Cadence Support account, go to Registration Help or Register Now, and complete the requested information. Solve Tricky SVA Problems with Jasper Visualize and WaveEdit (Webinar) This video also includes some new features we think you'll enjoy: Menu (TOC), CC (enabled by default), and Speed Rate. Another helpful feature is the Search function, available under the TOC icon.
Search for any word contained in the complete audio transcript: you'll see the word in context, make your selection, and jump to that part of the video. Want to learn more? We can recommend the following trainings: Jasper Formal Fundamentals v21.09 (Online) and Jasper Formal Expert v22.09 (Online). Please note that there are also Digital Badges available: Jasper Formal Fundamentals v21.09 (Badge Exam) and Jasper Formal Expert v22.09 (Badge Exam). We can also organize these trainings for you as instructor-led training, either live or blended. Please reach out to us at Cadence Training if you are interested. You might also want to check out the Jasper University! Want to stay up to date on webinars and courses? Subscribe to Cadence Training emails. Related Blogs: PACMAN and Using Jasper for Security Verification, Jasper User Group 2022: Ziyad's SOTU, Jasper C2RTL App for Datapath Verification, Training Insights - Embracing Datapath Verification with Jasper C2RTL App
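As a taste of the first puzzle above, asserting that once a signal has risen it stays high forever, here is a minimal SVA sketch. The names clk and sig are hypothetical, and this is just one possible coding, not necessarily the one shown in the webinar:

```systemverilog
// Inductive form: if sig is high in one cycle, it must still be high
// in the next cycle -- by induction, once high it stays high forever.
assert property (@(posedge clk) sig |=> sig);

// Equivalent IEEE 1800-2009 form using the unbounded 'always' operator:
assert property (@(posedge clk) $rose(sig) |-> always sig);
```

Covering the same behavior is where it gets tricky: no finite waveform can witness "high forever," which is exactly the kind of question a formal tool such as Jasper Visualize can answer exhaustively.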
Jan 26, 2023
Mimi Is Creating a Sustainable Audio Experience
According to Mimi’s database of 2.25 million hearing tests, over 55% of the adult population has some form of hearing loss. To combat this growing global crisis, the Mimi team has created a flexible software kit that provides hearing testing and digital audio processing to personalize audio to every user’s unique hearing. Lending a helping hand (and an ear) are Cadence’s Tensilica HiFi DSPs. What’s unique about Mimi's technology is that it doesn't change the sound or music by modifying the bass or signal. Rather, the team stays true to the artist's intention by simply compensating for parts of the auditory system that may not be functioning properly. They look at the holistic function of the auditory system and bring back information, not just volume. When it comes to sound personalization, the Mimi team has taken a hearing science-based approach, measuring every user’s unique hearing and adjusting the audio signal accordingly. Their hearing scientists and engineers have built easy-to-integrate solutions that their partners can add to their products to complement their own audio features. Additionally, when a user takes a hearing test, a unique Mimi hearing ID is created, which can be used across multiple audio devices, whether that’s a pair of headphones or a TV. For this cross-platform system, Mimi works with Tensilica HiFi DSPs to reach many devices, since the HiFi DSP is already present on many platforms. Philipp Skribanowitz, Mimi’s co-founder and CEO, said “We are building a whole suite of hearing well-being features and bringing that to more users through the very scalable platform that Cadence provides.” “Designed with Cadence” is a series of videos that showcases creative products and technologies that are accelerating industry innovation using Cadence tools and solutions. For more Designed with Cadence videos, check out the Cadence website and YouTube channel.
Jan 26, 2023


Chalk Talks Featuring Cadence

Faster, More Predictable Path to Multi-Chiplet Design Closure
The challenges for 3D IC design are greater than for standard chip design, but they are not insurmountable. In this episode of Chalk Talk, Amelia Dalton chats with Vinay Patwardhan from Cadence Design Systems about the variety of challenges faced by 3D IC designers today and how Cadence’s integrated, high-capacity Integrity 3D IC Platform, with its 3D design planning and implementation cockpit, flow manager, and co-design capabilities, can help you with your next 3D IC design.
Mar 3, 2022
Enabling Digital Transformation in Electronic Design with Cadence Cloud
With increasing design sizes, the complexity of advanced nodes, and faster time-to-market requirements, design teams are looking for scalability, simplicity, flexibility, and agility. In today’s Chalk Talk, Amelia Dalton chats with Mahesh Turaga about the details of Cadence’s end-to-end cloud portfolio, how you can extend your on-prem environment with the push of a button using Cadence’s new hybrid cloud, and how Cadence’s cloud solutions can help you from design creation to systems design and more.
Mar 3, 2022
Machine-Learning Optimized Chip Design -- Cadence Design Systems
New applications and technology are driving demand for even more compute and functionality in the devices we use every day. System on chip (SoC) designs are quickly migrating to new process nodes, and rapidly growing in size and complexity. In this episode of Chalk Talk, Amelia Dalton chats with Rod Metcalfe about how machine learning combined with distributed computing offers new capabilities to automate and scale RTL to GDS chip implementation flows, enabling design teams to support more, and increasingly complex, SoC projects.
Oct 14, 2021
Cloud Computing for Electronic Design (Are We There Yet?)
When your project is at crunch time, a shortage of server capacity can bring your schedule to a crawl. But for the rest of the year, having a bunch of extra servers sitting around idle can be extremely expensive. Cloud-based EDA lets you have exactly the compute resources you need, when you need them. In this episode of Chalk Talk, Amelia Dalton chats with Craig Johnson of Cadence Design Systems about Cadence’s cloud-based EDA solutions.
May 8, 2020


Featured Content from Cadence

featured video
How to Harness the Massive Amounts of Design Data Generated with Every Project
Long gone are the days when engineers imported text-based reports into spreadsheets and sorted the columns to extract useful information. Introducing the Cadence Joint Enterprise Data and AI (JedAI) platform, created from the ground up for EDA data such as waveforms, workflows, RTL netlists, and more. Using Cadence JedAI, engineering teams can visualize data and trends and implement practical design strategies across the entire SoC design for improved productivity and quality of results.
Nov 11, 2022
featured video
Get Ready to Accelerate Productivity and Time to Market with Overnight Chip-Level Signoff Closure
Is design closure taking too long? Introducing the Cadence Certus Closure Solution, a revolutionary new tool that optimizes and delivers overnight design closure for both the chip and subsystem. Learn how the solution supports the largest chip design projects with unlimited capacity while improving productivity by up to 10X.
Oct 21, 2022
featured video
Butterfly Network Puts 3D Ultrasound on a Chip with Cadence
About two-thirds of the world’s population lacks access to medical imaging, whether in developing nations or in first-world countries with underserved communities. Driven by a vision of improving the standard of healthcare around the world, Butterfly Network designs and makes the first 3D Ultrasound imaging system and whole-body imager that's small enough to be carried in your pocket.
Oct 19, 2022
featured video
Leverage Big Data and AI to Optimize Verification and Productivity Across an Entire SoC
Optimize your verification workload, boost coverage, and accelerate root cause analysis to reduce silicon bugs and accelerate your time to market with Cadence Verisium AI-Driven Verification. Learn how this generational shift moves from single-run, single-engine algorithms to algorithms that leverage big data and AI across multiple runs of multiple engines throughout an entire SoC verification campaign.
Oct 17, 2022