
The Clouds Converge

We’ve been watching the cloud computing space, and Synopsys has been playing a visible role in exploring the public cloud as a source of elastic compute for simulation. We took a brief look at an attempt they made to demo the cloud at DAC, an attempt that was sabotaged by one of their own hard drives.

So yesterday I got a make-up session from Alex Seibulescu, a Synopsys senior staff engineer. It was also useful in that we weren’t under the time pressure that a quick DAC presentation imposes, so, in addition to showing the simulation run, he was able to delve a bit more into the management of the cloud resources.

The demo consisted of a script executing on a Synopsys computer. In fact, one of their security features involves establishing an IP address from which the cloud can be manipulated. This was Synopsys’s external IP address, so any attempts to run the script from some other IP address would be rejected.

The script is, for the most part, just like any other script. It involves secure shell (SSH) and secure copy (SCP) commands as well as Synopsys’s SC2 command.

The SC2 command is the catch-all command that wraps all of the underlying activities required to manage the Amazon resources. It’s very high-level, making all of the Amazon details opaque. There are four options you can use with the SC2 command: create, query, modify, and delete. Each produces log output that can be piped into a file; that file can then be searched for information. For instance, when “create” is used to build a new cluster, the resulting log file can be grepped to extract the cluster ID number, which is required for further commands on that cluster.
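To make that concrete, here’s a minimal sketch of the create-and-grep step. The demo didn’t show the literal SC2 syntax, so the flags, log wording, and file names below are my assumptions; only the flow (create a cluster, capture the log, grep out the cluster ID for later commands) follows what was described.

    #!/bin/bash
    # Hypothetical sketch only: the real SC2 flags and log format are
    # Synopsys-specific and weren't shown verbatim in the demo.

    # Ask SC2 to build a new cluster and capture its log output in a file.
    sc2 create -config my_cluster.cfg > create.log 2>&1

    # Pull the cluster ID out of the log; every later query/modify/delete
    # command for this cluster needs it.
    CLUSTER_ID=$(grep -i "cluster id" create.log | awk '{print $NF}')
    echo "Working with cluster ${CLUSTER_ID}"

    # Later commands then reference the same ID, for example:
    #   sc2 query  -cluster ${CLUSTER_ID}
    #   sc2 modify -cluster ${CLUSTER_ID} ...
    #   sc2 delete -cluster ${CLUSTER_ID}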

His script created a cluster, got the regression suite going, and then did some dynamic resource balancing on the fly. At the halfway point of the time allotted for the suite to complete, the script checked progress; if less than 50% of the tests had finished (which, of course, was the case), more cores were added.
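In script form, that balancing step might look something like this. The progress query and the “-add_cci” flag are my own invention, since the demo didn’t expose the literal commands; only the control flow, checking completion at the halfway mark and growing the cluster if the suite is behind, matches what the script did.

    # Hypothetical sketch of the mid-run check; command names, flags, and
    # the log wording are assumptions. Only the control flow mirrors the demo.
    TOTAL=500    # number of tests in the regression suite (made-up figure)

    # Ask the cluster how many tests have finished so far.
    DONE=$(sc2 query -cluster ${CLUSTER_ID} | grep -i "tests completed" | awk '{print $NF}')

    # At the halfway point of the allotted time, being less than half done
    # means the run is behind schedule, so add another block of worker cores.
    if [ $(( DONE * 2 )) -lt ${TOTAL} ]; then
        sc2 modify -cluster ${CLUSTER_ID} -add_cci 32
    fi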

The concept of a “cluster” is a master node, a license server, and then zero to some large number (hundreds; theoretically, limited only by what Amazon owns) of worker nodes. Each core on a worker node is considered an instance (a CCI); when you reserve cores, you reserve an entire machine (the machines are not “multi-tenanted” so you’re not sharing with anyone else). The machines have 8 cores, so you have to allocate CCIs in multiples of 8.

For the example, he started with 8 CCIs and then added 32 partway through to finish faster. That’s a simplistic approach: because this can all be done programmatically, you could, for example, use how far behind schedule you were to calculate the right number of CCIs to add (rather than just adding a fixed 32).
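A sketch of that smarter version might size the top-up from the measured shortfall instead of hard-coding 32, rounding up to a multiple of 8 because whole 8-core machines are the unit of allocation. The arithmetic is the point here; the numbers and the sc2 invocation are again hypothetical.

    # Hypothetical: scale the number of added CCIs to how far behind we are.
    # All numbers are illustrative.
    ELAPSED=50         # percent of the allotted time used so far
    PROGRESS=30        # percent of the regression suite actually finished
    CURRENT_CCI=8      # cores currently allocated

    # Cores needed to finish the remaining work in the remaining time,
    # assuming throughput scales roughly linearly with core count.
    NEEDED=$(( CURRENT_CCI * (100 - PROGRESS) * ELAPSED / (PROGRESS * (100 - ELAPSED)) ))

    # Round the addition up to a whole number of 8-core machines.
    ADD=$(( NEEDED - CURRENT_CCI ))
    ADD=$(( (ADD + 7) / 8 * 8 ))

    # With these numbers: NEEDED=18, so ADD=10, rounded up to 16 CCIs.
    [ "${ADD}" -gt 0 ] && sc2 modify -cluster ${CLUSTER_ID} -add_cci ${ADD}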

Each computer is a Linux SMP box, with jobs allocated by a load-sharing program. Synopsys provides SGE (Sun Grid Engine), which is free; or, if you have LSF licenses, you can use LSF as well.
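For what it’s worth, on the SGE side a submission is just the usual qsub; the test script name below is made up, and the exact integration on the master node is something I’m inferring rather than something Synopsys spelled out.

    # Hypothetical: farm regression tests out through the grid engine on the
    # master node; qsub is SGE's submit command (bsub is LSF's equivalent).
    for test in test_001 test_002 test_003; do
        qsub -cwd -b y ./run_vcs_test.sh ${test}
    done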

Once the job completed, the script tarred up the results and downloaded them; the cluster was then destroyed. If the cluster were going to be used again soon, it could be left up, with all worker nodes de-allocated. That would allow all of the configuration, design, and results data to remain in the cluster. Once the cluster is destroyed, all vestiges of the session disappear (meaning it’s critical to make sure you’ve downloaded your results before destroying the cluster).
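The wrap-up maps onto ordinary commands plus one final SC2 call; the host name and paths below are placeholders rather than the demo’s actual script, and the order matters because the delete wipes everything.

    # Hypothetical sketch of the end-of-run cleanup. Results must be copied
    # back BEFORE the delete, since destroying the cluster erases all data.
    ssh user@${MASTER_NODE} "tar czf /tmp/results.tar.gz /work/regression/results"
    scp user@${MASTER_NODE}:/tmp/results.tar.gz ./results.tar.gz

    # Tearing down the cluster removes configuration, design data, and results.
    sc2 delete -cluster ${CLUSTER_ID}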

As the job was running, Alex was able to go to a website to check the progress of the run. There was lots of information – in fact, more than a typical user really needs, although at this stage the extra detail is useful for debugging.

We also discussed versioning. There’s a version of VCS that a user will get by default; if a different version is needed, the customer can work with Synopsys to make that version available. Synopsys also doesn’t upgrade versions on the fly: any upgrades would typically be initiated by the customer, most likely after that customer has upgraded their own installations to a new version. So there’s no chance that a version would change in the middle of a project.
