
Machine-2-Machine Conference Call

Technology is supposed to make us more efficient, but one of the great time-wasters of all time, the meeting, has yet to disappear. OK, ok, I know… Meetings can be useful for communication, and communication is increasingly important as people get busier and busier and have no time to communicate. So a well-planned, well-executed, to-the-point meeting can be a good thing.

What has changed is the need for people to be physically present: conference calls have taken the place of face-to-face meetings in many areas, facilitating communication without the overhead of having to go somewhere else. Those meetings typically consist of an organizer who sets things up and participants who then call in.

Picture, then, such calls with two additional features (ones that aren’t outside the realm of possibility): rather than having participants call in, the system (at the organizer’s direction) calls out to bring the participants in. And, a bit tougher, picture translators for those lines going to foreign countries so that those participants can speak in their own native tongue, relying on the translator to bridge the gap.

Finally, imagine that the organizer and participants aren’t, in fact, people, but are servers executing a variety of different design and simulation tools in a variety of different disciplines as necessary for implementing a system design involving widely disparate domains. Like, say, an electric airplane brake subsystem, which requires embedded software executing on its hardware platform, a motor operating under the influence of electromagnetic fields, and the rotational mechanics of the wheel, all of which need to work together.

And so we arrive at the world of Mentor’s SystemVision conneXion, or SVX. While it shares the SystemVision name with Mentor’s system modeling and simulation tool, SVX doesn’t do any simulation itself; it simply facilitates full-system simulation by bringing different simulators and models together in virtual collaboration.

To be clear, this is distinct from our recent look at SoftMEMS. In that case, we were discussing the creation of multi-discipline models using AMS languages. This isn’t about the models; it’s about communication.

The different pieces of such a complex, multi-natured system typically reside with different people who are likely in different buildings or even different countries. Some team members might be from different companies, since many large firms will subcontract out the design of different components of the final product. Traditionally, these different individuals or groups have worked in isolation from each other, trying as best they can to anticipate what will happen when things are all integrated, but not knowing how that will work out until they finally bring their almost-finished bits together and pray that they will play nicely with each other.

Which sometimes happens and all too often doesn’t.

An academic solution might be to develop some giant multi-physical simulator that can model anything and then have the designers assemble an entire virtual incarnation of the system to give it a full shakedown prior to physical implementation.

Never gonna happen.

The different groups think differently, speak different languages, use different tools, and are somewhat suspicious of their extra-domain counterparts, who seem woefully ignorant of the rituals and incantations so important in their corner of the world. There will be silos. Which doesn’t have to be bad if the silos aren’t completely impenetrable. After all, what’s efficient for mechanics isn’t necessarily efficient for fluids or digital logic, so why force a single solution?

So, given that the silos remain, how to avoid surprises when wheel meets brake motor meets control subsystem? Modeling and simulation are still the answer, but within the context of these silos. One approach might be to have models that contain the details for one’s own domain, but that “stub out” the interfaces to other domains, using test data instead. While this is doable, the test data is likely to get stale, given that folks in the other domains are also busily modifying their designs.

What would be most accurate would be if those stubbed-out portions could actually be simulated using the most current version of the model being generated by its designers. But these simulations are done by very different tools, and those tools haven’t been built to talk to each other. This is where SVX comes in to create something like a conference call between the tools.

Let’s say the digital designer wants to run a simulation that includes real control and response from a mechanical subsystem and a hydraulic subsystem. We need a conference call between the digital simulation, the mechanical simulation, and the fluid dynamics simulation. SVX acts as the conference host, connecting the various participants and keeping track of the data passing back and forth.

The digital simulation, in this particular example, acts as the conference organizer and initiates the call; SVX connects and starts the other two simulators. The digital simulator can then, for example, send digital control signals to the other components; those commands are translated into appropriate formats to be simulated, and their responses are shipped back to the digital simulator.
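To make that flow concrete, here’s a minimal sketch of the call pattern in Python. Everything in it is hypothetical (the broker, the model class, and the signal names are illustrative inventions, not the SVX API); it just shows an organizer asking a stand-in for the conference host to relay a control signal to one participant and hand back the response.

```python
# Hypothetical sketch only; SVX's real interfaces are not shown in this article.
# A "broker" plays the role of the conference host: it connects participants
# and relays signals between them, so the organizer never talks to them directly.

class ConferenceBroker:
    """Stands in for the conference host (SVX in the article's example)."""

    def __init__(self):
        self.participants = {}

    def join(self, name, simulator):
        # In the real flow, the host would launch or connect the remote tool.
        self.participants[name] = simulator

    def send(self, target, signal, value):
        # Deliver a control signal to one participant and return its response.
        return self.participants[target].step(signal, value)


class MechanicalModel:
    """Placeholder participant: responds to a brake command with a torque."""

    def step(self, signal, value):
        return {"torque_Nm": 42.0 if signal == "brake_enable" and value else 0.0}


broker = ConferenceBroker()
broker.join("mechanical", MechanicalModel())

# The digital simulator, acting as organizer, drives the call:
response = broker.send("mechanical", "brake_enable", True)
print(response)  # {'torque_Nm': 42.0}
```

The point of the pattern is that the organizer only ever sees the broker; each participant keeps its own representation of the data behind that single connection.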

The typical way such schemes are approached is simply to send data from one place to the other under the assumption that “message sent means message received.” Which is an expectation that is as unrealistic between machines as it is between humans. Instead, SVX uses a publish-and-subscribe model. Data is posted; that data may be required by more than one consumer, and SVX monitors consumption, ensuring that the data remains available until all consumers have been able to grab what they need. In fact, when one side publishes data, it is then blocked until all consumers have completed their tasks, ensuring that all of the pieces remain in sync.
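As a rough illustration of that blocking behavior, the sketch below is hypothetical (it is not how SVX is implemented); it uses two barriers so that the publisher is released only after every subscriber has taken the posted value, keeping producer and consumers in lockstep in the way described above.

```python
import threading

class SyncTopic:
    """One publisher, N subscribers; the publisher blocks until all have consumed."""

    def __init__(self, n_subscribers):
        self.posted = threading.Barrier(n_subscribers + 1)    # data is available
        self.consumed = threading.Barrier(n_subscribers + 1)  # everyone has read it
        self.value = None

    def publish(self, value):
        self.value = value
        self.posted.wait()    # release the subscribers
        self.consumed.wait()  # block until every subscriber has grabbed the value

    def consume(self):
        self.posted.wait()    # wait for data to be posted
        value = self.value
        self.consumed.wait()  # report back that this subscriber is done
        return value


topic = SyncTopic(n_subscribers=2)

def subscriber(name):
    print(name, "received", topic.consume())

for name in ("mechanical", "hydraulic"):
    threading.Thread(target=subscriber, args=(name,)).start()

topic.publish({"brake_cmd": 1})  # returns only after both subscribers have read it
```

The barriers do the work of SVX’s consumption tracking in miniature: the data stays available until the last consumer has taken it, and the publisher can’t race ahead in the meantime.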

None of the different simulators knows anything about its counterparts elsewhere; all it knows is that it can publish data and get results. Format translation and paradigm differences become non-issues because each simulator persists in its worldview. There’s no need to rationalize all the worldviews into one.

Amongst the models that can be interlinked in this fashion are AMS models (via SystemVision, of course), Simulink models, and LabVIEW models (the latter being particularly popular with engineers developing tests in advance of system availability). There’s also a C/C++ interface for bringing software into the picture as well. And, of course, SPICE and VHDL models are supported.

With respect to mechanical and fluid simulation, it’s theoretically possible to hook in finite-element analysis and computational fluid dynamics simulators, but, in practice, they take far too long to execute. More typically, those tools would be used to create VHDL-AMS models, which are then simulated much more quickly and can act as a stand-in for the more detailed simulator.

Using this approach, any of the participants can initiate a conference call at any time, and the different simulators can “collaborate” and share information as needed, remaining comfortably ensconced within their silos and continuing on with their work when the call is over.
