posted by Bryon Moyer
If you want to minimize the power consumption of your system, you can’t wait until you have your design reduced to gates. Yeah, you can do some things there to help, but the big wins come earlier, at the architectural or RTL levels.
But how do you know at that point how much current your design will draw? In particular, dynamic current? Past approaches have used average activity rates or some such general number to give a squint-your-eye approximation, but it’s hard to get that number right, and it’s harder to get it right mode-by-mode and block-by-block.
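The reason that average number matters so much is that it feeds the standard first-order dynamic power model, P = α·C·V²·f, where α is the switching activity. A quick sketch (with made-up numbers, not from any real design) shows how far off a flat average can be when one mode bursts:

```python
def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """First-order dynamic power estimate: P = alpha * C * V^2 * f."""
    return alpha * c_farads * v_volts ** 2 * f_hz

# Hypothetical block: 1 nF of switched capacitance, 0.9 V supply, 1 GHz clock
p_avg = dynamic_power(0.15, 1e-9, 0.9, 1e9)   # flat average activity guess
p_peak = dynamic_power(0.60, 1e-9, 0.9, 1e9)  # actual activity in a busy mode

print(p_avg, p_peak)  # the busy mode draws 4x the "average" estimate
```

With these (invented) numbers, the busy mode draws four times what the average-activity estimate predicts, which is exactly the mode-by-mode error the article is describing.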
What would be nice would be actual activity data. That would mean accessing every node in the design so that all nodes can have their power contributions included. FPGA prototypes won’t work, because all the nodes aren’t externally visible. And, as Mentor notes, even FPGA-based emulators, like Synopsys’ ZeBu systems, wouldn’t work because their full-visibility approach relies on JTAG to stream out the internals, which would be extraordinarily slow to do for every node on every cycle. (It should be noted that Mentor and Synopsys have a history of not seeing eye to eye on emulation patent claims…)
Mentor says that their Veloce emulators can run a suite of tests at speed, but there’s still an issue: capturing the results for use in a power analysis tool. The traditional approach to this would be to capture an FSDB (fast signal database) file and then use that file as input for power analysis. But that would be a ginormous file, and the writing and reading would become the rate-limiting step (partly because the data is written by signal but read by time).
Alternatives are to use VCD (still too slow) or SAIF (switching activity interchange format). SAIF gives only average activity information. Greater detail is available by providing more structural information ahead of time via a “Forward SAIF” file, but Mentor says that building this file requires detailed knowledge of the entire design and can be time consuming. So these traditional approaches tend to be used only for limited simulations on the order of tens of thousands of cycles.
Mentor’s Veloce Activity Plot helps reduce the scope of data required by quickly showing where activity is greatest over time (without deep detail). You can then isolate those areas and have Veloce rerun those critical periods in great detail so that the power can be determined. But this is still more than an FSDB or VCD approach can handle gracefully.
So Mentor did something rather unusual: they fraternized with the enemy. Cautiously. In general, they compete with Ansys, but they don’t compete with Ansys’s PowerArtist power analysis tool. They wanted to be able to deliver data to that tool without divulging the internals of how that data is stored and structured. So they obscured all of that detail by providing an API that PowerArtist can use to capture data on the fly.
In other words, no more saving a file as an intermediate step: the circuit activity is observed on the emulator and passed in real time to PowerArtist, where power results are calculated.
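Mentor hasn’t published what that API actually looks like, so the following is purely an illustrative sketch of the streaming idea: per-window toggle counts get pushed straight to a consumer callback (the power tool) instead of being written to a giant waveform file first. All names here are invented.

```python
from typing import Callable, Dict

class ActivityStream:
    """Illustrative stand-in for an emulator-side activity feed.

    Each time window's per-signal toggle counts are handed directly
    to a consumer callback rather than serialized to disk. This is
    a sketch of the concept, not Mentor's actual API.
    """
    def __init__(self, consumer: Callable[[int, Dict[str, int]], None]):
        self.consumer = consumer

    def emit_window(self, window: int, toggles: Dict[str, int]) -> None:
        # In a real integration the emulator runtime would call this;
        # here we just forward the data immediately to the consumer.
        self.consumer(window, toggles)

# Consumer side: accumulate toggle counts with no intermediate file.
totals: Dict[str, int] = {}

def power_consumer(window: int, toggles: Dict[str, int]) -> None:
    for sig, n in toggles.items():
        totals[sig] = totals.get(sig, 0) + n

stream = ActivityStream(power_consumer)
stream.emit_window(0, {"alu.clk_en": 12, "fifo.wr": 3})
stream.emit_window(1, {"alu.clk_en": 8})
```

The design point the sketch illustrates: because data flows window by window, the consumer can organize it by time as it arrives, sidestepping the written-by-signal-but-read-by-time mismatch of the file-based flow.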
(Image courtesy Mentor Graphics)
Mentor calls this tool the Veloce Power Application. It’s not just a Mentor utility that overlays an existing PowerArtist installation (sold separately); it’s a full integration. So when you get the Mentor tool, you’re also getting the PowerArtist tool buried inside.
You can get more info in their announcement.
posted by Jim Turley
One of the many things to come out of Apple's Worldwide Developers Conference (WWDC) last week was an almost offhand discussion of something called Bitcode. It's an intermediate software format, neither source code nor binary code. And its existence suggests that Apple is getting ready to change its microprocessor architecture. Otherwise, what's the point?
Bitcode is not the first or only time that software companies have used intermediate formats to make apps hardware-independent. It's not even the first time Apple has done it. But it does suggest that the Cupertino firm is about to make a change to its CPU architecture, its operating systems, or both. Rumors have floated for years that Apple might switch its Macs from x86 chips to ARM chips, and Bitcode would certainly ease that transition. It could also allow Apple to harmonize the operating systems on Macs and iDevices by allowing both to run the same apps (or at least, to use similar APIs).

Apple is one of only a handful of companies to hold an "architectural license" to the ARM microprocessor architecture, meaning it can design its own ARM chips from scratch, not just incorporate ARM's existing CPU cores. That could allow Apple to create special ARM-based chips with special accelerators, coprocessors, or other unique features.
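The core idea behind any such intermediate format can be boiled down to a toy: ship an abstract program once, then "lower" it to whatever instruction set the target device needs. This tiny sketch is entirely invented for illustration; real Bitcode is LLVM-based and vastly richer than this.

```python
# Toy stand-in for an intermediate format: a list of abstract ops.
# At "install time" the store lowers it for whichever backend the
# device needs. Everything here is invented for illustration.

bitcode = [("load", "x"), ("add", 1), ("store", "x")]

def lower(ops, target):
    """Translate abstract ops into target-flavored instruction strings."""
    mnemonics = {
        "arm": {"load": "LDR", "add": "ADD", "store": "STR"},
        "x86": {"load": "mov", "add": "add", "store": "mov"},
    }
    m = mnemonics[target]
    return [f"{m[op]} {arg}" for op, arg in ops]

arm_app = lower(bitcode, "arm")  # same app, ARM flavor
x86_app = lower(bitcode, "x86")  # same app, x86 flavor
```

The point of the toy: the developer submits one artifact, and the decision about the target architecture is deferred until delivery, which is precisely why Bitcode would ease a CPU transition.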
Because nearly all Apple software is distributed through the App Store, and not via CD-ROM, third-party downloads, or "side loading," Apple is in a unique position to modify those apps before they're installed. An app purchased for an iPad could be modified to suit its characteristics versus, say, a Mac or a Watch. If future iPads use a different processor than current models, the app could be tweaked again, all at the time of purchase. Let the rumor mill grind on.
posted by Bryon Moyer
Baking is more of a science than other types of cooking. Cakes can be particularly tricky to get just right when it comes to structure and texture. Skilled bakers can assemble an amazing cake from bare ingredients, but your average household cook or baker may find that more difficult. Instead, they can purchase cake mixes, with all the ingredients included in the right proportions. They just add a few key liquid ingredients, mix, bake, and decorate. Much easier and faster, and less chance of error.
You may remember Ayla Networks from coverage almost exactly two years ago. They had the first Internet of Things (IoT) platform that I was aware of, with end-to-end provisioning of everything you need to build an IoT network.
Well, they’ve just recently come back with an addition to their offering, intended to make it easier to build phone apps.
Up to now, they’ve provided a set of libraries that programmers can access for building apps. This works, but the challenge, they say, is that it requires relatively sophisticated programmers who know their way around the bits and bobs involved in stitching together a working, robust phone app. The cost, including hiring a contract programmer, can be on the order of $300,000 when all is said and done.
For large companies with internal development teams, these libraries can be manageable. And they certainly permit the highest level of differentiation. But for smaller companies, or for teams trying to get something out there as soon as possible, they can be a barrier.
So, in addition to these libraries, Ayla has announced their Agile Mobile Application Platform. (Yes, another platform.) What they’ve done in this case is to pre-build application modules so that much of the app work is done ahead of time. By Ayla’s reckoning, they’re about 85% complete. You still need to assemble them and “skin” them, as it were, to provide the look, feel, and branding you’re after.
But the idea is that you can do this with a less sophisticated programmer and, overall, cut your investment in half. And you can get a working app in only a couple of days.
The pre-built capabilities include:
- User Registration
- Sign in
- Password recovery
- Zigbee Node Setup
- Apple HomeKit Setup (iOS)
- Device Cloud Registration
- Property Manipulation
- Schedule Creation
- Schedule Manipulation
- Timer Setup
- Push Notification Setup
- Account Information Editing
- Device Removal
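The "85% complete, you skin it" model can be sketched in miniature: prebuilt modules supply the behavior, and the product team contributes only branding and theme. This is an invented illustration of the concept, not Ayla's actual API; all names are hypothetical.

```python
# Invented sketch of the "assemble and skin" idea: stock modules
# handle app behavior; the customer supplies brand and theme only.

PREBUILT_MODULES = ["sign_in", "password_recovery", "device_registration"]

def build_app(brand: str, theme: dict, modules=PREBUILT_MODULES):
    """Combine prebuilt modules with a brand-specific skin."""
    return {
        "brand": brand,
        "theme": theme,
        "screens": {m: f"{brand}-styled {m.replace('_', ' ')} screen"
                    for m in modules},
    }

app = build_app("AcmeHome", {"accent": "#ff6600"})
```

Because only the skin varies per customer, the work left over is graphic design and configuration rather than app plumbing, which is where the claimed cost and schedule savings come from.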
The cost involves an up-front fee to license the software and then ongoing maintenance fees. There is no per-unit royalty or run-time license (if you stop paying maintenance, your devices won’t all stop running).
You can find more in their announcement.