posted by Bryon Moyer
More and more electronics are going into places where they could cause real damage if they don’t work right. Things like airplanes and weapons and, in particular, the systems that control them. That goes for hardware and software.
So there are elaborate standards controlling how things have to be done in order to pass muster for such systems. DO-178, DO-278, and DO-254 are only the most visible of these. The problem is that the standards don’t actually tell you what has to be done. They outline a broad process for certification, but exactly what is supposed to happen relies on a key individual: the “designated engineering representative,” or DER.
If you ask, in general, how you get a system certified, the answer is, “It depends.” And one of the things it depends on is the DER. You work with the DER to decide what you need to do for your system to be certified. And just because you did a particular set of things with one DER for one system doesn’t mean you can simply replicate that process with a different DER on another system. If the other DER has different ideas about how things should be done, then you have to go in that direction for the new project.
I (thankfully) don’t live in that particular world, but that’s got to be completely frustrating.
LDRA has offered up a Compliance Management System to help with this. It's a certification process built on one individual's expertise: Todd White's 30 years of experience as a DER. It incorporates a system of checklists, matrices, and document templates intended to speed the certification process.
It works hand in hand with their certification consulting services, which probably helps things go most smoothly. Using a different DER would, presumably, run the risk of that DER wanting something different. You would think that, if these are truly proven elements for certification, any reasonable DER would be happy to incorporate them into a certification plan – unless they have their own system and insist on doing it their way.
So there’s still the possibility of some “it depends” in the mix, but the goal appears to be to remove some of it.
You can find out more in their recent release.
posted by Bryon Moyer
The smarter systems get, the more they are run by some kind of processor running code. So system functions that might have been controlled by circuitry and logic in the past are now turned on and off based on instructions in the system control code. So the power consumption of your system can depend highly on the code that’s running.
This is true even for the processor itself, whose power can depend on how your code runs. IAR already had a solution for that, called I-scope, which you could use with their I-jet debugging pod to figure out what code was executing when some power event happened.
Well, they've just extended that capability beyond the processor. You can now, theoretically, probe any point of interest on the board to figure out where power is going and then correlate that back to the code that's executing. I say "theoretically" because you can't simply probe a random node; you have to provision the board with a low-value shunt resistor for each power line you want to be able to probe. The probe measures the ΔV across the shunt; with the known resistance, that yields the current, and hence a power profile.
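The arithmetic behind that shunt-resistor measurement is just Ohm's law. Here's a minimal sketch (the function name, resistor value, and readings are illustrative, not anything from IAR's tooling):

```python
def shunt_power(delta_v, shunt_ohms, rail_v):
    """Estimate power drawn on a supply rail from the voltage drop
    measured across a low-value series shunt resistor."""
    current = delta_v / shunt_ohms   # Ohm's law: I = dV / R
    return rail_v * current          # P = V * I

# Hypothetical reading: a 10 mV drop across a 0.1-ohm shunt on a 3.3 V rail
p = shunt_power(0.010, 0.1, 3.3)
print(f"{p * 1000:.1f} mW")  # 0.1 A on a 3.3 V rail -> 330.0 mW
```

The shunt has to be low-value precisely so that the measurement itself doesn't meaningfully perturb the rail voltage being measured.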
You can learn more by checking their release.
posted by Bryon Moyer
Phones have a ton of work to do – mostly things unrelated to being a phone. And we notice when things bog down and work too slowly, and we don’t say nice things about such phones.
So phone manufacturers put accelerators in the phones to do select compute-intensive things more quickly. In fact, it’s a double-bonus: the chore gets done faster and the main processor can do something else at the same time.
The only trick is to get the system to use such offloads when appropriate. And, at this time, Android has no way of doing that. As far as it's concerned, everything is supposed to be done by the CPU. So it schedules everything on the CPU, and offloads can lie idle. Which is not good for companies like CEVA that provide the means of offloading things. It's hard to justify the silicon cost of an offload that won't be used.
So CEVA has just announced an Android Multimedia Framework (or AMF… no, not that AMF…) to provide some plumbing to allow direct access to low-level DSPs and offloads – in particular, for multimedia processing. By leveraging the OpenMAX API, any OpenMAX invocations essentially get trapped and routed over to hardware that can execute them more efficiently than the CPU can. Those offloads may be on the main SoC or on a separate chip. Of course, for this to work, those offloads have to be implemented on a CEVA DSP.
The structure of the system has the CPU managed by Android and the DSPs managed separately by an RTOS (doesn’t matter which one). OpenMAX calls are sent by a host link driver on the CPU side to a link driver on the RTOS side via mailboxes in shared memory. Multiple calls can be “tunneled” together for more efficient use of the CPU’s time.
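That split – Android on the CPU, an RTOS on the DSP, and tunneled calls flowing between them through mailboxes – can be sketched conceptually. This is not CEVA's API; the class, handler names, and use of a thread plus a queue to stand in for the RTOS side and the shared-memory mailbox are all illustrative assumptions:

```python
import queue
import threading

class LinkDriver:
    """Toy model of a host-to-RTOS link: the host side writes batches of
    calls into a mailbox; a worker (standing in for the RTOS/DSP side)
    drains the mailbox and dispatches each call to a local handler."""

    def __init__(self, handlers):
        self.mailbox = queue.Queue()   # stands in for a shared-memory mailbox
        self.results = queue.Ue = queue.Queue()
        self.handlers = handlers       # OpenMAX-like entry points (illustrative)
        threading.Thread(target=self._dsp_side, daemon=True).start()

    def send_tunneled(self, calls):
        """Host side: one mailbox write carries a whole batch ('tunnel')
        of calls, amortizing the per-message cost on the CPU."""
        self.mailbox.put(calls)

    def _dsp_side(self):
        while True:
            for name, args in self.mailbox.get():
                self.results.put(self.handlers[name](*args))

# Hypothetical handler implemented "on the DSP"
handlers = {"decode_frame": lambda n: f"frame-{n}-decoded"}
link = LinkDriver(handlers)
link.send_tunneled([("decode_frame", (1,)), ("decode_frame", (2,))])
print(link.results.get(), link.results.get())  # frame-1-decoded frame-2-decoded
```

The point of tunneling several calls per mailbox message is exactly what the release describes: the CPU spends less of its time on link traffic and more on whatever else it should be doing.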
Because a standard API is used rather than a proprietary one, code written for AMF should still work whenever a future Android version supports offloading natively. Android 5.0 may provide this by the end of the year, which would make AMF a nine-month stopgap. But CEVA points out a few "ifs": that holds only if the release happens on time, if it includes multimedia offload support, if it supports off-chip offloading, and if a given phone can be upgraded to 5.0.
You can get more info in their release.