posted by Kevin Morris
Multiple financial news sources are reporting today that acquisition talks between Intel and Altera have ended, with Altera declining Intel's offer and the two companies breaking off negotiations.
For the rest of the FPGA industry, this may be a bit of a disappointment. Other FPGA vendors we've talked with were optimistic that an Intel takeover of Altera would be good for the other FPGA players - altering and narrowing Altera's focus, creating uncertainty among Altera's existing customers, and disrupting the company's existing development projects. Now, apparently, they'll have to continue to compete with the Altera they already know.
This does not change our view that FPGAs and heterogeneous/hybrid processing - pairing FPGA fabric with conventional processors - will soon revolutionize the data center in terms of performance per watt. It does mean that the competition to win that emerging market will be more open and exciting.
We will now resume our regularly-scheduled programming.
posted by Bryon Moyer
A big change is coming to a power supply near you.
At least, that’s what Semitrex is promising with their new TRONIUM family of power supply systems on a chip (PSSoCs). They’re doing some things differently, resulting in lower-power – and, in particular, lower-“vampire”-power – bricks.
The first difference is that they’re going capacitive; there’s no transformer. How do they handle breakdown issues? By using a cascade of smaller capacitors (which they say also reduces electromagnetic interference (EMI)). They have an array, and they pick the caps to suit the necessary breakdown.
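To make the cascade idea concrete, here’s a back-of-envelope sketch (ours, not Semitrex’s design): identical capacitors in series share the applied voltage, so a stack of N caps can stand off roughly N times one cap’s rating, at the cost of dividing the effective capacitance by N. The voltages and derating factor below are illustrative assumptions.

```python
import math

# Sketch of the series-capacitor idea (not Semitrex's actual design):
# identical caps in series share the applied voltage, so a stack of N
# can stand off roughly N times one cap's breakdown rating, while the
# effective capacitance drops to C/N.

def caps_needed(bus_voltage, cap_breakdown, derating=0.8):
    """Identical series caps needed to withstand bus_voltage,
    keeping each cap below derating * its rated breakdown."""
    return math.ceil(bus_voltage / (cap_breakdown * derating))

def effective_capacitance(c_each, n_series):
    """Series combination of n identical caps of value c_each."""
    return c_each / n_series

n = caps_needed(bus_voltage=325.0, cap_breakdown=50.0)  # ~rectified 230 VAC peak
print(n)                                                # 9
print(effective_capacitance(1e-6, n))                   # ~0.11 uF from 1 uF parts
```

The trade-off is visible immediately: the stack buys breakdown voltage but pays for it in capacitance, which is presumably why the array-and-select approach matters.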
They haven’t been able to completely eliminate inductors, since the high-current caps they need require non-standard processing (apparently, the breakdown voltage is tied to RDSon, and they need to decouple that). They’re working with fabs on that one. So the caps are used for “pre-regulation.”
The second big change is how they do the sensing needed for control. These switching converters use pulse-width modulation to control the in/out voltage ratio, and – for no good reason, according to Semitrex – the sensing required for that control loop has traditionally been on the secondary side.
These more traditional units used transformers for isolation, meaning the primary and secondary side were mutually isolated. The control is on the primary side, so you need a way to get the secondary sense signal back across the transformer without an electrical connection; this was typically done with an opto-isolator.
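For readers less familiar with the control loop being described, here’s a toy version of secondary-sensed PWM regulation for an idealized buck-style stage (where Vout ≈ duty × Vin). This is a generic illustration, not Semitrex’s algorithm; the plant model and gain are invented.

```python
# Toy PWM control loop for an idealized buck-style stage (Vout ~ duty * Vin).
# Generic illustration of output-sensed regulation, not Semitrex's algorithm;
# the proportional gain and "plant" model are made up for clarity.

def regulate(v_in, v_target, cycles=200, k_p=0.05):
    duty = 0.5                      # start mid-range
    v_out = 0.0
    for _ in range(cycles):
        v_out = duty * v_in         # idealized converter output
        error = v_target - v_out    # sensed on the output (secondary) side
        duty += k_p * error         # proportional correction of duty cycle
        duty = min(max(duty, 0.0), 1.0)
    return v_out

print(round(regulate(v_in=12.0, v_target=5.0), 3))  # converges near 5.0
```

The point of the example is the feedback path: the error signal comes from the output side, which, in a transformer-isolated supply, is exactly the signal that has to cross the isolation barrier (hence the opto-isolator).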
Instead, Semitrex is doing the sensing on the primary side. This eliminates several components from the bill of materials. (Although, if there’s no transformer, there isn’t isolation … some of these details and diagrams are pending their filing of patents, so not all is clear.)
Finally, they’ve integrated most everything into a single module, reducing the number of external components required. The algorithms are built in (they have a state machine for low-level control and a microprocessor for higher-level algorithms). This saves cost and hassle, and it lowers power.
They’re targeting supplies in the 10-100-W range, 50-500 mA. They claim less than 1 mW of standby power, and the chip can respond to a load in 3-5 ns. You can think of the standby power as the vampire power when it’s sitting around doing nothing useful. For comparison, Semitrex says that today’s idle power supplies can burn more than 100 mW (100 mW is apparently an upcoming US Dept. of Energy efficiency standard that they say many supplies today cannot meet).
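A quick bit of arithmetic on the standby figures quoted above, just to put them in annual terms:

```python
# Back-of-envelope on the standby ("vampire") figures quoted above:
# energy drawn per year by a supply that idles around the clock.
HOURS_PER_YEAR = 365 * 24   # 8760

def annual_kwh(standby_watts):
    return standby_watts * HOURS_PER_YEAR / 1000.0

print(annual_kwh(0.100))   # 100 mW supply: ~0.876 kWh/yr
print(annual_kwh(0.001))   #   1 mW claim:  ~0.00876 kWh/yr
```

Per brick the absolute numbers are small; the argument, presumably, is that it adds up across the enormous number of bricks plugged in worldwide.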
You can read more in their announcement.
PS They tried to put some pronunciation help in the press release, but I have to confess that it confused me more than it helped. They use stress marks, but indicate primary stress on the first and second syllables (you can’t have primary stress on more than one syllable), and then they show the middle syllable in caps – another way to indicate primary stress. They also have a double-stress mark on the last syllable… that would normally mean extra stress, which can’t be right. If I ignore the stress marks, then it’s “tron-EE-um.” If that’s the case, I’m not sure that will stick – my guess is that, without hearing it, everyone is going to say “TROH-nee-um.” (Or, more formally, “’tron-i-um” or “tron’-i-um”, depending on whether you pre- or post-mark stress… it seems both ways are done…) But I digress…
posted by Bryon Moyer
So you’re working on a design… Are you sure you’re building what was intended? Yes, you’re building what they asked for… or, at least, what you think they asked for, but is that what they wanted?
Requirements can be dicey; they’re based on natural language, which, as we know all too well, is subject to interpretation. According to Argosim, many companies have institutionalized styles – sentence templates, for instance – to ensure consistent, clear, unambiguous articulation of requirements. That’s not a guarantee of clarity, but it certainly helps.
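As an illustration of what a sentence template buys you, here’s a minimal check against an EARS-like “When <trigger>, the <system> shall <response>.” pattern. The template and the regex are ours, purely for illustration; real institutionalized styles vary, and this is not Argosim’s format.

```python
import re

# Illustrative only: an EARS-like requirement template
# ("When <trigger>, the <system> shall <response>.") enforced by regex.
# Real requirements styles vary; this is not Argosim's format.
TEMPLATE = re.compile(r"^When .+, the .+ shall .+\.$")

reqs = [
    "When the door opens, the controller shall turn on the light.",
    "The light should probably come on sometimes.",   # vague: rejected
]
for r in reqs:
    print(bool(TEMPLATE.match(r)), "-", r)
```

A template like this can’t judge whether a requirement is *right*, but it does force every requirement into a form with an explicit trigger, actor, and obligation, which is most of what “consistent, clear, unambiguous articulation” asks for.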
What it doesn’t necessarily help with, however, is the following question: do you know if, out of all the requirements, some of them conflict or are mutually inconsistent? If one requirement calls for a dense material to protect against radiation and another requires that the material float, those two might not work together.
There has, until recently, been no way to formally check requirements for consistency and overall requirements correctness. Those two requirements above (dense + floating) are relatively vague – you wouldn’t be able to test them without the ability to extract the semantics and then simulate the physics. But many requirements are functional, and they can be expressed in a more formal manner that can then be tested.
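The dense-vs-floating example above, once formalized, becomes checkable: encode each requirement as a predicate and search for any value that satisfies all of them at once. The brute-force sketch below is ours (the density bounds are assumptions), and it says nothing about how STIMULUS works internally; it just shows what “testable for mutual consistency” means.

```python
# The dense-vs-floating example, formalized as predicates over material
# density, then checked for joint satisfiability by brute force. The
# numeric bounds are assumptions; this is not how STIMULUS works.
WATER_DENSITY = 1000.0   # kg/m^3

def shields_radiation(density):      # Req 1: dense shielding material
    return density >= 11000.0        # assumed lead-like lower bound

def floats_on_water(density):        # Req 2: must float
    return density < WATER_DENSITY

def consistent(*reqs, lo=0.0, hi=25000.0, step=10.0):
    """True if some density in [lo, hi] satisfies every requirement."""
    d = lo
    while d <= hi:
        if all(req(d) for req in reqs):
            return True
        d += step
    return False

print(consistent(shields_radiation, floats_on_water))  # False: they conflict
print(consistent(floats_on_water))                     # True on its own
```

Each requirement is individually satisfiable; it’s only the conjunction that is empty, which is exactly the kind of problem that’s invisible when requirements live as separate natural-language sentences.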
This might sound like a nice-to-have for many electronic products. We see too many cheap consumer goods that clearly haven’t been well thought out or possibly don’t even have all of their features working properly. This would be great for that, but such products typically have cost and schedule requirements that don’t really allow for a more thoughtful process.
Safety-critical equipment, however, is a completely different matter, carrying strict mandates for requirements traceability. And, while some product requirements will always have a level of vagueness, functional requirements in such systems must explicitly be verifiable. This means that they should be specific enough to have their mutual consistency and other properties tested.
This is what Argosim has introduced in their STIMULUS offering. It’s a mechanism for specifying requirements, testing them for correctness and consistency, and then generating tests from them that can be used in future validation testing.
At present, the workflow is somewhat disconnected from existing flows; ideally, requirements would be specified using STIMULUS – that’s Argosim’s long-term vision. For the moment, it’s more of a collaborative process.
First, a requirements engineer will create natural-language requirements in the same way as is done today. He then hands them off to a verification engineer, who will create a model in STIMULUS using a formal language that can then be simulated. Both requirements and assumptions are included in the description. Through simulation, problems might be identified – and the original requirements engineer is then consulted for discussion and correction.
If no issues are identified, then the verification engineer can generate tests that will be used downstream to close the loop and ensure that the implementation of the requirements matches the intent. If tests aren’t generated, or if the test set is incomplete, then this is still something of an open-loop process.
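The closed-loop idea can be sketched as follows: once a requirement is formal, you can generate stimulus and check an implementation against it automatically. The requirement, implementation, and random-test scheme below are all invented for illustration; this is not STIMULUS’s actual mechanism, just the general shape of requirement-driven test generation.

```python
import random

# Sketch of closing the loop: a formalized requirement doubles as an
# automatic checker for generated tests. Everything here (requirement,
# implementation, test scheme) is invented; not STIMULUS's mechanism.

def requirement(v_in, v_out):
    """Formalized requirement: output clamps input to [0, 10]."""
    return 0.0 <= v_out <= 10.0 and (v_out == v_in or v_in < 0 or v_in > 10)

def implementation(v_in):
    return min(max(v_in, 0.0), 10.0)

random.seed(0)
stimuli = [random.uniform(-20, 20) for _ in range(1000)]
failures = [v for v in stimuli if not requirement(v, implementation(v))]
print(len(failures))   # 0: implementation meets the requirement
```

If the implementation drifted from the requirement (say, clamping to [0, 12] instead), the generated tests would flag it, which is the “close the loop” step the article describes.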
You can find out more about STIMULUS in their announcement.