posted by Bryon Moyer
This one is for those of you who are interested in language. Our industry tends to take a lot of liberties with language (largely, I think, because we’re engineering majors, not English majors, and we fired all the literate production people years ago when we learned how to use Word).
So, for instance, we routinely eliminate the spaces and hyphens between numbers and units (did you know that “5mA” is wrong? It’s “5 mA”, or “5-mA” if modifying a noun, like “a 5-mA increase”). And we’re not quite sure what the plural of “die” is, trying all sorts of combinations but studiously avoiding the correct one, “dice.” (Someone told me, “Dice are for games.” What kind of rationale is that? Wafers are for dessert, but we can still make them out of silicon…)
Well, another interesting thing has come to my attention. And apparently, I’m late to the game. For as long as I’ve been in the business, there have been two kinds of graphic elements: photos and drawings or figures. The difference is that a photo shows the real thing (ignoring Photoshop), while a drawing or figure has been created manually by someone to illustrate something that a photo can’t illustrate (or for which no photo is available).
But I’ve been noticing something strange at conferences: drawings are routinely referred to now as “cartoons.” When did this happen? Who started it?
When I asked my non-engineering friends what “cartoons” are, they came up with the usual fare: the strips in the newspapers or editorial cartoons. And there are television cartoons – intended, usually, to be humorous. Yes, these are hand-drawn, but to the non-engineering world, not just any drawing can be a cartoon.
And that’s what bugs me about calling “drawings” “cartoons.” Because, at least in standard American English, “cartoon” carries the notion of silly or, in the case of editorial cartoons, lampooning or satire. They’re specifically not something you take too seriously. A “cartoon caricature” is not a good thing.
Just to make sure I wasn’t going totally crazy, I looked up the dictionary definition:
1. a sketch or drawing, usually humorous, as in a newspaper or periodical, symbolizing, satirizing, or caricaturing some action, subject, or person of popular interest.
2. comic strip.
3. animated cartoon.
4. Fine Arts. a full-scale design for a picture, ornamental motif or pattern, or the like, to be transferred to a fresco, tapestry, etc.
Why would we want to refer to serious engineering figures in such a way? Do any of you have any insights into when and why “drawings” became “cartoons”?
posted by Bryon Moyer
With semiconductors, we have this expectation that, at some future time, we’ll be able to integrate everything onto a single chip. Analog, digital, MEMS… you name it.
And, theoretically, we will be able to. In fact, we probably can now. Do we? Nope. And it’s even less likely as we keep moving forward.
Why? Because the most advanced wafers are freakin’ expensive. If there’s a bunch of your stuff that will work at 28 or 180 nm, you’re going to do that because it’s so much cheaper than, say, 14 nm. And if the different components have different yields, then you don’t want to be throwing away large chunks of expensive good silicon because one small area is bad. So you’ll use the cheapest possible process for each element, and then bring the individual tested chips together on an interposer of some sort – or maybe even stack 3D – and call it good.
Yield makes a big difference here. If you think about what InvenSense does with their gyroscopes, for example, they take a MEMS wafer and an ASIC wafer and mate them face-to-face. That means that you may end up throwing away a good MEMS die if it happens to mate with a failing ASIC die – and vice versa. So the only way this works is if the yield is high enough to make such loss negligible. If that’s not the case, then you want to test and singulate the wafers independently and co-package only the good units.
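The arithmetic behind that loss is worth spelling out. Here’s a minimal sketch in Python; the yield numbers are made up for illustration, not InvenSense figures:

```python
# Blind wafer-to-wafer mating: each MEMS die is bonded to whatever ASIC die
# sits opposite it, before either has been tested.

def mated_yield(y_mems: float, y_asic: float) -> float:
    """Fraction of mated pairs in which both dice are good."""
    return y_mems * y_asic

def good_dice_scrapped(y_mems: float, y_asic: float) -> float:
    """Fraction of sites where at least one GOOD die gets thrown away
    because its partner failed: good MEMS on bad ASIC, plus bad MEMS
    on good ASIC."""
    return y_mems * (1 - y_asic) + (1 - y_mems) * y_asic

for y in (0.99, 0.95, 0.80):
    print(f"{y:.0%} yield each: "
          f"{mated_yield(y, y):.1%} of pairs good, "
          f"good dice wasted at {good_dice_scrapped(y, y):.1%} of sites")
```

At 99% per-wafer yield the waste is about 2% of sites and you can shrug it off; at 80% you’re scrapping a good die at nearly a third of the sites, which is when testing and singulating first starts to pay.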
So… that’s semiconductors, over there in that corner. Over here in this other corner, we have printed electronics. Whole different ballgame. And it brings with it the promise of printing entire systems in a single roll-to-roll printing pass.
Or… maybe not.
I had a conversation with Thinfilm at last November’s IDTechEx show. They make a wide variety of memories and other components printed on plastic. Their first generation was roll-to-roll memory for “consumables” – things that will be used and then discarded, like labels*. And it was printed on rolls 12” x 1 km.
The second generation, however, has more than just memory. There may be sensors and radios, such as are on their sensor labels.
But these are no longer printed in a single pass on a long roll. In fact, some of it isn’t even roll-to-roll. And some of the components, at present (like passives), aren’t even printed; they’re discrete.
Why not just do them all together? Same reason we don’t with semiconductors: yield. If you commit everything together, then you may end up throwing away lots of good stuff due to a little bit of bad stuff. So their printed subsystems are manufactured and tested in three different locations (Korea, Sweden, and San Jose), and then these are mounted on a plastic substrate with printed connections. This last step is a sheet process, not roll-to-roll.
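The same product-of-yields logic says why a single printing pass is risky. A quick sketch, with assumed (not Thinfilm) subsystem yields:

```python
# Committing everything to one pass vs. assembling known-good modules.
from math import prod

subsystem_yields = [0.90, 0.85, 0.95]  # e.g. memory, sensor, radio (assumed)

# Single pass: any defect in any subsystem scraps the whole system,
# so system yield is the product of the subsystem yields.
monolithic = prod(subsystem_yields)

# Modular: each subsystem is tested first and only good parts are mounted,
# so system yield is limited only by the assembly/interconnect step.
assembly_yield = 0.98  # assumed yield of mounting + printed connections
modular = assembly_yield

print(f"monolithic: {monolithic:.1%}  modular (known-good parts): {modular:.1%}")
```

With these numbers the single-pass approach yields about 73% against 98% for the modular one, and the gap widens with every subsystem you add to the pass.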
Might this eventually evolve, as yields improve, to higher levels of integration? Yes, they say. But there’s always going to be a leading edge, and leading edges tend not to yield as well as established processes. So any product incorporating aggressive, novel technology is likely to be pieced together.
In other words, we’re unlikely ever to be working solely on a fully-integrated roll-to-roll basis.
You can find out more about Thinfilm here.
Meanwhile, there’s a whole different reason for resisting integration that we’ll talk about in a few days…
*The military also uses the word “consumable” as a quaint euphemism for ammunition – hardware that will be used once and then be, shall we say, taken out of action. And “smart” versions of such hardware will have electronics on them.
(Image courtesy Thinfilm.)
posted by Dick Selwood
A couple of months ago, I wrote about ISO 26262 and the changes that this was forcing on the chip development process. (Spaghetti versus ISO 26262 http://www.eejournal.com/archives/articles/20141125-iso26262).
Many of the chips used in vehicles use ARM processor cores, particularly the Cortex-R5, and today ARM has announced that it is making available a safety document set that provides developers with the information needed to demonstrate that their products are suitable for use in systems that meet the highest level (ASIL-D) of safety.
To do this, ARM went back over the entire development process, from initial specification through to final verification. This has been time-consuming, but, as well as providing the material for the Cortex-R5, it confirmed that the development process was robust. It also means that the procedures are in place to produce the safety document sets as part of the normal development process for future cores.
The documentation can also be used for the core safety standard, IEC 61508, and other industry-specific standards, such as IEC 62304 for medical products and DO-178 for avionics.
As well as hardware, ARM is also supporting software. The ARM compiler is now certified by TUV-SUD as appropriate for developing software for systems up to ISO 26262 ASIL-D and IEC 61508 SIL-3. The R5 and other cores also include functions, such as memory protection, designed to support safer software.
During the briefing, Chris Turner of ARM came up with something that I hadn’t thought of. One of the consequences of ISO 26262 is that there is now a common process and language that runs through the automotive industry, from manufacturers like Audi and Mercedes down to the lowest level of suppliers – something that has never existed before. This can only be a good thing.