posted by Bryon Moyer
Location services used to mean one thing: applications that leveraged GPS and other global navigation satellite systems (GNSS) to fix your location and then… do stuff with that information. Of course, GPS isn’t reliable indoors, so there were holes in the system, but, for its time, it was pretty spiffy.
Meanwhile, in a separate corner of the technology world, MEMS hit high gear, and inertial sensors allowed some indoor navigation (better with expensive chips; so-so with commercial grade; greatly enhanced by good sensor fusion). As we’ve seen, the two work together, each standing in where the other was weak, and bolstered by the use of indoor networks like WiFi and Bluetooth as further triangulation tools.
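The "standing in where the other was weak" idea can be sketched in a few lines. This is a toy 1D illustration with invented numbers, not anyone's shipping algorithm (real systems use Kalman filters): the inertial sensor dead-reckons between fixes, and each GNSS fix, when available, pulls the drifting estimate back toward absolute position.

```python
# Toy 1D illustration of GNSS/inertial complementary fusion (hypothetical
# numbers). The accelerometer dead-reckons between fixes; each GNSS fix
# pulls the estimate back toward absolute position. Real systems use a
# Kalman filter; this shows only the intuition.

def fuse(est_pos, est_vel, accel, dt, gnss_fix=None, gain=0.2):
    """Advance the inertial estimate one step; blend in a GNSS fix if present."""
    est_vel += accel * dt          # integrate acceleration -> velocity
    est_pos += est_vel * dt        # integrate velocity -> position
    if gnss_fix is not None:       # GNSS available (e.g., outdoors)
        est_pos += gain * (gnss_fix - est_pos)  # correct accumulated drift
    return est_pos, est_vel

pos, vel = 0.0, 0.0
for step in range(10):
    fix = float(step) if step % 5 == 0 else None  # sparse GNSS fixes
    pos, vel = fuse(pos, vel, accel=1.0, dt=1.0, gnss_fix=fix)
```

Indoors, `gnss_fix` simply stays `None` (or comes from WiFi/Bluetooth triangulation instead), and the inertial path carries the estimate alone.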
But, despite their mutual affinity, GNSS and inertial systems remained distinct. One talks to satellites; the other uses MEMS. It was up to systems integrators to bring them together.
Well, it looks like that’s changed. Broadcom has announced a combo GNSS/sensor hub chip. Yes, it’s not just an inertial system; it’s a more generic sensor hub. But the obvious application is to plug in some accelerometers and gyros, perhaps augmented by a magnetometer, and get them dancing with the GPS.
Of course, part of the story is power reduction, afforded by the microcontroller in the sensor hub as it offloads a phone application processor, but that’s the case for any hub. What’s different here is that GNSS becomes, in essence, just another sensor. Which is kind of what it is, right? A satellite sensor?
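One way to picture "GNSS as just another sensor" is a hub that polls every device through one common interface. The names, fields, and values below are invented for illustration; this is not Broadcom's actual API.

```python
# Hypothetical sketch of a sensor-hub abstraction in which GNSS is "just
# another sensor" alongside the inertial parts. All names and values are
# invented for illustration, not Broadcom's actual API.

from dataclasses import dataclass
from typing import Protocol

@dataclass
class Reading:
    sensor: str
    values: tuple      # e.g., (lat, lon) or (x, y, z)
    timestamp_ms: int

class Sensor(Protocol):
    def read(self) -> Reading: ...

class Accelerometer:
    def read(self) -> Reading:
        return Reading("accel", (0.0, 0.0, 9.81), 0)

class GnssReceiver:
    # From the hub's point of view, a "satellite sensor": poll it like
    # any other device and hand the reading to the fusion code.
    def read(self) -> Reading:
        return Reading("gnss", (37.3875, -122.0575), 0)

hub: list[Sensor] = [Accelerometer(), GnssReceiver()]
readings = [s.read() for s in hub]
```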
They can also do indoor network triangulation… Think of it as a WiFi (et al) sensor.
You can get more details in their announcement.
Image courtesy Broadcom
posted by Bryon Moyer
IEEE published a sensor-related standard recently. And, depending on what headline or report you read, you may end up with a wide variety of conclusions as to what it’s all about. The original press release linked it to an eHealth memorandum of understanding (MOU) between IEEE-SA and the MEMS Industry Group (MIG); NIST issued a press release regarding their participation; and various stories described it as a “sensor hub” standard.
All of which surprised me, because I was only aware of one standard effort underway, and it was none of those things. Well, not directly, anyway. Of course… things can happen without my knowing about them, so I scrambled to see what I had missed.
Turns out I hadn’t missed anything. This P2700 standard is the very same one we overviewed in May of 2013. Which is nine months before the eHealth MOU. It’s about sensor datasheet parameters. It’s also part of a process in which NIST was indeed involved, although the specific effort was spearheaded by a number of companies (as described in a yet earlier overview of MEMS standards efforts); NIST was in the list of acknowledgments, not the list of contributors. It is fair to say that some of the discussion probably got a start in yet another NIST effort regarding MEMS testing that predated all of this.
But the bottom line is that the main motivator was the fact that different sensor manufacturers were defining their datasheet parameters differently, making it impossible to compare one sensor’s performance to that of another. This is a fundamental driver of standards, and has been for a long time.
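A contrived example (the numbers are invented) makes the comparison problem concrete: two gyro datasheets can quote noise density in different units, and only after normalizing do you discover the parts perform identically.

```python
# Illustration (invented numbers) of the comparison problem the standard
# targets: two gyro datasheets quote noise density in different units.
# Normalizing to one unit shows the parts are actually equivalent.

MDPS_PER_DPS = 1000.0  # millidegrees-per-second per degree-per-second

def to_dps_per_rt_hz(value, unit):
    """Normalize a noise-density spec to degrees-per-second per sqrt(Hz)."""
    if unit == "dps/rtHz":
        return value
    if unit == "mdps/rtHz":
        return value / MDPS_PER_DPS
    raise ValueError(f"unknown unit: {unit}")

vendor_a = to_dps_per_rt_hz(0.01, "dps/rtHz")   # datasheet A's convention
vendor_b = to_dps_per_rt_hz(10.0, "mdps/rtHz")  # datasheet B's convention
assert vendor_a == vendor_b  # same performance, different labels
```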
Here are the purpose and scope of the standard, as included in the draft submitted to IEEE:
1.1 Purpose of Document
This document presents a standard methodology for defining sensor performance parameters with the intent to ease system integration burden and accelerate TTM. Here within, a minimum set of performance parameters are defined with required units, conditions and distributions for each sensor. Note that these performance parameters shall be included with all other industry accepted performance parameters.
1.2 Document Scope
This document is intended to drive the sensor industry toward common nomenclature and practices as cooperatively requested by mobile platform architects. It clearly outlines a common framework for sensor performance specification terminology, units, conditions and limits. The intent is that this is a living document, scalable through future revisions to expand as new sensors are adopted by the platforms. The intended audience of this document is sensor vendors, ISVs, platform providers and OEMs.
Can the sensors affected by this be used in eHealth? Yes, of course. And all kinds of other things. It’s not specifically eHealth-related.
Was NIST involved? Yes, as was MIG, although more with coordination than with content.
Will the sensors involved in this standard be connected to sensor hubs? Undoubtedly. Will the sensor hub code be simplified, as claimed in some stories? That’s actually not clear to me. Sensor hubs need to talk to sensors, extract their values, and compute with them. There’s nothing in the standard that deals with how that’s done.
I suppose that, when doing sensor fusion, some adjustment algorithms might be needed to adapt to different sensors if the readings from different manufacturers mean different things. Then again, this standard is about what goes in the datasheet and the testing conditions for each parameter; it's not clear that it affects the actual readings. I don't think any chips are changing as a result of the standard.

One other quick note regarding IEEE. There are actually two flavors of IEEE, as we discussed a few years back. There's IEEE-SA ("Standards Association") and IEEE-ISTO ("Industry Standards and Technology Organization"). One is more "independent," with a thorough vetting process; the other allows companies sponsoring efforts to keep some control over the process. It had been a while since I thought about that, and so I wanted to be sure about which IEEE this standard had gone through.
IEEE-SA is the traditional arm of IEEE. So this standard has received the nod from the more exacting side of IEEE. And it did so in relatively short time (for IEEE), with little in the way of change to the original submitted draft, as told to me by IEEE-SA’s Director of Global Business Strategy and Intelligence, Alpesh Shah.
All of which means that the team that put the draft together did yeoman’s work.
posted by Bryon Moyer
InGaAs is one of the new wunderkind semiconductors, favored for high-electron-mobility transistors (HEMTs) and for optical designs (more about that in a future post). But, as with other more exotic materials, it isn’t silicon, and therefore it doesn’t benefit from silicon’s economics.
The problem is the lattice: to grow single-crystal, stress-free InGaAs, you have to use a substrate with a similar lattice constant (you have some flexibility by adjusting the quantity of indium, which tweaks the lattice). Three available III-V substrates are GaAs, InAs, and InP, the last of which is the most typical. None of them is silicon.
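That "adjusting the quantity of indium" knob can be quantified with Vegard's law, a simple linear interpolation of the binary lattice constants. The snippet below estimates the indium fraction x in In(x)Ga(1-x)As that lattice-matches InP; the well-known answer is roughly x = 0.53.

```python
# Vegard's-law estimate (linear interpolation of lattice constants) of the
# indium fraction x in In(x)Ga(1-x)As that lattice-matches an InP substrate.
# Lattice constants in angstroms (standard room-temperature values).

A_GAAS = 5.6533   # GaAs lattice constant
A_INAS = 6.0583   # InAs lattice constant
A_INP  = 5.8687   # InP lattice constant (the target substrate)

def ingaas_lattice(x):
    """Lattice constant of In(x)Ga(1-x)As by Vegard's law."""
    return x * A_INAS + (1 - x) * A_GAAS

# Solve ingaas_lattice(x) == A_INP for x:
x_match = (A_INP - A_GAAS) / (A_INAS - A_GAAS)
print(round(x_match, 3))   # prints 0.532
```

Stray from that composition and the mismatch strain builds, which is exactly why growing InGaAs directly on silicon (mismatch of several percent) requires the buffer layers described below.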
Let’s say you want a semiconductor-over-insulator configuration using InGaAs instead of silicon (InGaAs-oI instead of SoI). You want a thin layer of pure InGaAs with an abrupt stop at the oxide. How are you going to do that?
A team from the University of Tokyo, JST-CREST, and IntelliEPI came up with a wafer-bonding approach that uses only silicon substrates. The main difference from a traditional SoI wafer (well, aside from the InGaAs) is that the buried oxide (BOX) isn’t SiO2; it’s Al2O3.
The approach starts with the "donor" wafer, growing InGaAs on silicon. But… you can't do that directly because of the lattice issue. So they laid down a couple of "buffer" layers instead to ease between the lattices and keep the stresses low enough to allow single-crystal InGaAs to grow: GaAs, followed by InAlAs, topped with a layer of InGaAs.
A layer of oxide – Al2O3 – was then laid over the top. Yeah, you’ve pretty much got a bunch of layers of every combination of indium, gallium, arsenic, and aluminum in there.
Meanwhile, over on another silicon wafer, another layer of Al2O3 is laid down. The two oxide tops are polished, and then they are mated face-to-face. Then all of the layers of the donor wafer except the InGaAs are etched away. What you're left with is a top layer of InGaAs ending abruptly at the BOX edge. No namby-pamby buffer layers left.
Electron mobility in the resulting layer was 1,700 cm²/V·s, indicating low defectivity and high quality.
Note that the economics here come not just from the silicon material per se, but also from the fact that this provides a scaling path to 300-mm wafers, which aren’t available for more exotic substrates.
You can find their report (behind a paywall) here.
A separate team from UC San Diego, Nanyang Technological University in Singapore, and Los Alamos Labs also did some InGaAs work involving wafer flipping and bonding, published earlier this year. They used NiSi to effect the bonding. Their BOX layer was SiO2 (with a thin HfO2 buffer to the InGaAs layer). But, critically, the donor wafer was InP, not silicon.
You can find that full report here.