posted by Bryon Moyer
Google is known for engaging broadly across a range of seemingly unrelated projects. Whether glasses or driverless cars or even same-day delivery, it’s as if they need to be everywhere so that, when something happens, they’re a part of it.
But some of these different projects appear to be coming together a bit more with yet another service that’s about to roll out: Google Stuff. Unlike their prior efforts, however, which tend to virtualize real things, this offering comes full circle, applying much of what they’ve learned in the data world to everyday items.
The idea is that they are building a number of giant warehouses. When you sign up for the service, you get a real live human agent who comes to your house. They start by observing how you live, what you do, and, in particular, which things you use most frequently. Based on this, they will take almost all of your stuff and move it to their warehouse. They’ll leave a cache of your most commonly needed items at your house; everything else goes.
When you need something, your agent will arrange for quick delivery (same-day for the Bay Area at this point) from the warehouse. If the agent notices changes in your lifestyle or major life events like a baby coming, then the cache may change; they’re working on algorithms to best gauge what you need at any time, with a goal of your not noticing that you don’t have all your stuff right there in the house.
When grocery shopping, for instance, you’ll do exactly what you’ve been doing, except that, at checkout, you won’t take the groceries to your car: rather than your having to haul the stuff a half mile to your house, your agent will have it shipped to the warehouse a mere thousand miles away, and then the specific things you need will be delivered when you need them based on a “metering out” algorithm determined by your past practices.
In the pilot cases, this has worked pretty well except in situations where people are, for example, buying extra food and drink for a party. The algorithm can’t really tell this, so it just assumes you’re stocking up and continues metering at the normal rate. They’re trying to improve this based on a few party disasters where none of the food or drink showed up.
It gets a bit silly if you buy something for immediate use, like a sweatshirt. You still need to ship it to the warehouse, from which it will immediately be shipped back. There doesn’t seem to be any way to avoid that extra round trip.
The touted benefit of this is that we can all have much smaller houses. We will no longer need space for all of our stuff; Google will have it. We can also get much better fire insurance rates because our stuff won’t get burned up in a house fire; it’s protected in the warehouse.
There are some obvious concerns, like what happens if there’s a problem accessing the warehouse or if your agent gets sick. At present, unless you have a complete connection to the warehouse, you simply can’t get your stuff (unless what you want is fortuitously cached locally).
It also sounds very labor-intensive, what with all the agents and truck drivers moving stuff around. But that aspect of the model is temporary. The driverless vehicle project is partly designed to remove the humans from delivery, using autonomous trucks. And the agents themselves are simply prototypes for the robotic agents that will ultimately be able to handle the job more reliably without, for example, getting sick and threatening your access to your stuff.
And what does Google get out of this? There have been rumors amongst the pilot-liners that Google actually makes money on the side by loaning out your stuff, particularly the things you don’t use often. There’s even talk of a secret “pack-rat” algorithm that can figure out which stuff exists merely for the thrill of acquisition, meaning the owner will never ask for it again; such stuff may be sold off.
It’s hard to tell whether these tales are true, since there is no user-accessible accounting of transactions. If something goes into the warehouse, you can tell that it’s there; but if it disappears, there is no remaining evidence that it was ever there. Internal logs presumably exist: you may not be able to get a record of an item going in, but law enforcement can (creating an issue for drug stashes that suddenly end up in the warehouse). Google has declined to comment on these rumors.
But the big win for Google is a separate derivative product, Google Lifestyles, a service made available to advertisers and vendors, who will receive a daily report of everything you do and all transfers to and from your warehouse section. This is eagerly anticipated by companies who will finally be able to see who uses their competitors’ products or who doesn’t use their products at all. This is expected to be the biggest revenue generator for Google from this entire project.
There’s always the question of user control, but like most things Google, you hand control to them. You can’t partially use the service; once you’re signed up, you can’t touch any of your stuff. They completely take over. Of course, there’s no live help or customer service (can you imagine having to staff that up for something this big?). So it has that classic air of Google mystery, which most people deal with simply by resigning themselves to the fact that they’re no longer in charge of their own stuff and go back to watching World’s Funniest Videos.
Rollout of Google Stuff will happen in phases over the next six months.
posted by Bryon Moyer
I’ll round out the last of the things that caught my attention at this year’s ISSCC with a proposal and implementation of an AC-biased microphone. The motivation is the projection that the biasing resistor for a traditional DC approach will head into ridiculously high territory – teraohms and higher.
The team, from NXP and Delft University, lists a number of problems that this causes.
- The connection between the MEMS die and the ASIC can easily pick up stray electrical noise due to its high impedance, meaning expensive packaging is required to shield this node.
- A poly resistor of this value would be enormous, so active devices biased below their turn-on voltages are used instead. But leakage currents from neighboring ESD structures can find their way through these devices, with the ultimate result being increased noise.
- They say that chopping can’t be used to reduce flicker noise because of the extra input current the switching would cause; increased transistor sizes are needed instead.
- The on-chip bias generator will typically be a charge pump, and ripple noise could push the shut-off active devices used as resistors to turn on slightly; therefore large filtering caps are needed.
Their approach is differential, and they modulate the signal while cancelling out the carrier using cross-coupling caps; there are, in fact, three caps that have to be tuned to match the microphone sensing cap, and they have an 11-bit register for each of them.
Critically, feedback resistors are used to set the common-mode bias level; that and the fact that their contribution to in-band noise is now low due to the modulation mean that resistor values can be brought back down well below a gigaohm.
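The modulate-and-cancel idea can be illustrated numerically. The sketch below is my own toy simplification, not the NXP/Delft circuit: a sense capacitance that varies with sound is driven by an AC carrier, amplitude-modulating the audio; a cancellation cap tuned to the nominal sense value (the role of those 11-bit registers) subtracts the carrier, and synchronous demodulation plus a low-pass filter recovers the audio.

```python
import numpy as np

# Toy sketch of AC biasing with carrier cancellation (my simplification,
# not the actual NXP/Delft circuit; all values are illustrative).

fs = 1_000_000                  # simulation sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)  # 10 ms of signal
f_carrier = 100_000             # AC bias carrier, Hz
f_audio = 1_000                 # test tone, Hz

C0 = 1.0                                     # nominal sense capacitance (normalized)
audio = 0.01 * np.sin(2 * np.pi * f_audio * t)  # small capacitance variation from sound
c_mems = C0 + audio

carrier = np.sin(2 * np.pi * f_carrier * t)
node = c_mems * carrier          # charge ∝ C·V: large carrier plus audio sidebands

c_cancel = C0                    # cancellation cap tuned to match the sense cap
cancelled = node - c_cancel * carrier   # carrier removed; only sidebands remain

# Synchronous demodulation, then a crude low-pass (moving average
# over one carrier period) to recover the audio at half amplitude.
demod = cancelled * carrier
n = fs // f_carrier
recovered = np.convolve(demod, np.ones(n) / n, mode="same")
```

Because the cancellation cap is matched to the nominal sense cap, the big carrier term drops out before amplification, which is what lets the front end tolerate much smaller bias resistors.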
While you might expect the increased complexity to make the ASIC larger, in fact quite the reverse is true (presumably due to smaller components): the ASIC is 1/12 the size of the current state of the art. Expensive shielding is also no longer required to reject external noise.
They weren’t overwhelmed by the SNR they achieved, in the 58–60-dB range, but they commented that, with some focus, they could easily get to 64–65 dB.
For those of you with the proceedings, you can get much more detail in session 22.2.
posted by Bryon Moyer
Not long ago, in our coverage of 3D vision, we discussed time-of-flight as one of the approaches to gauging distance. Even though it and the other 3D vision technologies are gunning for low-cost applications, it’s easy, at this point, to view them as exotic works in progress.
Well, time of flight is now being put to use for the most prosaic of duties: making sure your cheek doesn’t accidentally hang up on you.
Of course, our phones already have this feature via their proximity sensor, installed specifically for this purpose. It detects when the phone is near the face and shuts down the touchscreen, both saving power and rendering it immune to the random input it would otherwise get as it hit your cheek now and again.
As STMicroelectronics sees it, however, the existing way of judging proximity leaves something to be desired. Right now, it’s a simple process of sending light out and measuring how much gets reflected back, a method that can depend on a lot of factors besides proximity. How often such sensors fail isn’t clear to me, but ST has come forward with a new approach: using time of flight to measure how long it takes the light (regardless of the quantity of light) to make a round trip.
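The arithmetic behind the approach is the textbook time-of-flight relation (illustrative only; this is not ST’s firmware): distance is half the round-trip time multiplied by the speed of light, independent of how much light comes back.

```python
# Textbook time-of-flight relation: the emitter fires an IR pulse, the
# detector timestamps its return, and distance = c * round_trip / 2.
# Illustrative sketch only, not ST's implementation.

C = 299_792_458.0  # speed of light, m/s

def tof_distance_mm(round_trip_s: float) -> float:
    """Target distance in millimeters from a measured round-trip time in seconds."""
    return C * round_trip_s / 2.0 * 1000.0

# A cheek 50 mm away returns the pulse in roughly a third of a nanosecond.
print(round(tof_distance_mm(3.336e-10), 1))  # ≈ 50.0 mm
```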
They do this by co-packaging an IR LED emitter, an “ultra-fast” light detector, and the circuitry needed to calculate the distance from the measurements. It also contains a wide-dynamic-range ambient light sensor.
Is all of that needed just to keep your phone from getting too cheeky? Well, it’s clear that that’s simply the “marquee” function they address. On the assumption that you can do a lot more interesting stuff if you can measure with reasonable accuracy how far away something is (as opposed to a more binary near/far assessment), they’re betting that phone makers will want to include it so that both they and enterprising app writers will come up with all kinds of interesting new things to do. It changes the class of apps it can manage from digital to analog (in the sense I defined them when discussing accelerometer applications).
For such other applications, they’re targeting a distance range of up to 100 mm (about 4 inches for those of us who grew up with non-metric intuitive visualization). They think it will work beyond that, but they’re not committing to that at this time.
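A back-of-the-envelope calculation (mine, not from ST’s release) shows why the detector has to be “ultra-fast”: even at the full 100-mm range, the round trip takes well under a nanosecond.

```python
# Round-trip timing implied by the 100-mm range (my own arithmetic,
# not ST's published numbers).

C = 299_792_458.0  # speed of light, m/s

max_range_m = 0.100                 # the specified 100-mm range
round_trip_s = 2 * max_range_m / C  # out and back
print(round_trip_s)                 # ≈ 6.67e-10 s, about two-thirds of a ns

# Resolving ~1 mm of distance means resolving ~6.7 ps of time.
per_mm_s = 2 * 0.001 / C
print(per_mm_s)                     # ≈ 6.7e-12 s
```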
You can find more info in their release.