
Swarmed by Sensors

There are a lot of things about San Francisco that cause debate, but there’s one item everyone can agree on: Parking SUCKS.

Many a time I’ve driven from Silicon Valley to San Francisco (a 45-minute drive, if unimpeded), only to spend yet another 45 minutes looking for parking.

So imagine how incredible it would be if your GPS could point out where an available parking spot was and navigate you to it. (And, better yet, reserve it so some other schlub doesn’t slide in right as you approach?)

This is but one of the tantalizing scenarios posed at the recent MEPTEC MEMS Technology Symposium and the MEMS Business Forum. The concept of broad sensor networks, or “sensor swarms,” is fast becoming feasible, given the rapid pace of sensor evolution.

And the benefits can range widely. Industrial companies are looking to use sensors to monitor critical process parameters – temperatures, pressures, amount of lubricant, etc. – remotely over large distances to keep things at peak performance. Inhospitable regions like the Arctic or the ocean floor or the depths of a drilling well can yield information remotely, with environmental benefits or improved science or better safety – all in addition to more money made. And then there are the sensors we might wear on or inside us for any number of useful medical reasons.

Everyone develops technology with a benefit in mind. (Well, actually, that’s not always true… but it’s often true…). Almost no one goes out to create a fundamental commercial technology that’s inherently evil. It’s just that Evil has this way of hiding in Good’s coattails, and, even if it’s not totally invisible, it’s not really in the way, and the Good is so good that we really don’t want to pay attention to the downer dude that’s pointing out those devil’s horns that are poking out from behind the angel’s wings.

But, with this topic in particular, you can tell that there were more than a few people squirming in their seats. (Figuratively speaking.) Benefits aside, the thought of sensor swarms everywhere monitoring everything gives a lot of folks the creeps. And I don’t mean only the 80-year-old Luddites trying to cling to how things were done in their day. I also don’t mean skulking conspiracy theorists who can find an evil plot in a bowl of alphabet soup. I mean technologists like us, picturing ourselves in the world we’re creating.

The brother you want?

Between the things technology can do and the times we’re in, where you can justify pretty much anything by invoking the specter of terrorism, we’re in something of a transformative time. Ubiquitous sensors invoke fears of Big Brother. And yet that scenario is apparently not scary to some: as I’ve heard it told, the instrumentation of London with cameras everywhere was accompanied by posters using the Big Brother theme to reassure citizens, not frighten them. It reeks of the culmination of Orwell’s novel: “He loved Big Brother.”

It would seem to me that there are, roughly speaking, two kinds of sensor networks. There are the ones where sensors are around us informing us about ourselves and our environment. Like cameras. They exist apart from us and we are (or feel) powerless to influence them. Then there are the ones we control – like the app on our phone that can radio our location to a friend.

The former ones raise all kinds of questions about who controls the information – or who owns it. (Heck, if a company can patent a gene it merely found, then the concept of “ownership” can be twisted in all kinds of ways.) Facebook privacy policies don’t begin to approximate the dimensions of this question.

Even the latter sensors raise questions that aren’t so cut and dried. We tend to feel better about these kinds of sensors because it’s much more comforting to think that we can control what information we allow others to see. And it becomes much more acceptable to share information if we think it’s going to bring to life a world where everything revolves around us. Intel’s Sandhiprakash Bhide gave a presentation where he described us as being at an “inflection point” where we change from “humans molding around devices” to “devices molding around humans.” We will be able to corral all of this information to customize the world for ourselves.

It’s all about Me

But, at least at present, we are in a counter-customizing trend. Think back a decade or so, and everything was about personalization. People spent a lot of time writing custom HTML code to make their MySpace (remember them?) page look exactly the way they wanted it. Ringtones announced who you were; it was all about tuning technology so that you could make a statement to the world. Everything had all kinds of settings and options allowing you to see what you wanted to see where you wanted to see it on your screen.

Imagine doing that now with Facebook or Google+. You can’t. They set things up the way they want. And people complain. And they get over it. There is a much higher emphasis now on controlling behavior: not necessarily setting things up for your convenience, but forcing you to do things in a way that creates more clicks or shows more ads or does something that has a benefit for someone other than you. (Unless you still believe that ads are a feature.)

How might this play out in a sensor-swarm world? Well, here’s one scenario that occurred to me. We have great navigation technology, you might have created lots of online data indicating your food preferences, and future body-monitoring sensors might be able to broadcast when you get hungry. Ideally, your GPS system uses this information to look for restaurants serving the kind of food you like wherever you happen to be. This would be particularly cool if you’re on a road trip and don’t know the area. (I can’t tell you how many times I’ve spent a TON of time driving around a small town looking for something – anything – that’s not a typical fast-food joint.)

A system that’s set up around me and my desires would make this happen. (Technology permitting.) But, using the “behavior control” concept, I can just as easily see Google* making a deal with, oh, say, Sizzler* so that, rather than the things I want coming up on the GPS, instead I see the things they want me to want, meaning that the only thing in view is a Sizzler that may even take me out of my way. They’re not sending me there because of my preferences; instead, they know that I’m not a customer, and they intend to make me one. I may actually end up bypassing things I might prefer simply because I don’t know they’re there and because it’s not in the interests of the companies involved that I find out. Mom ‘n’ Pop places don’t carry much clout and are unlikely to show up if the “search results” aren’t neutral.
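To make the difference concrete, here’s a toy sketch of the two rankings described above – a “neutral” ranking driven by my preferences versus one where paid placement outweighs them. All the names, weights, and the scoring function are hypothetical illustrations of mine; this isn’t any real recommendation API.

```python
# Toy model: a restaurant's score is how well its cuisine matches the
# user's preferences, plus an optional bonus for sponsored listings.
# With sponsor_weight=0 the ranking is "neutral"; raise it and paid
# placement can override what the user actually wants.

def rank_restaurants(restaurants, user_prefs, sponsor_weight=0.0):
    def score(r):
        match = len(set(r["cuisines"]) & set(user_prefs)) / max(len(user_prefs), 1)
        return match + sponsor_weight * r.get("sponsored", 0)
    return sorted(restaurants, key=score, reverse=True)

places = [
    {"name": "Mom 'n' Pop Thai", "cuisines": ["thai"], "sponsored": 0},
    {"name": "Sizzler",          "cuisines": ["steak"], "sponsored": 1},
]

# Neutral ranking: the Thai place wins for a Thai-loving user.
print([p["name"] for p in rank_restaurants(places, ["thai"])])
# → ["Mom 'n' Pop Thai", 'Sizzler']

# Sponsored ranking: the paid listing jumps to the top.
print([p["name"] for p in rank_restaurants(places, ["thai"], sponsor_weight=2.0)])
# → ['Sizzler', "Mom 'n' Pop Thai"]
```

The point of the sketch is that the user sees the same kind of list either way – there’s nothing in the output that reveals whether `sponsor_weight` was zero or not.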

And this may happen without my realizing that this is what’s going on. After all, if I knew, then I might not use the service.

The appearance of choice

Then there’s the issue of choice. Mr. Bhide was careful to insist that no one has to provide information about themselves; they always have a choice as to whether or not to “turn on their transponder” (my words, not his). But it’s not that cut-and-dried.

The talk provided two compelling use cases of sensors on our bodies that illustrate some of the tradeoffs. One involves pedophiles, and it’s a tough example to use because no one much cares about the rights of pedophiles – but it puts the issue into stark relief. The benefit of sensors would be that a pedophile would wear one, and we would always know when one was in our midst. Many people with children would appreciate knowing that information.

But if this sensor-heavy world comes with the promise of control, then that says the pedophile has a choice not to broadcast – which of course would defeat the entire process. Now, yes, we could say that, as a society, if you transgress to the point of being deemed a pedophile, you give up your right to choice for the sake of others. But what this highlights is that sometimes the beneficiary of your data is not you – it’s someone else. Are you allowed to choose not to give them access to data about you that they may have paid for?

In a less incendiary example given by Mr. Bhide, let’s say that I wear a sensor that broadcasts my location. If I contract a rare and dangerous communicable disease, the CDC (or similar organization in any country) could retrace all my steps to help identify where I became infected. This could be of benefit to innumerable other people that might be able to use that information to avoid becoming infected themselves.

But that means that my every move is tracked somewhere. Who gets access to that? Who owns it? It may initially be done for health purposes, but, heck, since all that juicy data is sitting right there, being wasted on the 99.99% of people that never contract a rare communicable disease, why don’t we sell it to others in order to pay for all the expensive equipment we’ve had to buy (like the new, improved, internet-enabled machine that goes “ping”) to continue our medical research in the light of drastically reduced government funding? I might want to choose not to participate if I’m concerned about that – and yet, by opting out, I put more people at risk if I can’t help to point to the source of some disease I get.
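The retracing idea in the CDC example boils down to a simple join: given the patient’s timestamped location log, flag anyone whose own log puts them in the same place at roughly the same time. Here’s a minimal sketch of that idea; the data model, names, and one-hour window are all invented for illustration, not anything Mr. Bhide or the CDC actually described.

```python
# Sketch: find people whose location logs overlap a patient's log.
# Each log is a list of (place, datetime) visits.

from datetime import datetime, timedelta

def exposure_contacts(patient_log, other_logs, window=timedelta(hours=1)):
    """Return the names of people who were at the same place as the
    patient within `window` of the patient's visit."""
    contacts = set()
    for place, t in patient_log:
        for name, log in other_logs.items():
            for other_place, other_t in log:
                if place == other_place and abs(t - other_t) <= window:
                    contacts.add(name)
    return contacts

patient = [("cafe", datetime(2013, 6, 1, 9, 0)),
           ("gym",  datetime(2013, 6, 1, 18, 0))]
others = {
    "alice": [("cafe", datetime(2013, 6, 1, 9, 30))],  # overlaps at the cafe
    "bob":   [("gym",  datetime(2013, 6, 2, 18, 0))],  # gym, but a day later
}
print(exposure_contacts(patient, others))  # → {'alice'}
```

Notice what the sketch quietly assumes: `other_logs` – everyone’s complete movement history – already exists somewhere, queryable. That’s exactly the data pool whose ownership and resale the paragraph above is worrying about.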

This is already a debate in the smart meter arena, which has two angles: the dangers of wireless and the fact that the utility company could sell your detailed data to others who could then attempt to snoop on all your household activities. As a practical matter, we don’t have a choice here: the meters will go on (unless you fight really hard), and that’s that. Deal with it.

And this doesn’t take into account security and hacking issues.

Even where you get to make choices, it’s still not what you might think. Have you ever downloaded an app that requires you to broadcast some info? At least we’ve come as far as requiring notification and approval of what you’re sharing. But then you download some innocuous app that might need sharing of one or two things, and instead it comes up with a list of a half-dozen items that you have to allow it to share. Some make no sense. But they don’t explain why they’re needed, and you can’t opt out individually. It’s a package: take it or leave it. So the concept of choice is not well executed in that scenario.

I talked to Mr. Bhide about these questions, and he had two thoughts. One was simply that, whenever we get new technology like this, we go charging full steam ahead until something goes drastically awry, at which point we then start to think about more balanced approaches. This hasn’t happened yet; he seems confident that it eventually will and that trying to address these things before that point is not likely to be effective.

His other point – and this especially goes to the restaurant navigation example – is that many of the services we count on now are free. So, for example, Google might do a deal with Sizzler as a revenue source rather than charging me. Since I’m getting something for free, perhaps that’s not unreasonable. There’s no paid option for many of these services at this point, but if people get too fed up, perhaps the companies will say, “OK, we’ll be neutral in our searches if you pay for the results.”

That also makes sense for free phone apps – perhaps an option might be available to pay for a version that shares less data. But that gets a little sketchy too – I’m sure there are paid apps that share more than they need to. They could make the argument that I’m not paying enough – that the data sharing is subsidizing my use – but then anyone could use that argument about anything to justify sharing things for extra revenue on the side.

Show me the money

Which brings us to the economic side of this discussion. Yes, it might seem reasonable to pay for beneficial services. But, in many of the scenarios you might imagine – say, the location tracking one for disease tracing – you’re not really buying a service. In the worst example, the CDC could say, “Hey, you guys wanted to pay less taxes, so now we don’t get funded as much, so we need the extra cash we get by selling your info. If you want to opt out of that, then we’ll provide you that option for $5 per month.”

And that starts looking a lot like protection money or hush money: we’ve got your data, we would be giving up revenue not to sell it, so you need to compensate us if you won’t let us sell it.

Even for services we use, you have to wonder where the money will come from. We do so many things we never did before, and yet we’re not making tons more money as a result of it. In fact, many people are making less money than prior generations (especially per person, going back to when it was feasible for one average person to support a household). Think about communications: we spend so much more now than we did with simple black Ma Bell phones and free television. We spend much more in banking and other financial fees. There are many companies looking for ways to get a piece of the consumer money stack.

If we now have to start paying for Google and Yahoo and all the other things we’re doing for free – just to keep control over our information – then where will that money come from? Or will we all simply stop Googling because we are out of money? You might not think it’s a lot of money, but many of you probably make double or more the median US income (a tad under $50,000 in 2010).

I certainly don’t have the answers to all of this, and it’s a complicated set of questions – I don’t think there exists an easy answer. But, at least in forums like these, such discussions aren’t placed front and center because they muddy up the excitement of new technology. These downside questions were not part of the planned talks. It is good to see, therefore, that there are members of the audience who are thinking about these things and asking tough questions. Whether those discussions will inform policy remains to be seen – it may indeed take some disaster before we stop and think things through a bit more.

Oh, and just to be clear, and to put my money where my mouth is, I would be willing to shell out some reasonable amount of money for the ability to find a parking spot in SF in under 15 minutes. Not by broadcasting my location to everyone as a pre-condition of communicating with the parking sensor; not a ton of money; and not a monthly fee with a minimum one-year deal (I don’t go up that often), but something. How much is “reasonable”? Well… that would be telling. We all know that the first person to give a number ends up with the wrong end of the negotiating stick…

 

*To be clear, I’m not calling Google out on being evil (at least, not now), and I know of no arrangement between Google and Sizzler. I just pick them as two familiar names. At least in the US.

4 thoughts on “Swarmed by Sensors”

  1. The absolute best parking arrangement I’ve seen is at the airport in Munich, Germany. There’s a sensor over every parking space that lights a big green light if it’s empty. (No light if it’s occupied.) It’s delightfully easy to cruise through the parking garage looking for a green light on the ceiling.

    There’s also a big numeric display at each entrance to the parking garage that tells you how many empty spaces are inside. Very handy. It even recognizes motorcycles.

  2. More to your point, the idea of sensors everywhere does creep me out. Maybe it’s an age thing. Or at least, a cultural thing.

    People inherently dislike things that are different from how they were raised as children. We make our laws and customs based on how things are when we’re of law-making age. Anything that changes after that is hard to reconcile. If we had a culture that already assumed we were being watched, we probably wouldn’t mind. But our laws and customs all assume that our activities are private unless we do something to publicize them. Turning that assumption around will take a while.

  3. Personally I don’t think we ‘need’ more sensors. They are just a way of justifying making more technology and encroaching on people’s personal information.

    BTW, the issue with parking is plain and simple. We have too many cars on the road and no simple systems such as a ‘park and ride’ to take care of this. We have become self obsessed and as a result technology has been created for the person not the people.

    I am not anti-technology however I am pro ‘smart’ technology that benefits people as a whole. What happened to this mentality ??? We seemed to have lost our way.

