Aug 20, 2015

Why Nigerian Bank Scams Are So Obvious

posted by Jim Turley

The old "Nigerian bank scam" is so laughably obvious that nobody could possibly fall for it anymore, right? Not so, and some new research suggests that its cheesiness is actually deliberate.

The plaintive appeal to wire money to a certain displaced Nigerian prince is one of the oldest tricks in the book. It's so obviously bogus that it's become the archetype of low-end spam. Everybody's seen it a hundred times, and even the dumbest spam filter can spot it a mile off. The very presence of the word "Nigeria" sets off alarms. Can't the stupid spammers come up with even a little improvement on the old formula?

Nope, and for good reason. In a paper from Microsoft Research, Cormac Herley explains that the scam's obviousness is deliberate. It's carefully designed to appeal to only the most gullible segment of the population. Unlike most other types of spam or phishing scams, he explains, the Nigerian bank scam requires the scammer to put in some effort to reel in his victims. Thus, it's in his best interest to hook only the dumbest and most gullible targets and avoid the (relatively) smart ones. He doesn't want to waste effort luring potential victims who might later see through his scheme and alert the authorities – or simply stop replying. Only the dumbest need apply.

"Since gullibility is unobservable, the best strategy is to get those who possess this quality to self-identify... The goal of the email is not so much to attract viable users as to repel the non-viable ones, who greatly outnumber them. Failure to repel all but a tiny fraction of non-viable users will make the scheme unprofitable."

So count yourself lucky that you recognize the scam when you see it. And stand in wonder at those who don't.

Tags: email, scam, malware
Aug 20, 2015

mCube’s New Low-Power Accel: What It Is and Isn’t

posted by Bryon Moyer

mCube made more noise recently with their announcement of a very small, low-power accelerometer. There were a number of aspects to the release; some intriguing, some… less so.

Let’s start with intrigue. The whole focus here is on a small device that can be used in space-constrained, power-stingy applications – like wearables. Obviously space is critical in any such device, but they point out that flexible circuit boards can enable more… well… flexible shape designs. And, while the accelerometer isn’t itself flexible, the closer you can come to an infinitesimal point on a flexible board, the less stress the solder joints see when the board bends – and the more likely your connections are to remain intact. So, on flex boards, small=reliable.

They get the size by stacking a MEMS wafer with through-silicon vias (TSVs) over a CMOS wafer (all of which is then garnished with a cap wafer). This means that bond pads are needed only for actual connections to the outside world, not for intra-package die-to-die connections, which can take a lot of space.


Cost is also mitigated by using an old process with fully depreciated equipment. Right now, they’re at 180 nm; they could go to 150 without spiking the cost curve. In addition, all of the steps – from building the different wafers to bonding them together – are done in a single fab. That’s in contrast to other processes, where wafers have to be bundled up and shipped to different fabs for different parts of the flow.

They’ve also built in a couple of key application-oriented features intended to go easy on battery life. First, you can tune the sample rate – fast (a couple thousand samples per second) for tablets and phones that need to be responsive enough for games, slower (400 samples/s) for wearables. Second, they have power modes: a normal mode at 4.7 µA (50 Hz), a single-sample mode at 0.9 µA (25 Hz), and a “sniff mode” at 0.6 µA (6 Hz).

Sniff mode monitors for activity, sending an interrupt when detected. The threshold for what constitutes “active” can be tuned to suit the application.
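
To get a feel for what those currents buy, here's a back-of-the-envelope sketch in Python. The currents and data rates are the ones quoted above; everything else – the driver-style calls, the wake threshold, the coin-cell capacity – is hypothetical, since the announcement doesn't spell out the actual API:

```python
# The three power modes quoted above: (current in µA, data rate in Hz).
MODES = {
    "normal": (4.7, 50),
    "single_sample": (0.9, 25),
    "sniff": (0.6, 6),
}

def years_on_coin_cell(mode, capacity_mah=220):
    """Rough battery life on a CR2032-class cell, sensor draw only."""
    current_ua, _ = MODES[mode]
    hours = capacity_mah * 1000 / current_ua
    return hours / (24 * 365)

print(f"sniff mode alone: ~{years_on_coin_cell('sniff'):.0f} years")
# ~42 years -- the cell's shelf life runs out long before the sensor does.

# Hypothetical driver usage (invented names, not mCube's API): park in
# sniff mode and let the part interrupt the host on motion.
# accel.set_mode("sniff")
# accel.set_wake_threshold(milli_g=100)  # tuned per application
# accel.on_motion(lambda: accel.set_mode("normal"))
```

The point of sniff mode, of course, is that the rest of the system can sleep too, waking only on the interrupt.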

So, functionally, this seems to compete pretty well. Which is really all that should matter. The less intriguing bits have to do with the marketing and what feels like playing a little loose with terminology. Any good marketer knows that it’s great if you can carve out for yourself a new market or “category” so that you have no competition. Problem is, too many folks have read that in their business books, and try it too often.

Here, mCube is trying to define the “IoMT” – Internet of Moving Things – as a separate thing. This suggests that, somehow, items with IMUs constitute this separate class of system. Sorry, but it just doesn’t work for me.

A little more worrisome is their use of the word “monolithic.” As in, they’re claiming a monolithic solution. First, “monolithic” literally means “from one stone.” This is not from one stone – it’s from three wafers. Monolithic would be if the MEMS and CMOS were fabricated out of the same wafer. (I won’t quibble about the cap wafer.)

They even use this to distinguish themselves from InvenSense, who uses what they call a “stacked” approach. They say that this distinction is significant enough to define the end of “Sensors 2.0” and the beginning of “Sensors 3.0.” Again, more new categories. Again, not working for me.

The only real difference here from InvenSense is that InvenSense inverts the MEMS wafer and bonds face-to-face (and does this in some other fab, by implication). mCube stacks bottom-to-top, connecting with TSVs. That has some benefits – don’t get me wrong – but it doesn’t feel to me like the revolutionary birth of a new category.

OK, kvetching over. You can find more information about the tiny new mCube accelerometer in their announcement.

Aug 18, 2015

Two Kinds of IoT Fog

posted by Bryon Moyer

We’ve heard about the role of the Cloud in the Internet of Things (IoT). It’s analytics and other decision-making that happens in some remote server farm somewhere to serve some “edge-node” device connected over the internet. And you’ve probably heard of the variant on that called the “fog,” where some of that computing is done on a machine local to the edge node, reducing communication traffic and latency.

But… did you know that there are two flavors of fog? And that this actually has an analog in the IoT?

San Francisco is famous as a foggy city, but, unless it’s really bad, the fog doesn’t actually hit the ground. It’s generated by the “marine layer,” coming in off of the cold ocean waters. It’s the West Coast version of the East Coast’s humidity – with no warm Gulf Stream. Newcomers will sometimes look up during a typical San Fran foggy day and wonder, “Why do you call this fog? It’s just cloudy.” (Until you get to a hill or the Outer Richmond, anyway.)

What most people are used to is ground fog (also locally called tule – “too-lee” – fog, named after a prevalent form of bulrush in the local delta from which the fog might seem to arise). Because ground fog originates with moisture on the ground, it never seems simply cloudy. It’s always all the way down; the only question is how high it rises. (And how thick it is…)

So on the one hand, we have a type of fog that might appear to be higher up, with a tendency to descend towards the ground, versus one that starts at the ground and then rises up. This distinction came to me based on a discussion with Olea Sensor Networks during the June Sensors Expo. (They came up in my prior discussion on interoperability.)

They write custom analytics code for IoT devices. In the classic IoT model, such analytics would be performed in the Cloud – with perhaps some offloading into a local machine. This descent of analytics from on high might resemble the San Francisco marine-layer type of fog, where it mostly seems like clouds until it descends.

But Olea noted that they mostly don’t see analytics happening in the Cloud – at least not so far. The analytics they write execute on the edge node devices themselves – and perhaps in other local servers and gateways. But definitely on the local side of things, not the Cloud. So here the fog is rising from the ground up.
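
In code, that ground-up pattern looks roughly like the sketch below. It’s my own generic illustration of edge-side analytics – a simple anomaly detector – not Olea’s actual code or API:

```python
# Generic edge-node analytics sketch (illustrative, not Olea's code):
# the raw samples stay on the device; only event records go upstream.

from statistics import mean, stdev

WINDOW = 50          # samples kept for the local model of "normal"
THRESHOLD_SIGMA = 3  # how far from normal counts as an event

window = []

def on_sample(value, publish):
    """Process one sensor reading locally; publish only anomalies."""
    window.append(value)
    if len(window) > WINDOW:
        window.pop(0)
    if len(window) < WINDOW:
        return  # still learning what normal looks like
    mu, sigma = mean(window), stdev(window)
    if sigma and abs(value - mu) > THRESHOLD_SIGMA * sigma:
        # Only this tiny record crosses the network.
        publish({"event": "anomaly", "value": value, "mean": mu})
```

In a real deployment, publish would wrap whatever transport the gateway speaks, but the point stands: the heavy lifting – and the raw data – never leaves the local side.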


Of course, as I’ve noted before, we’re in early stages of IoT build-out, with many folks struggling simply to make their phones work as remote controls – never mind the analytics. So this could be a transitory phase. But where you place the analytics has obvious implications for the resources needed on your edge-node device. Which has implications for cost.

What this also suggests is that current analytics functions are relying solely on data from the one device or perhaps from a few local devices that have access to the same server or gateway. Which probably also reflects the youth of the IoT.

When more architects take advantage of data available only through the internet – things like social media feeds, perhaps map feeds, etc. – and work it into the analytics, the Cloud may become the better place to bring all of that together. Likewise when combining data from edge nodes that are not collocated.

At which point, some tornado will come along and suck it all up into the Cloud.


[Editor’s note: Updated to correct company name from Olea Sensors to Olea Sensor Networks.]
