posted by Bryon Moyer
In our coverage of sensors, we’ve seen increasing levels of abstraction as microcontrollers in or near the sensors handle the hard labor of extracting high-level information from low-level data. These are the hipster sensors that go on the wearables that go on your person for a month and then go on your nightstand.
Today, however, we’re going to get grittier and more obscure. Some sensors have more of a blue-collar feel to them, and I discussed two examples with Microchip back at Sensors Expo.
The first is a current sensor. Specifically, a “high-side” current sensor, meaning it goes in series with the upper power supply rail (not the ground rail). It can report current, voltage, or power. The unusual thing about this unit (the PAC1921) is that it provides both analog and digital outputs. “Why?” you may ask…
So much has moved to digital because, well, data can be provided in an orderly fashion, queried as needed by inquiring processors. FIFOs and advanced processing are available in the digital realm, and if you’re maintaining a history of power supply performance, digital is a great way to keep that tally.
Digital does, however, introduce latency. If you’re sensing the current and using the result in your power management algorithm, a bit of latency means that… oh, say, the voltage gets too high and you measure that and then digitize it and then put it someplace for a processor to find and then – oh, now look at that mess! Analog works much more quickly in a control loop. So here you get both.
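The digital side of a part like this ultimately hands firmware a raw ADC code that must be scaled into amps. As a hedged illustration of that step (the 10-bit resolution and 100 mV full-scale sense range here are placeholder assumptions for the sketch, not values taken from the PAC1921 datasheet), the conversion might look like:

```python
def raw_to_current(raw: int, r_sense_ohms: float,
                   full_scale_mv: float = 100.0, bits: int = 10) -> float:
    """Convert a raw high-side sense-ADC code to amps.

    Assumes a hypothetical 10-bit result register and a 100 mV
    full-scale sense-voltage range; a real design would take both
    from the device datasheet.
    """
    # Scale the code to a sense voltage across the shunt resistor...
    v_sense_mv = raw / ((1 << bits) - 1) * full_scale_mv
    # ...then apply Ohm's law to recover the load current.
    return (v_sense_mv / 1000.0) / r_sense_ohms


# Example: full-scale code across a 10 mOhm shunt -> 10 A
print(raw_to_current(1023, 0.01))
```

Every step of that scaling happens after the sample is taken, which is exactly the latency the analog output path avoids.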
(Image courtesy Microchip)
Then, off to a completely different unit: a temperature sensor. Well, actually, not the sensor itself, but the wherewithal to calculate temperature from a thermocouple.
Apparently our penchant for integration and abstraction has lagged in this corner of the world. While thermocouples can generate a voltage based on the temperature, calculating the precise temperature based on that voltage has been a discrete affair (not to be confused with a discreet affair). It requires lots of analog circuitry to measure the microvolt signal (typically done at a “cold” junction, away from the actual heat), digitize it, and then perform the math.
That math reflects the fact that thermocouples have a non-linear relationship between their output voltage and the temperature. And the details vary by thermocouple type. So this calculation is typically done in an external microcontroller.
This would make the new MCP9600 the first device fully integrated with all the bits needed to convert volts (from the thermocouple) into degrees Celsius. They refer to it as a thermocouple-conditioning IC, and it works for a wide range of thermocouple types (K, J, T, N, S, E, B and R for those of you keeping score).
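To get a feel for the conversion a part like the MCP9600 performs internally, here’s a minimal sketch of the voltage-to-temperature step using piecewise-linear interpolation over a few approximate type-K points. This is illustrative only: the table values are rounded, a real design would use the NIST ITS-90 reference tables or inverse polynomials for the specific thermocouple type, and cold-junction compensation (assumed here to be 0 °C) must be added on top.

```python
# Approximate type-K points: (thermocouple voltage in mV, temperature in degC).
# Rounded illustrative values; real designs use the NIST ITS-90 tables.
TYPE_K_MV = [(0.0, 0.0), (4.096, 100.0), (8.138, 200.0),
             (12.209, 300.0), (16.397, 400.0)]


def thermocouple_degc(mv: float) -> float:
    """Estimate hot-junction temperature from a type-K voltage.

    Piecewise-linear interpolation between table points; the cold
    junction is assumed to sit at 0 degC for simplicity.
    """
    pts = TYPE_K_MV
    if not pts[0][0] <= mv <= pts[-1][0]:
        raise ValueError("voltage outside table range")
    # Walk adjacent table pairs and interpolate within the right segment.
    for (v0, t0), (v1, t1) in zip(pts, pts[1:]):
        if mv <= v1:
            return t0 + (mv - v0) * (t1 - t0) / (v1 - v0)
```

The non-linearity the article mentions is why the table (or polynomial) is needed at all: a single scale factor would be off by several degrees across the range, and each thermocouple type needs its own curve.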
(Image courtesy Microchip)
posted by Bryon Moyer
You may be aware that Intel went through a layoff recently. Whatever you think of the merits of the layoff itself, it would be hard to argue that it was executed smoothly. But, to hear some tell it, the result has left something of a crater in the morale and confidence of at least some of the surviving workforce. And this is a group that has seen many layoffs in the past. So what was different here? Why has there been such unusual attention and even discussion of lawsuits?
I was able to get some idea of what happened and how it happened via input intended to be anonymous. For those of you wondering what all the fuss is about, it gives some color to what’s going on. To be sure, this is a one-sided story: Intel has steadfastly declined comment (well, except the CEO – more on that in a minute). I offered up a “fact check” of some of the critical points that follow; it was respectfully declined.
So, with that caveat, let’s start with the obvious: layoffs suck. They suck for everyone around them. I’ve personally experienced the whoosh of a near-miss as well as the direct blast myself. Whether it was me or the guy next to me, neither of us enjoyed it.
Silicon Valley has historically grown by leaps and layoffs. Humans are expensive assets, so at the slightest sign of a business sniffle, it can be tempting to offboard some of this burden. After the 2008 meltdown, it almost felt like some companies that were doing well still had to lay people off just so that they didn’t look to the shareholders like they weren’t minding the store. “Everyone else is laying off, so we need to also. The remaining people can just work harder.”
So layoffs are a well-established part of Silicon Valley culture; nobody likes them, but they happen, and we know that. There are also rules when it comes to layoffs: above a certain size, you have to make them public. And you need to give a certain amount of notice to the people you’re laying off. Neither of those happened here: the layoff was made public by a leak, and Intel notified folks only 30 days ahead, compensating with two months of extra severance (and the layees-off didn’t have to sign anything to get that).
But that’s just iffy execution; that’s not the main problem. The first and biggest problem is the sense that the rules got changed for convenience (with suspicion of ageism – more on that in a mo). Nothing will rattle a workforce like finding out that the rules have changed – especially when it comes to compensation and employment.
To understand that, we need to dig into how compensation happens and how the layoff was implemented. Intel has a review process (“focal”) like any other company. Until recently, there were five categories: Outstanding, Exceeds expectations, Satisfactory, Below expectations, and Improvement required. This year, the “Below expectations” category disappeared: if you weren’t Satisfactory or better, then you were at the lowest level. A “low performer” is defined in their handbook as someone who got “below expectations” or “improvement required” 2 times out of the past 3 years.
Compensation apparently isn’t strictly tied to the review level, but obviously, consistency between the review and the compensation makes for a single, clear message. A good review with a bad reward (or vice versa) makes for a confusing message. On the other hand, as any manager knows, employees at the top of their pay range, as well as limited budgets, can make it hard to put the money where the mouth is. It helps to have multiple tools for compensation.
Intel does have multiple tools. There are three components to the review cycle: the pay raise, a bonus target, and restricted shares. Options are given only to higher-level management; others get shares outright (the restrictions have to do with vesting and such).
And here’s where it gets complicated. Older employees are likely to be closer to the top of their pay range (simply by virtue of having gone through more review cycles). In addition, employees closer to retirement are less likely to benefit from long-term growth of stock. They’re getting into cash income territory – just like rebalancing portfolios from growth to income-earning investments.
So managers could, as a way of managing their budgets and allocating rewards, give their older employees more bonus cash and less in the way of stock. Younger employees might get the reverse. It wasn’t an official policy; it was at the discretion of each manager.
The point here is that employees were set up to believe that their measured reward contained three components, not just one.
So that’s how compensation is (or was) expected to work. Then came the layoff, and the criteria for being laid off were three:
- Current or repeat low performer
- Got an “improvement required” during the past year
- Were low on their stock grants
Notice that last one: the stock allocation was used as a proxy for the entire review. In particular, folks that got bonus instead of stock weren’t recognized for the bonus and were categorized as poor performers. This is where the “changing the rules” sentiment comes from. Essentially, a three-legged stool had two of the legs cut off. Some managers were apparently able to argue on behalf of well-performing employees that had fallen afoul of the stock thing, but those were the exception, not the rule.
So the first problem here is a perception that the rules changed. But there’s a second, more subtle issue. Because there was a tendency to bias older worker compensation against stock and towards bonus, there is the sense that this was done to bias the layoff against older workers. This is part of the rattling of legal swords.
Which brings us to another rule change. In a June 18th informational session, three days after the layoff was announced, the people being laid off were told that, after a two-day cooling off period, they could apply as contractors. Normally the wait was 12 months, so this felt like a significant concession.
But when some folks tried to apply as contractors, they got pushback. Upon digging, it appears to be a nuanced thing. Intel cannot tell an outside consulting agency whom they can and cannot hire or place, but the business units can decide whom they want to accept, and they’ve barred those affected by this layoff. This hasn’t been widely communicated; only those that tried to apply as contractors are likely to be aware. So its contribution to any malaise would have been through rumor and internal blog.
So part 1 of why there is concern in the remaining workforce is the uncertainty of changing rules.
Part 2 was the fact that Intel at first tried to deny publicly that this was happening. That is, until the relevant memos got leaked to the press. That made it hard to keep things under wraps.
Which led to part 3: the CEO saying publicly that this was all about meritocracy. In so many words, he publicly announced that all of the laid-off engineers were bad employees and got what they deserved. This is not to say that all of those laid off were exemplary; there presumably were low performers in there. But the executive comments felt to me, as an outsider, like he was pushing faces into the mud.
So, in review:
- Layoffs suck. Always.
- It’s worse if you’ve been playing by certain rules, and then those rules are changed to your disadvantage with no opportunity for appeal. That hurts credibility long-term.
- It’s worse yet if your company publicly denies it’s doing what it’s doing. Another credibility hit.
- And it’s worse yet when, forced to admit what’s happening, the CEO publicly denounces the people just laid off.
That, to my understanding, is why this layoff has caused so much commotion. I don’t claim to speak for all Intel employees, so feel free to comment below if you feel otherwise. (Or even if you agree…)
posted by Bryon Moyer
WiFi, Bluetooth, and Zigbee are vying for primacy, and none of them is likely to disappear anytime soon (if ever). WiFi is the granddaddy, and arguably the most familiar. So, of course, WiFi is going onto all kinds of stuff. We see plenty of WiFi modules and chips, but CEVA suggests that you can save space if you integrate the WiFi directly into your SoC. (Assuming you’re doing an SoC…)
Sounds straightforward, but if you dig in just a little, an obvious question comes up: which WiFi? If you’re going the route of the PC, then you put on the most advanced version that has some level of support out there. Most likely, that’s still 802.11n, although (from a quick web scan) more expensive routers are now supporting 802.11ac.
But everything has a cost, and performance costs power, if nothing else. If you’re plugged into a wall, then wasted power is just wasted power. If you’re on a battery, on the other hand, then wasted power is a premature dead battery. So now there’s a choice to be made.
Another implication of the WiFi choice is the antenna arrangement. The newer versions support multiple-input/multiple-output (MIMO) configurations, which establish multiple “spatial streams.” You recognize these by the multiple antennas on a router (antenna configurations are typically written as AxB, where A is the number of antennas on the router and B is the number on the end station).
CEVA recently announced their RivieraWaves WiFi IP offering, and it’s divided three ways depending on how the WiFi will be used.
- For power-sensitive applications that don’t need the speed – like Internet of Things (IoT) edge nodes filing their sensor data reports – they offer 802.11n in a 1x1 configuration, referring to it as their SENSE version.
- For devices that need to shuttle more data around – surveillance, smartphones, wall-plugged smart-home IoT nodes – they bump up to 802.11ac in 1x1 or 2x2 configurations. They call this their SURF option.
- For heavy data use – routers and infrastructure and such, scaling to hundreds of users – they have a third version that configures 802.11ac in a 4x4 arrangement, named the STREAM option.
(Image courtesy CEVA)
The upper and lower MAC components are agnostic as to which processor or operating system is in charge. The modem functionality can be configured either as optimized hardware or as a software-defined radio for integration into a multi-protocol platform. And, of course, CEVA says the designs have been tuned to minimize power consumption – especially for the lower-end devices.
You can find more detail in their announcement.