
Conspicuous Consumerism

CES Takes Over Vegas

The Consumer Electronics Show (CES) debuted in 1967 – shortly, it turns out, after Gordon Moore made the prognostication that has come to be known as “Moore’s Law.”  This week, in Las Vegas, CES is celebrating its 40th anniversary.  So how is the old show doing?  In those 40 years, the number of exhibitors has risen from 110 to over 2,700, and the show space has grown more than 11x.  The number of attendees has mushroomed to well over 100,000 (estimates are around 140,000 for this year’s show).  The show proudly proclaims itself the world’s largest consumer technology event.

Is this growth impressive?  Hardly.  If the show had performed as well as the companies that attend it – just keeping pace with Moore’s Law – our estimate is that CES should have drawn somewhere over a billion attendees this year, and the exhibit space should have been well in excess of 5,000 square miles.  Would a 70-mile-by-70-mile exhibit space filled with approximately four times the population of the United States be too much to expect?
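The arithmetic behind that thought experiment is easy to sketch. The two-year doubling period below is the common statement of Moore’s Law, and the 1967 attendance figure is an illustrative assumption (the show’s actual 1967 attendance isn’t given here):

```python
# Moore's-law thought experiment: what if CES had grown at the same pace
# as the chips inside its exhibits? Two-year doubling is the common
# statement of Moore's Law; the 1967 attendance below is an illustrative
# assumption, not a figure from the article.

def moores_law_factor(years, doubling_period=2):
    """Growth multiplier after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

factor = moores_law_factor(40)                 # 2**20 = 1,048,576
print(f"40-year Moore's-law multiplier: {factor:,.0f}x")

# Actual CES exhibitor growth over the same span, from the show's numbers:
print(f"Actual exhibitor growth: {2_700 / 110:.0f}x")

# Even a small 1967 crowd, scaled at Moore's-law pace, lands in the billions:
attendance_1967 = 1_500                        # illustrative assumption
print(f"Extrapolated 2007 attendance: {attendance_1967 * factor:,.0f}")
```

A 1,048,576x multiplier against a roughly 25x reality: that is the scale of the gap the paragraph above is pointing at.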

If you’re attending CES this week, however, you might easily believe it is that big.  And ironically, if you compared my 1967 24” Zenith color TV with my 2006 50” plasma, you might never guess that the latter benefits from more than a million times the transistors of the former.  Why is it that we in the electronics engineering field deliver a bona fide, measurable, million-fold improvement in our product over a four-decade span, and our end customer notices only that the picture is a little bigger and sharper, while the tradeshow industry squeezes out a paltry 11x growth over the same period and it feels a million times larger?

The answer is that these results are based on human perception, and we as engineers generally fail to account for perception when we’re creating our masterpieces.  Here at this celebration of electronics created for the benefit of the general public, a number of trends are in evidence.  One of these, however, is the incredible challenge and frequent failure of product engineering to fully account for the human experience.

Case in point:  I assert that, despite decades of technology advancement, today’s electronic products have actually gotten more difficult to use.  We design with the best of intentions, of course.  Rare is the consumer electronics product that fails to deliver an installation CD, a “Read me First” document, Macintosh and Windows instructions in 17 languages, an errata sheet, release notes, USB and FireWire cables, a power supply compatible with nearly every civilized country’s utility system, a detailed listing of all the imaginable ways the product could be used to injure oneself, others, or the environment, a protective plastic bag clearly marked with warnings about suffocating small children, a sticker with lessons on the hazards of static electricity, and an on/off switch marked with an international icon representing the concept of power application rather than those cryptic “ON” and “OFF” labels that carelessly assumed the consumer was literate.

With all these measures in place, how is it that we professional electronics engineers are so frequently called into service to deliver technical support to our parents – those often university-educated, usually intelligent, reasonable adults that brought us into this world and raised us to maturity – to solve complex issues like “How do I check the voicemail on my cell phone?”  These technological wonders, designed by us, that are integrated almost by necessity into the very fabric of our parents’ lives, are often more confusing to operate than their predecessors.  Usually, the culprit is softness or, more specifically, modality.

In the good old days, controls were “hard”.  Every function of a device generally had a corresponding button, switch, or knob, and that control always did the same thing.  Today, however, with an embedded computing system hiding behind every faceplate, we’ve introduced a level of abstraction that a large segment of the population has yet to absorb.  In our attempts to make our devices more intelligent – concealing their inner complexity – we may have accidentally lost many of our users by introducing a layer of unwelcome obscurity. 
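The difference is easy to make concrete. In the sketch below (the device, its modes, and its actions are all hypothetical), one “soft” button does different things depending on hidden state, while a “hard” control does the same thing every time:

```python
# Illustration of modality: the same soft button maps to different actions
# depending on a hidden mode, so the user must track state the device
# conceals. The device, modes, and actions here are all hypothetical.

class ModalRemote:
    def __init__(self):
        self.mode = "playback"              # hidden state the user must track

    def press_select(self):
        # One button, several meanings -- classic modal design.
        if self.mode == "playback":
            self.mode = "menu"
            return "opened menu"
        else:
            self.mode = "playback"
            return "confirmed choice, back to playback"

class HardRemote:
    """The good old days: one control, one function, always."""
    def press_mute(self):
        return "mute toggled"               # identical result on every press

modal = ModalRemote()
print(modal.press_select())                 # opened menu
print(modal.press_select())                 # confirmed choice, back to playback
print(HardRemote().press_mute())            # mute toggled
```

The hard control needs no manual; the modal one requires the user to carry the device’s current state around in their head, and that is precisely the abstraction many users have yet to absorb.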

The discrepancy between perception and technical reality doesn’t stop at the user interface, either.  As I walked the CES floor yesterday, I saw “1080p” printed prominently on literally hundreds of booth signs.  The vendors demonstrating this technology tout it as the purists’ high definition.  Most engineers could tell you the difference between progressive 1080p and interlaced 1080i without blinking an eye.  However, after walking around for hours watching split screens with 1080p on the right and standard definition or conventional DVD on the left, and after staring intently at the 720p and 1080p versions of Panasonic’s plasma monitors, I’m convinced that few people on the planet could walk into a room and reliably tell you whether the TV was showing 720p, 1080i, or 1080p.  Even though the last is considerably more technically challenging to deliver, the experiential benefit for the consumer is tiny at best.

Sure, I’ve seen the official demo.  With a frequency sweep or a moving solid object on a contrasting background, the motion artifacts of interlacing can actually be seen by a human eye – even mine.  However, even with a large amount of money on the table, I am not convinced that I could stare at, say, five screens playing identical programming and tell you which two were 1080p and which three were 1080i.  As an industry, our ability to measure and engineer often exceeds our customers’ ability to discern and enjoy.
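The raw numbers back up that intuition. A quick sketch using the standard nominal broadcast resolutions and frame rates (compression ignored, so these are raw pixel rates only):

```python
# Pixel arithmetic behind 720p vs. 1080i vs. 1080p. Resolutions and frame
# rates are the standard nominal broadcast figures; raw (uncompressed)
# rates only -- real delivery bandwidth depends on compression.

formats = {
    # name: (width, height, full frames per second)
    "720p":  (1280, 720, 60),
    "1080i": (1920, 1080, 30),   # 60 interlaced fields = 30 full frames
    "1080p": (1920, 1080, 60),
}

for name, (w, h, fps) in formats.items():
    print(f"{name}: {w * h:>9,} px/frame, {w * h * fps / 1e6:5.1f} Mpx/s raw")
```

1080p carries twice the raw pixel rate of 1080i, roughly twice the engineering burden, for a difference most viewers cannot reliably see from across a room.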

On the embedded front, my initial scan through CES shows that embedded computing systems are expected to deliver more and more of the features that were the purview of high-end desktop computers only a couple of years ago.  Multi-core and multi-processor embedded systems are now almost the rule rather than the exception in consumer products, and full-featured operating systems are permeating consumer applications.  More often than not, advanced hardware capabilities such as 3D graphics-rendering engines and high-bandwidth wireless communication are also part of the equation in delivering to consumers the converged embedded experience we’re busy developing.

While we integrate and converge all that capability, however, we should all keep our mothers in mind.  The smartest phones and the most intelligent devices won’t be useful if they can’t make the hardest connection of all – the connection with the flawed, perceptual humans that are trying to integrate them into their lifestyle.  The devices and systems that can solve that problem are the ones that will be with us for years rather than weeks.

