Putting the User First?

A mathematics professor was in full flow in a post-grad seminar. The board was covered in formulas and, as he finished writing an equation, he said, “And from this, gentlemen” (it is a very old story), “and from this, gentlemen, it is obvious…” and his voice died away. He stood there for a few seconds, then sat down for a few minutes. He left the room and returned after fifteen minutes. Picking up the chalk, he resumed, “And from this, gentlemen, it is obvious that…”, wrote another equation on the board, and continued the seminar.

We all know that mathematicians are different, but, to some extent, so are engineers. And this brings us to the starting point for today’s sermon. What is obvious to one person is not to another. Even more, what is obvious to a developer of a product is often not at all obvious to the user of that product. And this is symptomatic of the way in which we often don’t think about the user during the whole cycle of product development.

Start with the development of a new product. It could be a consumer device, a web site, or something for other engineers to integrate into a bigger system. In many areas it has become commonplace to use focus groups to try to understand what customers want. Getting a group of people into a room to discuss what they think they want looks very sensible, and focus groups have their uses if you are interested in what people are thinking. They are very good for gathering information about how customers view one company against another, or for measuring their understanding of a new or emerging technology. They are also great for developing and monitoring marketing communications programmes. And, properly used, they can identify the issues that people have. But, as Henry Ford is reported to have said, “If I had asked people what they wanted, they would have said faster horses.” Instead, he identified what people needed – better transportation. No one at Sony ran focus groups for the Walkman, and Steve Jobs is famously against them. In all these cases, an understanding of what customers will need drove a creative person or team to create products that meet, or go beyond merely meeting, those needs.

Wittgenstein, the philosopher whose own thought processes are not straightforward or easy to understand, said, “Whereof one cannot speak, thereof one must be silent.” Until you have words for something, it is difficult to visualise it, to think about how you could use it, or even to know whether you would want it. A personal computer means nothing if the only computer you have ever seen is the behemoth of an IBM mainframe. And, even if you know computers can be far smaller, it still requires a leap of imagination to see how they could be used. Ken Olsen, of Digital Equipment Corp, certainly understood that computers could be smaller — the PDP-11 could easily fit under a desk — but he is widely quoted as saying, in 1977, “There is no reason for any individual to have a computer in his home.” What he was attacking, in fact, was the idea that heating, air conditioning, lighting, and so on in the home would be controlled by a central computer. What his imagination didn’t stretch to was the idea that processors would become so cheap that they would be widely embedded and linked, as is happening now. Olsen also said, “People will get tired of managing personal computers and will want instead terminals, maybe with windows,” predicting, as others have, the idea of the Cloud, which is being energetically puffed today by Google et al.

Back to the central thread: when thinking about a new product, the starting point should be defining what people need, whether they are consumers or the guys building the next embedded system. And I know it is easier to write that sentence than it is to actually define what people need.

Once the need is established, the thinking should extend to imagining how people are going to use the product. User interfaces are far too often added almost as an afterthought (and designed by the people who built the product). Instead, the user interface can be the starting point of the design, reflecting what the user needs the system to do rather than what the developer happens to have built. The user interface has to be tested by potential users, not by the guy who is going to implement the system. And, ideally, it should be implemented by someone who understands human-machine interfaces and has at least a smidgen of design sensitivity. Apple’s iPhone and iPad interfaces look good and work well: many of the copies of them do neither.

Multi-layer user interfaces are a great approach, but they should have as their top layer what the user requires, not what the developer or the marketing group think is interesting, sexy or fun. Years ago, a book on Word for Windows claimed, “The toolbar you see [in Word] is not the toolbar that is most useful to you: it is the toolbar that Bill Gates and the marketing guys have decided will sell you Word.” And we all remember Clippy – Microsoft’s extreme expression of how the company always knows better than you what it is you want to do.

(As an aside – I have almost given up arguing with people who tell me that their interface is “intuitive.” Often what they mean is that users who have learned how to use the Windows interface will find it easier to use their product. Or that their engineers find it easy to use. In neither case is that intuitive – it is learned behaviour. To a child, red does not intuitively mean Stop, nor does green mean Go. We have learned it. This is probably a more widely learned reaction in a developed society than the Windows interface, but you can’t use the word intuitive to describe it.)

Working on the user interface often helps to sharpen the product definition. And testing it with a range of users, in formal or informal contexts, not only confirms that the interface is usable but may well provide a better understanding of whether the product meets a real user need.

The period before a product is built is also a good time to begin writing the user manual, even if it will be delivered as help screens rather than as a printed document. The manual can be seen, perhaps, as an expression of the product specification in language the user understands, and it should not be written by a developer. Again, it should be test-driven. There is no point in support telling a user with a problem to RTFM when the FM is F-unreadable.

If a developer does have to write the manual, perhaps because of time or budget constraints, then it should at least be translated into English (from engineer-speak) by a non-engineer and road-tested with a few potential users.

Error messages are another significant issue. “You have committed an illegal action” may make sense to the person who wrote it, but, as an error message on a consumer item, it is not helpful and has caused serious concern, with naïve users expecting a police raid.
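To make the point concrete, here is a minimal sketch (not from the article; the error codes and wording are hypothetical) of keeping the developer's internal error vocabulary separate from the text the user actually sees, in C:

```c
/* Hypothetical example: internal error codes stay internal; the user
 * sees plain language describing what happened and what to do next. */
#include <stdio.h>

typedef enum {
    ERR_NONE = 0,
    ERR_SEGFAULT,        /* internal: invalid memory access */
    ERR_CONFIG_MISSING,  /* internal: configuration file not found */
    ERR_UNKNOWN
} error_code_t;

/* Map an internal code to a user-facing message. */
static const char *user_message(error_code_t code)
{
    switch (code) {
    case ERR_NONE:
        return "Everything is working normally.";
    case ERR_SEGFAULT:
        return "Something went wrong inside the device. "
               "Please restart it; your settings are safe.";
    case ERR_CONFIG_MISSING:
        return "The device could not find its settings. "
               "Please run the setup again.";
    default:
        return "An unexpected problem occurred. "
               "Please contact support and describe what you were doing.";
    }
}

int main(void)
{
    printf("%s\n", user_message(ERR_SEGFAULT));
    return 0;
}
```

The point is simply that the developer's vocabulary (segmentation faults, missing configuration files, illegal actions) never leaks out to someone who has no reason to know what it means.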

Things are improving in some areas. I have recently been talking to companies making chips for the automotive market, and they are talking not just to the people who will buy their chips, the Tier One suppliers, but to the car makers themselves, drawing on the information the car makers have about what they think drivers need. In the last few weeks we have seen chipmakers signing deals with ARM so that they can influence the longer-term definition of ARM’s products. Yet, as an industry, we still have too much of the thinking that says, “This is really cool — let’s turn it into a product and get rich.” Apart from the boring detail that turning something into a product takes at least as much time and money as building it in the first place, who is going to want it? What user need does it fill? And is the user going to be able to use it?
