In his wonderful robot books, Isaac Asimov created the Three Laws of Robotics. (He later added a Zeroth Law, but we’ll skip over that for now.) You probably already know them by heart, but the First Law for all robots was, “never injure a human or, through inaction, allow a human to come to harm.” It was the cybernetic equivalent of the Hippocratic Oath: “First, do no harm.”
The Second Law was, “always obey orders, unless doing so conflicts with the First Law.” Okay, pretty straightforward, that one. Do what you’re told, at all times and without question, unless it would harm somebody. In other words, robots can’t be ordered to murder someone, nor will they carry out legitimate, well-meaning instructions that would harm someone unintentionally. It’s automatic bug detection in their programming, essentially.
Finally, the Third Law covers self-preservation: “A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”
It’s the Second Law that most directly applies to embedded development. Embedded systems should do what they’re programmed to do, always and without question, unless doing so would harm the user. But here’s the nuance: does the embedded system follow its developer’s orders or its user’s orders? Should it be more loyal to its creator or its owner?
Those might seem like the same thing, but they’re not, and the difference is important. Your customers believe – and rightly so, in my book – that they own the machine and therefore should have total control over its actions. Users may not understand the complexities of embedded-systems programming, nor do they usually care. We can view that as ignorant and spoiled, but we’re better off accepting it as entirely reasonable. A “teachable moment,” you might say.
Where am I going with this semantic hair-splitting for droids? It’s about making better user interfaces, and here’s the big rule that’s missing from most embedded systems:
First Law of Embedded Systems:
Never ignore user input, unless doing so would harm user data.
I’ve made this the first law because it’s the most important. Specifically, it comes before rules about speediness, compatibility, or real-time performance. It comes above everything – and I do mean everything – but too many embedded programmers ignore it. They act like the system belongs to the developer and not to the end customer.
Case in point: Microsoft Windows. I know, Microsoft is an easy target, and we all have our pet Windows peeves. To me, the absolutely unforgivable fault of Windows is that it often ignores user input. Microsoft Office, Explorer, Outlook… they all do it. You wave the mouse or click the Cancel button but nothing happens for a while. At best, you get a little hourglass or a spinner to acknowledge, “yes, I heard you, but I’m busy right now.” That’s ludicrous. Any computer running a multithreaded operating system on a quad-core, 64-bit processor with gigabytes of RAM at its disposal ought to respond to user input right-the-hell-now. It should be so fast and so responsive that you can’t even perceive a delay.
I’m not talking about mouse pointer lag; that part is fine. It’s the delay between the user’s input and something actually happening that’s broken. Windows takes too long to follow orders, which is different than just acknowledging the order with the RGB equivalent of “talk to the hand.”
I don’t care what the computer is doing. Seriously. Drop network packets if you have to. Abort complex calculations. Skip disk I/O. Lose buffers. It doesn’t matter what work-in-progress is lost or how inconvenient it will be for the machine to restart whatever it was doing. All work should come to an immediate halt the instant the user taps or clicks or presses the button that means “do this other thing now.”
It’s especially galling when you want the machine to stop the very process that’s causing it to ignore you. What’s the point of a Cancel button to abort a time-consuming process if the computer doesn’t respond to the Cancel command? If I want to cancel a lengthy download, the network traffic should go dead and the buffer go cold before my finger leaves the mouse button. The “Wait…” dialog should be removed from every user interface, everywhere. It’s a sign of bad programming and misplaced priorities.
Lest you think we’re simply indulging in some therapeutic Microsoft-bashing, Apple is just as bad. The iPhone has a nasty habit of downloading tons of e-mail at the slightest provocation. That can be a painful time-waster, especially over a slow wireless connection. For example, if you want to forward a simple text link to someone, iOS conveniently provides a nice button for that very purpose. But if you use it, iOS starts by doing an entire e-mail refresh, downloading and updating all the messages in all your e-mail accounts – before sending the link. So instead of a quick send-it-and-forget-it, you get treated to megabytes of server traffic, which finally ends with a cheery “ping!” when your e-mail accounts are all up-to-date. None of which you asked for. It’s like a dog that won’t stop fetching a stick you’re trying to throw away.
How hard is it to send a simple text link? All iOS requires is a send-to address and your mail server’s login credentials: maybe 100 bytes of data in total. Instead, it goes through the whole e-mail synchronization process. Who thought that was what the user wanted?
Over in the consumer-electronics realm, even the mighty TiVo (which typically is the model for good user interfaces) has stumbled. Early TiVo boxes responded snappily to the remote control, which is a good thing since that’s the only way you interact with the box. But after a mandatory software update, the interface developed a noticeable delay. There’s now a lag of perhaps 100 milliseconds from button push to screen update. Not a big difference, but just enough that the back of your brain senses something is wrong. The hardware is obviously the same, so it’s clearly a software problem. What could TiVo’s firmware engineers have done to screw it up, and what did they think was more important than user-interface responsiveness? Something got moved up the RTOS priority queue, and the owner’s wishes got demoted.
I’m sure we’ve all seen other examples, too: elevator buttons that don’t light up quite when you push them; tablet computers that don’t respond to a swipe quite on time. The in-dash navigation systems in new Jaguars have such a bad lag that owners have returned the car to the dealer thinking it’s broken. Those are all small things, but those sorts of delays don’t inspire confidence in the passenger/customer. More to the point, they ignore what the user asked for. Forget whatever the box is doing in the background. Handle the user’s request now. Anything else is unjustified arrogance. “I’m sorry, Dave, I can’t do that.”
Developers in the consumer space talk about the importance of the “instant-on experience.” Lately PC makers have learned the buzzwords, though few of them seem to take the lessons to heart. One of the great things about the old Nintendo 64 game console was that it booted up immediately. It was playing music and displaying pictures before your finger had even left the power button. In contrast, the newer PlayStation 3 takes almost as long to boot up as a PC, and you have to ask permission to shut it down. That’s not progress.
As embedded developers, we have to remember to put our customers’ needs first, no matter how “incorrect” they might seem to us. User-input response is paramount in that equation. Prioritize tasks or queues to emphasize those aspects of your product that the user can see; background tasks come later, even if they’re the raison d’être for the entire box. Drop network packets; they can always be restarted. Abort disk writes; the disk will spin around again. If you have to double-buffer writes to avoid losing transient data, do it. But never, ever postpone acting on user input for even a few milliseconds. That’s doubly true when the user is frantically trying to cancel the very operation the system is intent on finishing. Aborting the disk write or the network update might be precisely what the user wants; waiting until that operation is complete is counterproductive. Always assume the user knows best, even if he doesn’t.
Yes, users can be bumbling idiots who let their cat walk on the keyboard or who click things by accident. But they also buy your product and pay your salary. It’s possible they might also know a bit more about using the product than its own developers do. Engineers and programmers have often been surprised (or horrified) to see how their creations are used “in the wild.” Customers don’t always follow our expected workflow models and can show a surprising amount of creativity when it comes to getting a program or a gadget to do what they want. As long as they’re not hurting themselves, the product should always obey the First Law of Embedded Systems.