Everyone likes to think their tool is intuitive.
Wait, let me restate that.
Everyone wants their customers and prospects to believe that their tool is intuitive. Whether it actually is or not doesn’t matter if everyone believes it is. Welcome to marketing.
The intuitiveness wars play out most visibly in the desktop space, with one camp claiming exclusive rights to intuitiveness. However, I have sat down many times in front of a Mac and not had a clue how to proceed. I was able to learn quickly once someone explained it, but it wasn’t intuitive. And does anyone wish to convince me that the following iPod operation is intuitive? To wit: “to play music, press the play button. To stop playing music, press the play button for longer.” I don’t think so.
In reality, each platform has a paradigm; once you’ve got the basics of the paradigm down, it becomes much easier to guess what to do. But fundamentally, each is confusing to the other. It’s like religion: each is sure it’s right, and each is sure the other is wrong. End of discussion.
So who decides how an interface should work? Who decides what can pass for “intuitive”? Each company has people who are paid to figure this out. That they come up with different answers suggests that it’s not obvious. Of course, just spending money on the problem is no guarantee of a good solution, either. One rather large software company is reputed to spend lots on usability studies, psychology studies, and physiological studies, all in an attempt to better understand how to fit the computer to the human condition.
But that same company can then turn around and take a rather popular set of office productivity products, used the world over by countless people who have invested countless hours in learning them, and completely change the user interface for no apparent reason. It would be fine if it made things “more intuitive,” but, in fact, you now end up going to Help all the time to figure out how to do the simple, common things you used to do easily. It would be fine if it reduced the number of clicks needed to do something, but it did the opposite (why use one click when three will do?). Some features completely disappeared (granted, many features of dubious utility were added in the bargain).
So even the best and the brightest (or at least the richest) seem to have a hard time grasping the notion of intuitiveness and ease of use.
Life gets even harder in the EDA world. You no longer have millions of users. You have somewhere between hundreds and tens of thousands, typically (not including Excel as an EDA tool). That might sound like you have fewer people to please, making it easier, but, in fact, you have less opportunity to view usage on a grand scale, meaning that the decisions you make rely on data that’s inherently noisier. Assuming you make decisions using data.
Then there’s the rapid lifecycle of EDA products. There have been times in the past when I’ve used some tool and kept copious notes on what could be improved, sending them in at the end. Not as a rant or a bitchfest, but as a well-intentioned set of specific inputs that would, hopefully, be useful. But some companies simply provide no way to send in such information. Others do, but have no clean way to handle it. And even those companies with a robust process for taking in random user input have another problem: they have to prioritize bug fixes and new features. New ideas for making a tool more usable inevitably fall into the dreaded “enhancement” bog, from which they seldom re-emerge.
So in EDA, usability faces countless hurdles. And if this weren’t enough, engineers end up investing time to learn their tools and so become accustomed to quirky interfaces. This can make them resistant to changes, since so many upgrades simply swap one quirk for another and, regardless, have to be learned. And learning is one thing we don’t have much time for.
At this point you would think it useless to spend much energy on improving a user interface. But the earnest idealists amongst us users still think it would be cool if someone from our tool vendor could sit next to us and see the goofy crap we go through to get something done. And the earnest idealists amongst us marketers still think it would be cool if we could stand behind users and watch their every click to see the goofy crap they go through that we could fix.
There have been attempts in the past to get information about usage patterns – for example, in the FPGA tool world – but those efforts sort of blew up out of concern that the companies might be spying on the designers. And this brings up yet another barrier, as if we didn’t have enough already.
We live in a world where we can do more than ever before thanks to communication and connectivity. But we also become more vulnerable as our every move becomes visible to some invisible someone out there. And this mysterious internet thing that facilitates this visibility also allows control in quiet and potentially insidious ways that we may not even be aware of. Having someone tap into the design of your project could raise all kinds of concerns and paranoia, not all of it unjustified.
Is someone peeking at my design, learning things they shouldn’t be learning? Is someone monitoring how many hours I’m working? Is someone looking to see when I stop working so they can send a salesguy in to pester me? There’s a whole can of worms that gets opened up when you say you want to watch how someone designs, and the worst thing you can do is try to do it surreptitiously.
So it was surprising to hear that Cadence has a Metrics Initiative whose goal is specifically to gather lots of user data to help improve the tools. A conversation with their John Stabenow put a bit more context around their approach to this minefield.
First of all, it’s a selective mutual program; not everyone participates, and anyone who does has an explicit agreement in place. In fact, there aren’t a lot of companies participating, and those that do are generally larger companies with lots of users who can generate lots of data.
Second, no one is watching “in real time,” although that’s something of a nuance. The tools create log files that track the behavior, mouse click by mouse click. These log files are then mined for data by Cadence. In fact, no new data is created: the log files pre-date this program. It’s just that those files are now mined. Apparently some customer companies are even doing their own mining of the log files.
Third, only operations are logged. The content of the design files is never logged, so the critical company intellectual property is not shared.
This started as a special program for some companies to help them check out the stability of new versions, and it morphed into a bigger usability-enhancing idea from there. Engagements are long – twelve to eighteen months – and no money changes hands.
This seems to come as close as I’ve seen to having someone stand over your shoulder watching your frustration. (Or, to be fair, your glee.) And it seems to have sidestepped many of the gotchas that have plagued prior attempts at getting this kind of valuable information.
Of course, it will work only if talented people view the results and make smart tradeoffs in the development of the next generation of tools. Cadence users will ultimately have to decide whether all that work resulted in something that was truly more intuitive than the alternatives.