At times it’s seemed a sotto-voce religious war.
One side says that a clean user interface aids productivity. The other side says that, well, quite frankly, a graphical user interface (GUI) is a toy, not meant for serious work.
One side says that command-line work is the only real way to do things; the other accuses those engineers of trying to keep things obscure and difficult as a form of job security.
It also depends on whether you’re a hardware engineer or a software engineer. Hardware engineers seem to like wizards and other GUI-oriented forms. Software engineers detest them. But even hardware engineers have taken some time to get to this position. Altera’s first rise to power came on the back of their clear and easy MAX+PLUS software. Synplicity kicked some synthesis booty by allowing a user to get some work done today, rather than waiting until a couple days’ worth of script-writing was complete.
The good news about a well-designed GUI is that you can clearly see what to do and how to do it. With command-line work, it’s all in the engineer’s head; only the initiated get to mumble the right incantations. But a learned mumbler can wield great power, since there are secrets accessible by command line that might not be possible through the GUI.
And therein lies the tension: efficiency of operation and fast learning curve for mainstream use cases (via a GUI) vs. the ultimate in control via arcane commands.
The middle ground has been to allow scriptable tools. The bulk of the tool runs in a GUI, but a programming model is built underneath so that you can do as much (and more) in an automated way by writing C, Tcl, or Perl scripts that access the guts underlying the GUI.
The issue then becomes the quality and completeness of the programming model. Does it allow access to underlying data? All the data? Does it allow the user interface to be manipulated? Can you essentially write a program that looks native using a script?
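To make that idea concrete, here is a minimal sketch, in Python with entirely hypothetical names (no real tool's API is implied), of what such a programming model might look like: the GUI and user scripts drive the same object model, so a script can reach the underlying data and manipulate the interface just as the GUI does.

```python
# Hypothetical sketch: a GUI and user scripts share one programming model.
# None of these class or method names correspond to any real tool's API.

class Signal:
    """A named design signal with recorded values (the 'underlying data')."""
    def __init__(self, name, values):
        self.name = name
        self.values = values

class ToolModel:
    """The object model that both the GUI and scripts operate through."""
    def __init__(self):
        self._signals = {}   # all design data lives behind the model
        self._views = []     # open views, by title

    # --- data access: scripts can reach the same data the GUI displays ---
    def add_signal(self, name, values):
        self._signals[name] = Signal(name, values)

    def find_signals(self, prefix):
        """Query the data model, as a GUI search box would."""
        return [s for n, s in self._signals.items() if n.startswith(prefix)]

    # --- UI manipulation: scripts can drive the interface, too ---
    def open_view(self, title):
        self._views.append(title)
        return title

# A "script" written against the model:
model = ToolModel()
model.add_signal("top.clk", [0, 1, 0, 1])
model.add_signal("top.rst_n", [0, 1, 1, 1])

matches = model.find_signals("top.")
view = model.open_view("glitch hunt")
print([s.name for s in matches], view)
```

The completeness questions in the paragraph above map directly onto this sketch: does `find_signals` expose *all* the data, or only what someone decided scripts should see? Can a script open and arrange views, or only read data passively?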
A reasonably good example is Microsoft Excel. It has an extensive model that covers a huge percentage of what the program is capable of. Over the years, I’ve spent untold hours writing thousands of lines of Visual Basic to create personal accounting programs or to prototype engineering wizards. And, to a good approximation, things can look pretty slick.
You can even make things look reasonable with an ugly programming model, like Microsoft Access. It may feel like a hack, but with enough cursing, you can get things to work. Or not (I identified several confirmed bugs years ago; I'm assuming they're still bugs to this day… as a fellow forum-lurker grumbled, fixing bugs is so much less interesting than developing the next edition of Clippie++…).
Things get more challenging when you don’t get a complete programming model of the entire program; that is, when someone tries to decide which bits would and would not be useful to script. You can almost picture the argument taking place between engineering and marketing: “Why spend time creating an event that fires on a view change? Hardly anyone uses views anyway!” “That’s because it’s a new feature and people haven’t figured out how to use it yet.” “That’s because you’ve done a crappy job marketing the new features. Or maybe it wasn’t a useful feature after all!”
The good thing about scripts is that all kinds of creative, innovative things can be done; by limiting what can be scripted, that creativity is circumscribed by the cleverness of the person defining the model. So if they weren’t creative enough, well, then you can’t be either.
One of the key decisions about what to script and what not to script involves data access. Data is power – actually, information or knowledge is power, but more on that shortly. Limiting access to data can feel like the right thing to do in a competitive environment.
One company clearly in the throes of that kind of decision-making is SpringSoft. They’ve just announced their VIA Exchange portal, whereby scripts can be proffered and exchanged to improve the productivity of users of SpringSoft’s popular Verdi debug tool.
Along with this announcement comes their unlocking of more – but not all – of the data underlying their tools. They now have three databases that Verdi plumbs. First and most venerable of them is the FSDB, the “fast signal database.” This has been accessible by their tools for a while.
But, by their definition, the FSDB holds only “data.” It doesn’t hold “knowledge,” which lives in their knowledge database, or KDB. Now, exactly what distinguishes data from knowledge is somewhat tricky. Quoting from a SpringSoft backgrounder,
“Data is defined as the ‘representation of facts, concepts, or instructions in a formalized manner suitable for communication, interpretation, or processing.’ Knowledge is defined as ‘expertise and skills acquired by a person through experience or education; the theoretical or practical understanding of a subject’ and as ‘acquaintance with facts, truths, or principles, as from study or investigation.’ For the purposes of describing the chip design and verification flow, SpringSoft defines ‘design knowledge’ as the practical understanding of a design and its behavior.”
Well, except that it doesn’t cover all behavior. Because, earlier this year, they introduced a new behavior database (BDB) that holds information not covered in the KDB, which in turn holds information not covered in the FSDB.
It’s probably a waste of time to argue the merits of what counts as knowledge vs. data vs. behavior; my guess is that the three files exist for technical reasons entirely unrelated to that ontological discussion, and that the distinctions arose afterwards, when the files had to be explained to users and others.
So… never mind. There are three files. Fundamentally, the KDB has design information gleaned at compilation time from the HDL defining the design; the BDB has a further refined version of that information for use with tools like Siloti. The FSDB, on the other hand, is generated by simulation, not compilation.
All of that said, the fact is that this new BDB is not exposed for scripting. It is expected to be in a later phase, date as yet unstated. What’s not clear is whether this keeping of the BDB under wraps is because they simply need more time to do the work, or if providing access would give away secrets of how tools like Siloti work. That’s the downside of opening up the tool viscera: if clever data (or information) makes possible valuable tools, then providing access to that data might be more valuable to competitors than to users.
Of course, SpringSoft’s focus is understandably on what’s newly available today: the things that can be done with scripts that can now access the KDB, and the fact that those scripts can be uploaded and shared on the new VIA Exchange site. They view Verdi as having a substantial installed base, such that it can become the hub for a host of scripts and utilities built around it.
The ability to script around Verdi in and of itself isn’t necessarily new, but the addition of KDB access (and the denial of BDB access) is an example that provides some unique visibility into the evolution of a programming model and into who gets access to all the softly-spoken magic spells.