Want to scare an engineer? There’s an easy weapon out there. And it consists of only one word.
Process is supposed to mean that a company has a formula, that it has a way of doing things that works, that it's repeatable, and – most importantly – that it's a feature of the company, not of some individual who works there. That means the process survives even when key people move on.
Looked at in this light, process brings order to chaos, keeps old problems from happening over and over again and, hopefully, minimizes the chances of new problems cropping up. All of which sounds pretty good.
But what does that mean to many of us? Well, there are degrees – some might call them degrees of lost freedom – but at the very least, it means someone has to track the process and follow it. It probably also means some reporting. In the worst cases (I’ve been there), the process takes over, calcifies (because the process of changing the process is too onerous), and things grind to a halt, with everyone afraid of catching the attention of the “docu-nazis.”
Looked at in that light, process can be a scary thing indeed.
No industries demand process more than military and aerospace (and anything else “mission-critical”). And that’s for obvious reasons; you can’t have lone wolves and loose cannons putting lives at risk. So things like full traceability for software and hardware design are required, even if the process of watching the process is somewhat vague and byzantine. Accountability and transparency do a lot to keep things running smoothly, but they require extra work, and they’re pretty much never popular.
But here’s the problem, at least as Satin Technologies, a small tools company, sees it: the process of taking an idea into a modern SoC is enormously complex. Between the digital and analog design, the new circuits and the IP, the hardware and the software, the TSVs and the interposers, all of which have to come together and work, well, you can’t simply roll day to day and hope things work out. You have to know that you’ve thought through how you’re going to do it and that it will work. You don’t have time to mess around: no one is taking pity on the fact that your project is a bazillion times more complicated than a similar project might have been 15 years ago. Your tears will go uncollected and unwiped; just man up and do this project faster than the last one.
Which means one thing: process. And not just any process; an extraordinarily complicated process involving far-flung architecture, design, IP, verification, software, and packaging teams – to name a few. And somehow, some way, that process has to be tracked. It would sure be nice if that could be done automatically.
In some industries, when a process has to be tracked, it gets standardized, so that you have something to hang your hat on when you try to automate process tracking. We’ve gotten a bit more wishy-washy about that; modern standards seem to be more about requiring you to come up with a process that meets some loosely-defined goals (spawning an entire industry of consultants whose existence depends on regular Jos and Joes not being able to figure out what the heck that standard means). Even ISO 9000 had that feel: if you remembered nothing else from it, you remembered “Say what you do, do what you say.” Only in a lot more words than that.
The reason this happens, of course, is that no one wants to tie down their process. And for great reasons. For one, if a company has a great process that no one else has, that company isn’t going to want to simply hand it over to the competition. And it’s not going to want to have to abandon it in order to join some inferior (at least in their opinion) standard. Additionally, in a competitive, dynamic marketplace, things change, and you have to respond quickly, and the process of process change can hinder that.
So, valiant attempts aside, here we remain with no established process standards for building SoCs. And none are likely to be attempted anytime in the visible future. So what does that mean for tracking the individual processes?
For some, it typically means writing reports or status blurbs. It may mean maintaining any of a number of checklists. EDA tools can be automated, so some reporting can be done by managing scripts. There may also be documents here and there that carry other critical information. While the process itself may be solid, all of this makes the process of reporting on the process much more brittle.
Which is why Satin is launching a product called SatinTech MS that is intended to automate the tracking process. Not one specific process, but any process. Which is a pretty tall order, since this makes it something of a meta-process and requires some abstraction.
But a process can be thought of, more or less, as a series of steps, each answering one of the following questions:
- Did a particular step get completed? (For instance, a software build.)
- Was a key metric achieved? (For instance, a key operating frequency or power level.)
- What is the value of some other metric? (For instance, the number of bugs outstanding.)
You can even extend this to include launch and early market activity: press release being issued (question 1); number of “Likes” on a Facebook post (question 2, although that one’s just sad and desperate); number of support cases opened by new customers (question 3).
These questions fall into one of two categories in Satin’s worldview: verdicts and values. Verdicts have a pass/fail criterion, while values are, well, simply that: values. And their tool is oriented around documenting these various steps and outfitting the tool with ways of answering the appropriate question for each step.
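The verdict/value split lends itself to a simple data model. Here’s a minimal sketch of the idea in Python – the names and structure are mine, not Satin’s API:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Step:
    """One tracked step in the process: a verdict (pass/fail) or a value."""
    name: str
    measure: Callable[[], float]       # how to fetch the raw number
    limit: Optional[float] = None      # set for verdicts; None for plain values

    def report(self) -> str:
        v = self.measure()
        if self.limit is None:
            return f"{self.name}: {v}"                  # value: just report it
        status = "PASS" if v <= self.limit else "FAIL"  # verdict: compare to limit
        return f"{self.name}: {status} ({v})"

# A verdict (bug count must stay at or below a limit) and a bare value.
bug_gate = Step("open bugs", measure=lambda: 7, limit=10)
clock = Step("max clock (MHz)", measure=lambda: 512.0)
print(bug_gate.report())  # open bugs: PASS (7)
print(clock.report())     # max clock (MHz): 512.0
```

The point of the abstraction is that every step, from synthesis timing to press releases, reduces to the same two reporting shapes.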
This happens in a couple of steps. The first is to capture the process. If the process is well defined and accepted, then this should be a one-and-done (for a while) event. Once captured, it’s accessible to anyone who has permission to use (but not change) it.
Then comes the trickier bit (at least in my view): automatically gathering all of the required data for tracking and reporting. For this, Satin uses what they call data “sensors.” (Not to be confused with data censors, which are an entirely different beast – and one that can lurk within the process monitoring milieu.) These are pointers to data that may reside in any number of far-flung documents. They may copy text from a Word file or a spreadsheet; they may analyze the results of a particular synthesis run; they may query the bug database.
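To make the sensor idea concrete, here’s a hedged sketch of one kind of sensor – a regex scrape of a numeric field out of a free-form report. This is my illustration of the concept, not Satin’s actual implementation:

```python
import re

def text_sensor(report_text: str, pattern: str) -> float:
    """Pull one numeric field out of a free-form text report.

    A stand-in for the sensor idea; a real deployment would also point at
    spreadsheets, synthesis logs, and bug databases.
    """
    match = re.search(pattern, report_text)
    if match is None:
        raise ValueError(f"no match for {pattern!r}")
    return float(match.group(1))

# e.g., scraping a slack figure out of a synthesis log excerpt
log = "Slack (MET) : 0.142 ns\nTotal cells: 48213"
slack = text_sensor(log, r"Slack \(MET\) : ([\d.]+)")
print(slack)  # 0.142
```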
At their deepest level, these sensors do involve chains of mini-scripts that can be cobbled together into something more sophisticated. But the key is that they’re all housed in this one system that can bring it all together, making the sum total of the information available via the third major element of the tool: the dashboard. Or, more generally, a means of being able to quickly and easily have a look at the data in one place, regardless of where it comes from.
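The chaining-and-aggregating idea can be sketched in a few lines: each sensor is a small callable, and the dashboard simply runs them all and collects the results in one place. Again, the names here are illustrative, not Satin’s:

```python
# Each sensor is a mini-script; the lambdas stand in for real data fetches.
sensors = {
    "open bugs": lambda: 7,              # would query the bug tracker
    "timing slack (ns)": lambda: 0.142,  # would parse the synthesis log
    "sw build done": lambda: True,       # would check the build server
}

def dashboard(sensor_map):
    """Run every sensor and gather the results into one view."""
    return {name: fetch() for name, fetch in sensor_map.items()}

for name, value in dashboard(sensors).items():
    print(f"{name:20} {value}")
```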
Perhaps to the delight of engineers, this is supposed to take on much of the grunt work of putting status reports together – in the best of worlds, the reports aren’t even needed anymore because the various stakeholders can simply open their own dashboard views to see what’s up without pestering and distracting an engineer.
As enticing as this all sounds, the proof will be in the pudding. Attempts to corral process have come and gone before (“Framework!” There… a couple of EDA old-timers just passed out at the mere mention of that word…), and handing too much power over to the process mavens is always viewed with suspicion.
But if this delivers on the promise of freeing everyone up to get back to business and letting the tracking and reporting take care of itself, then you can easily imagine it getting lots of traction.