In many primary school cafeterias, students are served on plates or trays with little sub-divisions for each part of the meal – a big one for the protein, smaller ones for the vegetable and starch. If you can, suppress your gag reflex long enough to place yourself back there, at the school cafeteria table, looking at your meal. Perhaps the subliminal influence of those partitions instilled a compulsion to keep the parts from touching. Many a kid has emerged with the credo that mashed potatoes should never come in contact with the peas.
Fast forward that kid a few decades and he’s midway through his embedded design career. He’s putting down hardware – processors, buses, memory, and peripherals. Then there will be middleware – drivers and such. Then there is (maybe) an operating system or RTOS, and finally application software. One of the main design goals for each of these elements is to minimize contact with the others – keeping it nicely in its partition where no gravy will get on the green beans. This practice has served us well for decades and will continue into the future.
The benefit of this compartmentalization or encapsulation is that different people or teams can work on each element in relative isolation, and changes to one require minimal changes to the others. The downside is that it creates the potential for massive finger-pointing:
“Your hardware doesn’t work.”
“Yes it does.”
“No it doesn’t.”
“It’s your software that’s broken.”
“No it isn’t.”
See? We’re not that far from the elementary school after all.
The problem of verifying that the hardware works – separate from any software, middleware, outerwear, or underwear – follows our product from cradle to grave. At initial hardware prototype bring-up, we may not have any software yet to run. We have to find another way to confirm that our UARTs are UARTing, our memory managers are managing memory, and our buses are careening through traffic. Even if there were software available, we probably wouldn’t trust it. If the system came up dead, the result would likely be the conversation above.
Move on to the deployment of prototypes to software developers for embedded development. As we all have experienced, development prototypes are not the most stable pieces of hardware on Earth. We need some way to be sure that each prototype is actually working correctly – before we let the programmers in to have their way. Without that, more finger-pointing. Move the product into manufacturing, and we need some built-in self-test that can verify that the hardware is working off the assembly line. Finally, our field service teams need some way to diagnose problems with shipped units.
All of these problems cry out for a common solution – a tight set of tests that can run on our bare-metal hardware and that can verify that everything is doing what it is supposed to do. We could run these tests during prototype development and keep them in the flow all the way through manufacturing, deployment, and service.
We could try to write these tests ourselves, of course. We’d need to know enough about the underlying components we were testing to write code that gave great coverage. We’d also want to make our tests modular so we could re-use the appropriate parts for our next design project. We’d want the tests to be small and compact so we could leave them in the finished product as a “self-test” mode without wasting a ton of expensive resources like non-volatile memory. Creating these tests would be a giant project all in itself, in fact. We’d probably need to hire more people to get it done in our project schedule. Then, we’d have the problem of unproven tests testing unproven hardware. When something goes wrong – is it a hardware problem, or a problem with the newly-written test? We could get a 50/50 resolution on that one.
One thing we might observe here is that we are not the first to use the components that are in our embedded system. Our particular configuration of processors and peripherals may be unique to our product, but each and every one of them has been used and tested before – many times. Wouldn’t it be great if somebody had already written a set of modular tests that met our requirements, thoroughly tested the underlying hardware components, gave us detailed feedback, and had a long track record – so we’d know that a failure meant something was wrong with our hardware and not with the test?
That’s the observation that Joseph Skazinski made when he co-founded Kozio back in 2003. He had spent a career in the development and deployment of complex embedded systems, and he saw a gaping hole in the area of built-in diagnostics. In the seven years since, Kozio has developed and sold a robust set of modular tests that can quickly be re-configured to support any embedded system, providing a reliable self-test capability that can be used throughout the product life cycle.
At the core of Kozio’s offerings is a suite called kDiagnostics. kDiagnostics is a binary application that runs on your embedded system’s processor, executing tests from a test library that has been personalized for your design. The tests are user-controllable, and they comprehensively cover every sub-system of your design – processor, memories, peripherals, storage devices, IO channels, displays, and audio channels. You tell Kozio what’s in your system, and, within a couple of weeks, they deliver a personalized test suite for your design. Drop in the tests, power up, and you have results.
There are a number of good things about getting commercially developed tests, rather than creating your own. First, as we mentioned above, you aren’t trying to debug unproven hardware with unproven tests. Second, the people “productizing” a test suite will do a much nicer job implementing controls and reporting than you will. You don’t have time to put all the “chrome” on your test suite while your team is under schedule pressure to get a product out the door, but some of those nice UI considerations become really important when you start deploying the test suite to places like manufacturing and field service. The tests themselves are end-applications used by your manufacturing and service personnel, and the robustness of those applications will have a profound effect on the efficiency of the people doing those jobs.
Kozio packages their products in a variety of bundles, depending on your product needs. Recently, they have added a PC-based accessory application called “Validation Assistant” that provides control and visibility of the test system from a host. Validation Assistant is a perfect example of a super-useful utility you wouldn’t have time to write for yourself. It can be connected to your board via just about any link – serial, USB, Ethernet, etc. – and can control and report on all of your kDiagnostics tests. You can use it to batch up and package test protocols for various purposes – from a “hello world” manufacturing test to detailed service diagnostics.
Using a product like Kozio’s may be challenging for those severely afflicted with not-invented-here syndrome. In any area of our design, it’s easy to say, “I could just do that part myself.” However, no matter how good you are at test generation, it will take a lot of engineering time to provide your project with the capabilities that are encapsulated in Kozio’s million-plus lines of code. Of course, you could skip bare-metal testing altogether and rely on the operating system to tell you what’s wrong with your custom hardware. Yeah. Good luck with that.