We talk a lot in these pages about programmable this and programmable that. In our efforts to make slivers of silicon do our increasingly complex bidding, we need some way to communicate our intent to our chips and to incite them to behave accordingly. In our happy little engineering silos, of course, we separate all these types of “programming” out into various disciplines – firmware, middleware, OS, application software, drivers, FPGA fabric, analog configurations, transceiver settings… The list goes on and on.
All of this “programming” ultimately determines the behavior of our device. We create tribes of “engineers like us” who specialize in one flavor of programming or another, adorn ourselves with tools and tricks and folklore that enable us to get some reasonable results in our chosen area, and mostly ignore the neighboring disciplines.
As Moore’s Law marches forward, the hardware portion of what we’re programming is, more often than not, completed before we ever get our chip – and it arrives integrated onto a single device alongside the hardware that supports all the other types of programming. The latest SoC devices include diverse arrays of hardware – processors, peripherals, memory, analog circuits, power and system monitoring hardware, specialized accelerators – just about anything we might want to use in our application is thrown into the SoC. As silicon real estate has become increasingly inexpensive while the cost of creating a chip has continued to rise exponentially, the trend among chip makers is to build everything you might possibly need into one device and let you sort out which parts will do what by programming that device later.
As FPGAs have become increasingly sophisticated and capable, they have also come to look more and more like other types of SoCs. Take away the FPGA fabric, and some modern FPGAs look pretty much like any other SoC you could buy. You’ve got multiple high-performance processors, various types of memory, versatile bus structures, configurable IO, a variety of peripherals, accelerators for specialized tasks like audio and video, analog blocks for functions like control, timers and controllers and built-in power supplies… Layering the LUTs into the mix just adds another dimension of programmability.
If we follow these trend lines out toward the horizon (and we don’t have to follow them very far these days), we see that most of electronic system design is programming. Since most of our system is on one chip, we don’t spend much time choosing all the various chips that go on our board anymore. Similarly, with only one SoC doing most of the work, we don’t have to spend as much time on board design and layout (although the board design we now have to do has become quite a bit more complicated owing to huge pin counts and crazy data speeds).
Still, the engineers making a system from a current or future SoC will do most of their design work essentially programming their SoC using a massive array of software tools. This tool collection has been gathering for a while, of course, in all of our tribal engineering silos. For embedded software and firmware development, we have a well-worn set of standard tools – most of them open source – that get us from main() to working, tested code. For FPGA design, the various vendors and their third-party partners have generously supplied us with a mostly-suitable set of design tools that can at least get the LEDs on our development boards blinking in the order we desire. As our needs have broadened, so has our tool chest. Today’s SoCs always come with at least some set of supporting tools, and often those tools are quite sophisticated.
The FPGA companies seem to be most on top of this all-tools-for-all-things trend. Altera, Lattice, and Xilinx all supply tool suites with impressive lists of capabilities – embedded software development and debug, DSP algorithm design and acceleration, signal integrity assurance and monitoring, power analysis and optimization, system-level design, IP integration… Oh yeah – and even FPGA design. These increasingly powerful and integrated tool suites cover the duties of most of our design teams and tend to consolidate our skills. It’s easier to dip over from hardware into the software side just a bit if you and the software folks are using the same tools.
IP is also becoming increasingly integrated. In the simple olden days, IP blocks arrived as a few thousand lines of VHDL or Verilog that you could synthesize into your design. If you were lucky, a helpful IP supplier might even include some vectors or test programs to help you figure out if the thing was doing what it was supposed to do in your design. Today, IP isn’t really worth its salt if it doesn’t include the whole software stack, appropriate drivers, metadata, and even a sample reference design or application that lets you bring up your new widget without writing a single original line of code.
Of course, all of these highly integrated point-and-click, drag-and-drop, plug-and-play IP and tool solutions, combined with mega-integrated SoC platforms, create a potential problem for engineering as a career. With all of this raising of design abstraction, a lot of powerful systems that would previously have required a high level of expertise to engineer can now be slapped together in an afternoon by any halfway-competent technician using a not-too-expensive tool suite. You don’t need a top-flight engineer to drag and drop a few IP blocks and push the “GO” button.
So, where does our engineering expertise go in this brave new world where everything is programmable and tools shoulder an increasing share of the engineering workload? Well, we will always need engineers doing the hard-core, low-level stuff. If the dudes who understood PCI had stopped innovating back when we had 33MHz PCI IP for our FPGAs, we’d be in a pretty sad situation today. As progress marches forward, we’ll need increasingly specialized people designing the IP components that everybody else uses to realize their system designs.
We’ll also need more and better tool developers. The EDA industry is a mess today, and the revolution in SoCs has only put commercial EDA in a more precarious position. If it’s not careful, the EDA industry may find that it has created the tools that engineered its own demise. Regardless of that, however, there will be a need for insanely complicated design tools and for engineers with the skills to create them. Whether those tools come from independent EDA companies or from in-house development teams at SoC companies, they’ll require a sustained, monumental engineering effort to keep pace with the demands of both their customers and their silicon platforms.
Between those root-level engineers and the high-level system engineers, there will likely be a gap created by programmability. Simple systems will be designable by almost anyone. Complex systems will be designed by people with extreme domain-specific knowledge. For everything in between, well, hand a design kit to your middle-school kids and let them go to town.
In short, engineers will have to adjust their contributions to the times and to the technology if they want to continue their careers. But then again, that’s the way engineering has always worked.