For aspiring historians, a topic that regularly causes concern is that of history and memory: surely first-hand eyewitness reports of an event must carry greater evidential weight than documents? But documents are static, while human memory is fragile: it changes and evolves. Take a fisherman who catches a big fish – as he tells the story over the years, the fish gets bigger and the fight longer.
Lord, grant that I may catch a
fish so big that even I,
when speaking of it afterwards,
may have no cause to lie.
But the discrepancy between memory and documents sometimes works the other way when trade show and conference organisers publish their attendance figures. Exhibition halls remembered as so deserted that tumbleweed was rolling through the aisles were, in the organisers’ figures, packed wall to wall. One show in the autumn of 2009 presented a particularly stark mismatch between memory and documents. To be fair, I was there on only one day, but it still takes a very serious leap of imagination to people the hall with the published numbers.
Late in 2009, I was at the Grenoble IP event, organised by Design and Reuse. In clearing my overflowing email in-tray last week I unearthed the published attendance figures, and they were actually very interesting. The event had been extended to three days, with the third day moving into the embedded space, and attendance was around 500 over three days instead of 400 over two. This feels about right, matching my memory of the session audiences and the queues at the tea, coffee and lunch points. While this is not a huge number, it is still more than holding its own against the recession. But what is even more interesting is the organiser’s analysis of three sets of figures: attendance, panel speakers, and the presenters of technical papers on design with IP.
Audience attendance was a third from France, about 6% from Asia and 5% from the Americas. The panel sessions were dominated by the Americas (actually the US) with 33% from there, and only 4% from Asia. But the design papers show a very different story, with 45% of the papers from Asia and only 9% from the Americas. And, according to a conversation at the show, the number of attendees and paper presenters from India was artificially low, following visa problems. So look east for the future. (Or west across the Pacific – if you are standing on the West Coast of the US.)
In 2008, the IP Conference took place in a world that appeared to be falling apart financially, and the mood reflected this. In 2009, the atmosphere was more upbeat – perhaps cautiously optimistic would be the best description. And coupled with this was a lack of smoke, mirrors and hyperbole that could reflect a maturing IP industry.
Once again ARM is seeing over a billion units shipped each quarter, and Eric Schorn from ARM, who gave the opening keynote address, sees today’s IP market as composed of more than 400 suppliers offering over 6,500 components. Revenue is running at over $1.5 billion, and he predicts that the IP market will keep growing faster than the overall semiconductor business as IP’s role in the design process expands, particularly as companies come under even greater pressure to reduce costs yet get products to market faster. In fact (history alert) he traces the roots of the growth of IP to Adam Smith’s Wealth of Nations, published in 1776, which described the division of labour.
Division of labour explains the separation of design and manufacture, with the growth of fab-lite and fabless companies concentrating on design, leaving process and manufacturing to specialist foundries. (In itself a topic for a very interesting panel at the conference.) Extending Schorn’s metaphor, just as it is now very rare for a design company to design a new processor for an SoC, it is increasingly likely that they will rely on IP to bring things like high speed communications into a design. Well-designed and well-supported IP means the design companies do not have to hire in someone with specific application skills that are peripheral to the central role of the design (or even attempt to master them in-house).
To return to Schorn, he freely shared what he sees as the reasons for ARM’s success, providing advice to other IP providers as ten top tips. (The full presentation is on the web at http://www.design-reuse.com/webinar/intro/arm). I don’t think it is unfair to say that there were no earth-shaking revelations, but there was a lot of common sense, including reaching out beyond the customer to the customer’s customer, creating an eco-system of partners, customers and other third parties, creating an appropriate business model, etc., all based on a thorough understanding of the business.
The bulk of the conference was aimed more towards the chip designer and user of IP. The threads that emerged were in themselves interesting in showing how IP is evolving.
In-house re-use of RTL is still a strong way of achieving the next generation of products, but, since the RTL was written for a particular design, it is proven-in-use only for that design. Tweaking it for another design can be very time consuming. Figures from Numetrics are rather devastating – sometimes it can be almost as time- and effort-consuming to re-use as it is to design from scratch, particularly if you are using only 60 per cent or less of the RTL. One of the issues is the time taken for verification, testing and so on. Kathryn Kranen of Jasper argues strongly that the use of formal techniques can provide detailed insight into existing RTL, so that making changes, and then verifying those changes, can be much easier.
An interesting new area is not the functional IP that we normally think of, but infrastructure IP. Features such as BIST (built-in self-test), repair, diagnostics, and yield-enhancement tools are all now integral to the SoC. Yervant Zorian of Virage Logic (which sells IP for on-chip test and repair of memories) sees not only these growing to cover the entire device, but also the new area of physical IP, such as RF, biochip, MEMS and optical connectivity, requiring its own functional IP.
If you are building a large SoC from a number of IP blocks, interfacing them efficiently, and with the minimum of design-integration and test effort, is still a significant challenge. Consortia like Spirit (now merging with Accellera) are working on standards like IP-XACT. There is also industry discussion about using bus-like connectivity, where each IP block has a standard bus interface and communicates across the bus with the other blocks. (ARM’s AMBA already provides this, but it is seen primarily as a way of connecting a processor with other elements.) While this is clearly not always going to lead to efficient use of silicon real-estate, the potential for faster time-to-market, through reduced design and simplified testing, makes this a very attractive path. (If you are at DATE in Dresden in March, IC Design and Verification editor Bryon Moyer is chairing a session, Are we there yet? Has System Assembly from IP Blocks Become Like Connecting LEGO Blocks?)
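The appeal of bus-like connectivity is easier to see with a toy model. The sketch below (illustrative only – the class and method names are my own, not any real standard’s API) shows the idea behind AMBA-style interconnect: every block exposes the same small interface, so the bus can route traffic to any block without bespoke glue logic between each pair.

```python
class Bus:
    """A trivial shared interconnect: blocks register at a base
    address, and the bus routes reads and writes to the owner."""
    def __init__(self):
        self.blocks = {}          # base address -> attached block

    def attach(self, base, block):
        self.blocks[base] = block

    def write(self, base, value):
        self.blocks[base].bus_write(value)

    def read(self, base):
        return self.blocks[base].bus_read()


class IPBlock:
    """Any block exposing this same two-method interface can be
    dropped onto the bus – that uniformity is the whole point."""
    def __init__(self):
        self.reg = 0

    def bus_write(self, value):
        self.reg = value

    def bus_read(self):
        return self.reg


bus = Bus()
uart, dma = IPBlock(), IPBlock()   # stand-ins for real IP blocks
bus.attach(0x1000, uart)
bus.attach(0x2000, dma)
bus.write(0x1000, 42)
print(bus.read(0x1000))            # prints 42
```

A real interconnect adds arbitration, burst transfers and address decoding, but the integration benefit is the same: adding a block means attaching it, not redesigning the wiring.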
Buses carry us to another area of interest: multiple processors and multi-core processors. For some time now, SoC and ASIC implementations have used multiple heterogeneous processors. While these are not simple to program as a whole, the different roles that the processors play make it relatively simple to program each for its specific task. Communication between processors is determined by the requirements of the information flow and can be planned in advance to run efficiently.
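That planned-in-advance data flow can be sketched in software. The example below (a minimal single-threaded simulation, with invented task names) models three heterogeneous "processors" – a capture core, a DSP and a control CPU – each dedicated to one job, with fixed links between them carrying the data in a known direction.

```python
from queue import Queue

capture_to_dsp = Queue()   # fixed link: capture core -> DSP core
dsp_to_cpu = Queue()       # fixed link: DSP core -> control CPU


def capture_task(samples):
    """Stand-in for an ADC front-end core: pushes raw samples."""
    for s in samples:
        capture_to_dsp.put(s)
    capture_to_dsp.put(None)        # end-of-stream marker


def dsp_task():
    """Stand-in for a signal-processing core: transforms samples."""
    while (s := capture_to_dsp.get()) is not None:
        dsp_to_cpu.put(s * 2)       # placeholder for real DSP work
    dsp_to_cpu.put(None)


def cpu_task():
    """Stand-in for the control CPU: collects the results."""
    out = []
    while (v := dsp_to_cpu.get()) is not None:
        out.append(v)
    return out


capture_task([1, 2, 3])
dsp_task()
result = cpu_task()
print(result)                       # prints [2, 4, 6]
```

Because each link's direction and contents are known at design time, each core's program stays simple – which is exactly the property that homogeneous multi-core and multi-OS systems give up.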
Two factors are complicating this picture: the increased complexity of software, where the use of an operating system becomes necessary, and the use of homogeneous multi-core devices. (As an aside, Colin Walls of Mentor, in discussing multi-OS on multicore systems, was delighted to see that the US Department of Defense has ordered 2,200 PlayStation 3s (PS3s). A PS3 provides 60% of the power of an IBM device at 10% of the cost, and the DoD has already proved the concept with a 336-PS3 cluster.)
Both of these factors mean that developing and debugging software is going to get even more complicated, even more time consuming, and even more important to the market acceptance of the end product.
The conference covered a wide range of other material, and much of it is on-line. There seems to be a general acceptance that IP quality is high and still improving, and the relationship between the IP provider and the user is continuing to improve, with a high level of trust.
The message I came away with, or rather the reinforcement of a message that is widely known, is that, while life is going to continue to be challenging, IP can be expected to become even more useful. With the move to even more demanding process nodes, a single design re-spin has the potential to wipe out the lifetime profit on a device, through increased NRE costs and missed market opportunities. This means that future generations of ASICs and SoCs are going to have a very high content of IP and reused RTL (and software). The big question is where developers are going to get the tools that will give them confidence that both hardware and software will perform in service as the specifiers intended.