
Power, Sensors, Clock Trees, Multicore and Compression Algorithms

September, at least in the Northern Hemisphere, is often seen as the real start of the year. Companies are returning from their summer holidays and revving up with new promotional activities and, particularly in even-numbered years in Europe, starting to work towards the huge techno-fest that is electronica in Munich in November. This may be an interesting observation, but why is it relevant? Well, in the last few weeks, I have been exposed to a raft of interesting things, many of which would be worth a whole article in their own right, but, given the limitations of space and time, I have decided to bundle together several stories from across a wide spectrum of electronics.

Power supplies

Much of what is written in the electronics media concentrates on digital chips and their design and manufacture. We are probably as guilty as most in focusing on these areas, but, after designs have been implemented in these ever-more-challenging process nodes, the chips have to go onto a board, and then they require power. Given the huge efforts that go into reducing the power consumed within SoCs, ASICs, and microcontrollers, it is perhaps surprising that getting power to them as efficiently as possible doesn't get as much public attention. I recently sat through presentations from companies with power supply products at a number of levels. Clearly, if you are doing power conversion, the energy that the converter itself uses is going to be a critical factor: firstly, higher energy use comes with a cost, and, secondly, the wasted energy appears as heat, which has to be dissipated.

Intersil was talking, amongst other things, about its range of automotive power products. Automotive environments are particularly nasty, and automotive applications cover a huge range of technology. While 12 volts is still the market standard for car batteries, some European car makers are working towards 48V. At the same time, the electronics (the engine control units, the infotainment systems, and other electronic systems) run on "standard" 5V or 3.3V. To match this, Intersil is launching a range of devices. One is a 55V synchronous buck controller (ISL78268), supporting currents from 5A to over 25A and input rails of 12, 24, and 48V. At almost the other end of the scale is a device to manage the battery for the e-call safety system. In Europe, all new cars have to be fitted with a system that will call the emergency services after an accident. Clearly, this has to be powered by an independent battery, and that battery needs to be kept in peak charge, even after the car has been standing for a long time and the e-call system has been dormant. The ISL78692 is designed for just that task, in a 3×3 mm package (pass the magnifying glass).

XP Power is working in a different domain: AC-DC power modules and, in particular, a new family that can be PCB, chassis, or DIN rail mounted. The ECE60 family provides 60W at voltages from 3.3V to 48V, with efficiencies of up to 89%, in a convection-cooled package of only 91.4 × 38.1 × 28 mm.
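The link between a converter's quoted efficiency and the heat it must shed is simple arithmetic: whatever is drawn from the input but not delivered to the load becomes heat. A rough sketch, using the ECE60's published numbers (60 W out at up to 89% efficiency):

```python
def converter_heat(p_out_w: float, efficiency: float) -> float:
    """Power dissipated as heat by a converter delivering p_out_w watts
    at the given efficiency (0 < efficiency <= 1)."""
    p_in_w = p_out_w / efficiency   # power drawn from the input
    return p_in_w - p_out_w         # everything not delivered becomes heat

# ECE60 family: 60 W output, up to 89% efficient
print(round(converter_heat(60, 0.89), 1))  # about 7.4 W to be shed by convection
```

That 7.4 W or so is what the convection-cooled package has to get rid of; at lower efficiencies the figure climbs quickly, which is why the efficiency number matters so much.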

XP will be launching other power products before electronica, as will Exar.


Clock trees

Getting timing issues resolved on a single chip can be a problem, so just think of the timing problems for the Internet of Things. Look at all the different elements, from the things themselves, through hubs, switches, and networks, to the racks of servers, all with their own clocks, and some with multiple clocks. As many of these clocks are supplied by Silicon Labs, the company has developed Clock Tree Expert, a tool for creating clock trees for Internet infrastructure, from top to bottom, including networking, telecoms, and data centres.


Sensors

TSensors Summit is an organisation trying to pull together roadmaps for the trillion sensors that will, in time, form the external interfaces to the IoT, amongst other things. According to the organisation, the number of sensors in consumer applications alone has already risen from 10 million in 2007 to 10 billion in 2014.

I rather naively expected the sessions on roadmaps to actually put forward roadmaps that would indicate what different technologies were going to offer to different markets, and in what timescales. What I hadn't realised is how early we are in the roadmap process; instead, a number of the sessions were on building the roadmaps. Once I wrenched my sense of perspective around, things got interesting and, at times, mind-blowing. For example, Ira Feldman, a consultant, looked at the Global Challenges of hunger, healthcare, a clean environment, clean energy, and education. For each of these, it is apparent that the use of sensor technology can contribute to meeting the challenge. For hunger, the growth of smart food production, such as improved aquaculture, vertical farming (where you stack huge growing trays above each other to make the best use of limited space), deploying robots in the field (literally in the fields), and other techniques can help to increase the amount of food grown and possibly improve its quality. All these applications are going to rely heavily on sensors.

This part of the roadmap is missing the studies that will define the problems more precisely and then look at technologies, including sensors, that can help to solve them. This work will also identify gaps in the current technologies and look at how those gaps can be filled. In other sessions, I came across a new piece of jargon for this: Technology Readiness Level (TRL – a new TLA). The levels run from 1 (an area identified for lab research) to 9 (technology actually deployed in real-life applications). Assessing the TRL of a technology allows the developers of a roadmap to arrive at a more realistic timescale for when it can actually be useful.
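For the curious, the scale between those two endpoints can be sketched as follows. Note that only levels 1 and 9 are defined in the text above; the intermediate labels here follow the widely used NASA-style definitions, and the near-term threshold is an illustrative choice, not anything from the summit:

```python
# Technology Readiness Levels, 1 (lab research) to 9 (deployed).
TRL = {
    1: "basic principles observed (area identified for lab research)",
    2: "technology concept formulated",
    3: "experimental proof of concept",
    4: "technology validated in the lab",
    5: "technology validated in a relevant environment",
    6: "technology demonstrated in a relevant environment",
    7: "prototype demonstrated in an operational environment",
    8: "system complete and qualified",
    9: "technology deployed in real-life applications",
}

def near_term(trl: int, threshold: int = 7) -> bool:
    """Roadmap heuristic: treat anything at or above the threshold
    as realistic for near-term deployment planning."""
    return trl >= threshold

print(TRL[9])  # the level a roadmap ultimately aims for
```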

The same analysis process is going ahead, not just for the other Global Challenges, but also for the industrial, automotive, and consumer marketplaces.

I will be tracking TSensors Summit and reporting back on it in more depth in due course.


Multicore

Multicore wasn’t invented by Intel in 2006 – it has been achievable since INMOS launched the transputer in 1984 (although we called it parallel processing, looking at the activity rather than the objects needed to achieve it), and it was certainly discussed for years before that. Despite this, there is still ferocious debate over architectures, programming models, and languages, as well as implementation details. However, last month’s Multicore Challenge in Bristol was less ferocious than many, perhaps because many of the speakers were – either directly or indirectly – ex-INMOS. (Declaration of interest – that covers me as well.)

The discussion has moved on in some areas: multicore is now heterogeneous rather than homogeneous, for example. Your cell phone has multiple processors, some with conventional architectures, others labelled as Graphics Processing Units. Imagination Technologies, seen very much as a GPU IP company, changed the game when it bought MIPS, and it used the meeting to describe the new i6400 core (or family of cores) with multithreading and virtualisation. The cores can also be used in multiples in SoCs and ASICs. Jim Turley reviewed it a few weeks ago. XMOS, Infineon, and Huawei also discussed new products. None of them, however, gave marketing presentations; instead, they used their products as examples of particular approaches to multicore architecture.

But David May (the implementer of the transputer, Professor of Computing at Bristol University, and founder of XMOS) tried to bring us back to some of the basic issues in a keynote. He argued that, while we are working in a wide range of different computing areas, we are still mainly reliant on an architecture that was developed to run the desktop computer. It is time to go back to a simpler model, both of architectures and of the tools used to work with them. Gaining the processing power that large-scale computers are going to need will also require investment in development tools and in training users to develop efficient working practices. Today’s inefficient approaches to programming are not going to be good enough.

The same message came up, somewhat in reverse, in the closing panel discussion, where it was argued that the incredible complexity of many things is a barrier to creativity. My favourite quote of the day is from May: “We have focused a huge amount of investment on chasing Moore’s Law. If a tiny fraction of that money had been spent on looking at software and algorithms, one would have thought we would have made a lot more progress.”

Compression algorithms

One of the things that is interesting about Europe is the number of small companies that are getting on with things that are important but don’t make a huge song and dance about them. (As opposed to some big companies that make a huge song and dance about announcements that are, frankly, insignificant.)

One I recently met is Westwood Rock. This is a hardware and software design consultancy with a bunch of bright people and a track record of successful projects, only some of which they can talk about; others are still covered by confidentiality agreements. Thanks to the Internet, they all work from home offices, using Skype and the occasional face-to-face meeting. This removes a huge overhead from their business.

The reason for meeting them was that they have developed IP for the silicon implementation of a video compression standard. So what?

Well, the standard is unusual. It was developed by the BBC (the British Broadcasting Corporation – the UK’s national broadcaster and a technology innovator for nearly 100 years) to cope with shipping the vast quantities of data that make up a high-resolution TV signal, both around the studios and from outside broadcasts. The end result is Dirac, a video compression system built with the objectives of high-quality images and low latency in a small implementation footprint, and which is also standardised, non-proprietary, and royalty free. It is recognised as SMPTE standard ST 2042-1:2012 and is also called VC-2.

It achieves a compression factor of 8, so the signals for HD-SDI television, which run at 1.5Gbit/s, can be carried over a 270Mbit/s link, and, at the same time, a standard monitor can be used to see the picture in real time, albeit at reduced quality.
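Those figures are easy to sanity-check: compressing a 1.5Gbit/s HD-SDI signal by a factor of 8 leaves comfortable headroom on a 270Mbit/s link.

```python
HD_SDI_BITRATE = 1.5e9   # bits per second, uncompressed HD-SDI
LINK_CAPACITY = 270e6    # bits per second, the link quoted above
COMPRESSION = 8          # Dirac/VC-2 compression factor

compressed = HD_SDI_BITRATE / COMPRESSION
print(compressed / 1e6)             # 187.5 Mbit/s
assert compressed <= LINK_CAPACITY  # fits, with roughly 82 Mbit/s to spare
```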

A broadcaster may compress and decompress a signal through six to eight generations as recording, storage, editing, and transmission take place, so Dirac is tuned for minimum loss of quality.

Dirac is designed to be as future-proof as possible, so it is not tied to a specific picture format. At the same time, the latency is at worst a single frame and may be only a few lines.
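The gap between a whole frame of latency and a few lines is worth putting in numbers. A rough sketch, assuming a hypothetical 1080-line picture at 50 frames per second (the article deliberately does not tie Dirac to any particular format):

```python
FPS = 50            # assumed frame rate, for illustration only
ACTIVE_LINES = 1080  # assumed picture height, for illustration only

frame_latency_ms = 1000 / FPS                      # worst case: one whole frame
line_latency_ms = frame_latency_ms / ACTIVE_LINES  # approx. time per picture line

print(frame_latency_ms)               # 20.0 ms for a full frame
print(round(8 * line_latency_ms, 2))  # a few (here 8) lines: about 0.15 ms
```

So a few lines of latency is on the order of a hundred microseconds, versus tens of milliseconds for a frame: a difference of two orders of magnitude, which matters when signals pass through several pieces of equipment in a studio chain.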

Westwood Rock has taken this standard and implemented a codec in hardware. The IP has been developed under a rigorous process, so it can be certified for high-reliability applications. The implementation is also deterministic, again helping it to achieve that certification. It has been implemented in an FPGA in a relatively small number of gates, with no need for extra memory, and is also available as IP for ASIC or SoC implementations, although Westwood Rock could carry out a complete design for a customer if required.

OK, not earth-shattering, but still a nice piece of engineering, the sort of thing that is a pleasure to learn about and share.
