
Just a Second

What Is a Second and How Do You Measure It?

It’s a bit like the chicken and the egg question. Do we improve accuracy in time-keeping in response to the needs of a new technology, or do we get new technologies because we can be more accurate in measuring time?

Early rural societies didn’t need accuracy much greater than morning, afternoon, dinnertime, and so on. As life became more sophisticated, accuracy grew more important. Urban societies required more co-ordination, and so public clocks were set up, often with bells to toll the hour and later the quarter hour. Long sea journeys, particularly the commercial and military routes to North America, drove the improvement of chronometers, where an accuracy of ± 2 seconds a day was sufficient to avoid shipwreck.

Railways drove changes in two major elements of time: a shared timescale and synchronisation. (These are still issues today.) Before railways, communities used local time: when the sun was at its height, it was noon. This was fine until train services started, and then you would turn up at the station to find the train was running to London time, using the guard’s watch, and had already left. Worse, two trains could be running to different times on the same stretch of track, and in 1853, in New England, 14 people were killed as a result.

If you were going to use “Railway Time”, you needed to synchronise the clocks along the line. Fortunately the electric telegraph provided a way to do this over very long distances. Of course, it was the long distances that caused the next problem. When railway time differed from local time by only minutes, this was tolerable, but when the difference stretched into hours, as it could across Europe, and even more so across the growing United States, it became a problem. In 1884, a congress in Washington established the longitude of the British Royal Observatory at Greenwich, to the east of London, as the prime meridian and effectively established time zones, based on positive and negative hourly offsets from Greenwich Mean Time (GMT); where the offsets met in the Pacific, the boundary became the International Date Line. (The French took a few more years to agree and to abandon the Paris meridian.)

There was already agreement that the day had 24 hours, each of 60 minutes, themselves each of 60 seconds. The second was therefore defined, around 1000 CE, by a Persian scholar as 1/86,400 (24 × 60 × 60) of a mean solar day, and the day remained the basis of the definition until 1967. The problem is that the earth does not rotate consistently: occasionally its rotation speeds up, but generally it is gradually slowing down. In the 1930s, quartz-crystal-based sources gave a consistent time source, but, in 1955, at Britain’s National Physical Laboratory (NPL), a caesium-based atomic clock provided the basis for an objective and repeatable measurement of a second.

If an electron in an atom is stimulated by an external source of electromagnetic radiation, it can move to a higher energy level. When it falls back to its original energy state, it releases the energy as radiation at a specific frequency. This frequency is always the same for a given transition in a specific atom (e.g. caesium 133), so we can use it to measure a second. Accordingly, the SI unit s (second) was defined in 1967 as “the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom.”
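The definition above is easy to sanity-check as arithmetic. A minimal sketch (the constant is exact by definition; the variable names are just for illustration):

```python
# The SI second as arithmetic: one second is, by definition,
# 9,192,631,770 periods of the caesium-133 hyperfine transition radiation.
CAESIUM_HYPERFINE_HZ = 9_192_631_770  # exact by definition

period_s = 1 / CAESIUM_HYPERFINE_HZ  # duration of one period: about 108.78 ps
print(f"One period lasts {period_s * 1e12:.2f} picoseconds")

# Counting that many periods gives back exactly one second.
print(f"{CAESIUM_HYPERFINE_HZ} periods x {period_s:.3e} s = "
      f"{CAESIUM_HYPERFINE_HZ * period_s:.6f} s")
```

In other words, an atomic clock is a counter: it locks an oscillator to this transition and declares a second elapsed every 9,192,631,770 cycles.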

The first caesium clock was accurate to ± 1 second in 300 years. NPL and other centres around the world now routinely use caesium fountains as clocks. These are accurate to ± 1 second in 60 million years, but they are big beasts that require very careful calibration. Strontium-based optical clocks are now better than ± 1 second in 15 billion years; for comparison, the earth is about 4.5 billion years old.
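Those headline figures are easier to compare as fractional accuracies. A rough sketch, assuming a Julian year of 365.25 days (the clock names and year counts follow the text above):

```python
# Converting "± 1 second in N years" into a fractional accuracy,
# to put the clocks mentioned above on a common scale.
SECONDS_PER_YEAR = 365.25 * 86_400  # Julian year, an assumption

clocks = {
    "first caesium clock (1955)": 300,
    "caesium fountain": 60e6,
    "strontium optical clock": 15e9,
}

for name, years in clocks.items():
    fractional = 1 / (years * SECONDS_PER_YEAR)
    print(f"{name}: ~{fractional:.0e} fractional accuracy")
```

The spread is striking: roughly one part in 10^10 for the 1955 clock, down to parts in 10^18 for today’s optical clocks.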

Alongside the drive to greater and greater accuracy (including putting clocks in space) is a drive to smaller and cheaper atomic clocks. Chip scale atomic clocks are now available, and rack-mounted atomic clocks are widespread.

But why are we pushing for this greater accuracy? In short, for the same reason as the railway companies, except that we are working with electronic communication systems.

The Internet relies on timing to transmit messages without collisions. GNSS (Global Navigation Satellite Systems) use timing to provide location information. Markets, such as the world’s stock exchanges, time-stamp their deals. And all these operations require very precise, fraction-of-a-second, synchronised timing.

As an example of this last: on June 3rd, 2013, Thomson Reuters published figures of US manufacturing output 15 milliseconds early. The data became available to subscribers through other channels at the correct time of 10:00 am, so those subscribing to Thomson Reuters had a window of advantage. Thomson Reuters blamed a clock-synchronisation problem for the early release. In those 15 milliseconds there were trades worth £28 million, and each of those trades would have been time-stamped by both parties.

While this was an error, Thomson Reuters were, at the same time, striking deals with subscribers to their news services to get market information five minutes before general release, if they paid a premium subscription, or for an even greater premium, five minutes and 2 seconds earlier. This second service was suspended when regulatory authorities began examining the June 3rd errors.

The use of computers for High Frequency Trading (HFT) means that latency within the transmission chain can be an issue. New York and Chicago are both important trading centres and are linked by conventional fibre cables. In secret, a second dedicated fibre network was built in a straighter line between the two cities, at a reported cost of $300 million, to bring the message round-trip time down from 14.5 milliseconds to a claimed 12.98 milliseconds. But since fibre has a refractive index of 1.5, signals travelling through fibre are slowed to about two-thirds of the speed of light in a vacuum. This has kicked off a renewed interest in microwave communications: an estimated 15 new microwave networks have been built between Chicago and New York, aiming to get close to the theoretical round-trip time of around 8 milliseconds.
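A back-of-the-envelope calculation shows where those numbers come from. The straight-line distance of roughly 1,145 km is my assumption, not a figure from the article:

```python
# Rough round-trip latency between New York and Chicago, assuming a
# straight-line distance of ~1,145 km (an approximation for illustration).
C = 299_792_458            # speed of light in vacuum, m/s
DISTANCE_M = 1_145_000     # assumed straight-line distance, metres
FIBRE_INDEX = 1.5          # refractive index quoted in the text

def round_trip_ms(distance_m, speed_m_s):
    """Out-and-back time in milliseconds at the given signal speed."""
    return 2 * distance_m / speed_m_s * 1000

fibre_ms = round_trip_ms(DISTANCE_M, C / FIBRE_INDEX)  # light slowed in glass
microwave_ms = round_trip_ms(DISTANCE_M, C)            # microwaves at ~c in air

print(f"ideal straight fibre:  {fibre_ms:.2f} ms round trip")
print(f"ideal microwave link:  {microwave_ms:.2f} ms round trip")
```

The ideal microwave figure of under 8 ms matches the theoretical floor quoted above; real routes are slightly longer and repeaters add delay, which is why even the best networks only get close to it.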

In Europe, redundant military towers have been bought and new towers built to set up microwave networks between the major trading centres of Frankfurt and London and the other national exchanges. All of these efforts support HFT, which has replaced the human interface of “open outcry” pits and telephone dealing, in place until toward the end of the last century, with a battle between geek-written algorithms. HFT has its critics, but the financial industry seems irreversibly addicted to it.

But the reliance on time isn’t limited to the Flash Boy community. A study published in 2011 estimated that about 6–7% of Europe’s Gross Domestic Product (GDP) – approximately 800 bn Euros (around US$ 900 bn) – was dependent on GNSS data, and the figure was rising. Even our rural communities now use GPS to manage their harvesting and other farm equipment. This makes logistics companies, shipping companies and many others vulnerable to jamming, spoofing or extreme space weather.

Jammers, which allow company drivers to mask their movements from their management, or car thieves to hide from law enforcement, are available on the web for a few pounds, despite being illegal. Their use also blocks GPS signals for several hundred metres around, with significant problems for emergency vehicles, for example. Spoofing is more sophisticated: it involves transmitting fake GPS signals and thus confusing, for example, the navigation system of a drone. This is believed to have happened in Iran.

Finally, extreme space weather, such as a coronal mass ejection from the sun, can be sufficient to wipe out satellite systems’ electronics: all satellites in one sweep. This is not science fiction. The Carrington Event of 1859 caused telegraph systems to burn out. And space weather is taken sufficiently seriously that it is among the top issues in the UK government’s Civil Emergency Risks Assessment, alongside war, a pandemic, and serious volcanic activity, like that which closed large sections of European air traffic in 2010.

Cell phones require synchronisation, and the window of time needed for syncing is getting smaller and smaller. A window of 1 millisecond was good enough for the early generations; 4G has to sync within 1.5 microseconds; and 5G, which is being billed as an enabler for the Internet of Things, is being specified at under 200 nanoseconds.
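One way to get a feel for those budgets is to ask how far a radio signal travels in each window. A rough illustration (the generation labels follow the text above; the computed distances are my arithmetic, not 3GPP figures):

```python
# How far a radio signal travels within each synchronisation window,
# as a physical intuition for why the budgets keep shrinking.
C = 299_792_458  # speed of light in vacuum, m/s

windows = {
    "early generations (1 ms)": 1e-3,
    "4G (1.5 microseconds)": 1.5e-6,
    "5G target (200 ns)": 200e-9,
}

for name, seconds in windows.items():
    print(f"{name}: signal travels ~{C * seconds:,.0f} m")
```

At 200 nanoseconds the budget corresponds to about 60 metres of signal flight, which is why 5G base stations need timing far beyond what a free-running quartz oscillator can hold.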

One thing that is causing ripples in the time community is the leap second. As we discussed earlier, the earth’s rotation is slowing, but we now have a nicely fixed second. This means that, gradually, Coordinated Universal Time (UTC), the internationally accepted time system based on the clocks at the NPL and many other centres around the world, moves out of phase with solar time. It could be argued that this doesn’t matter much, but in 1972 it was decided that it was a problem, and whenever the two drift too far apart, which happens at unpredictable intervals, an extra second is added to a day. The last time this happened was at 23:59:60 on June 30th, 2015. This can be a nightmare for companies relying on close timing, as they have to insert this second into their systems. It is also a nightmare for commercial data-processing systems (and even your desktop PC). There have been several attempts to abolish leap seconds, and the question will be debated later this year by the ITU (International Telecommunication Union), the body responsible for maintaining UTC.
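Part of the nightmare is simple representation: many software timestamp types cannot hold 23:59:60 at all. A small Python illustration:

```python
# Why the leap second is hard on software: many timestamp types simply
# cannot represent the extra second 23:59:60. Python shows both cases.
from datetime import datetime
import time

representable = True
try:
    datetime(2015, 6, 30, 23, 59, 60)  # the June 30th 2015 leap second
except ValueError:
    representable = False  # datetime only accepts seconds 0..59
print("datetime can store 23:59:60?", representable)

# time.strptime, following C's strptime, tolerates seconds up to 61:
parsed = time.strptime("2015-06-30 23:59:60", "%Y-%m-%d %H:%M:%S")
print("parsed tm_sec =", parsed.tm_sec)
```

Systems that cannot represent the extra second have to fudge it, for example by repeating a second or by "smearing" it across many hours, and different fudges on different machines are exactly the kind of disagreement that close-timing applications cannot tolerate.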

Looking at time as an issue, it permeates the whole of electronics, from synchronising clocks within a chip or across a board to the timing of the Internet, the world’s biggest machine.



Note: This article was triggered by a lecture given to the London Network of the IET (the UK’s counterpart of the IEEE) by Dr Leon Lobo of the NPL, to mark the 60th anniversary of the caesium clock. The NPL web site (npl.co.uk) is a treasure trove of information on time and the other SI measurements.
