

Much More than Just a Wriggly Line

If you are really up-to-date on what is happening in the world of oscilloscopes, then I am afraid that this Embedded Technology Journal Update is not for you – unless you want to go to our comments page and add your two cents’ worth of correction. But if, like me, you were vaguely aware that things are changing in the measurement field, then brace yourself.

The cathode ray tube, with its wriggly signal (OK, with its waveform), was so much the shorthand for “electronics” that The Plessey Company, for a while Britain’s leading electronics company, used a stylized screen trace as its logo. With the rise of digital systems, another analysis tool, the logic analyzer, was developed to look at the zeros and ones and provide, as its name suggests, some analysis of what was happening. These two sat beside each other on the bench, but the two boxes are now increasingly merging into one.

The last few years have seen an explosion in the use of high-speed serial buses. For moving large quantities of data around a system, or between different systems, serial communications dominate. USB 3, HDMI, SATA, and PCI Express (in its various incarnations) are all driving communication forward. (At lower bandwidths, buses like CAN, I2C, and FlexRay are now well established.)

Debugging systems that use these high-speed buses is a significant challenge, and the oscilloscope manufacturers are rising to it, not just with improved hardware but also with software, since today’s oscilloscopes are, from one perspective, effectively high-end, special-purpose PCs. They have evolved from all-analog circuits driving the CRT to all-digital systems with fast processors running an operating system (normally Windows), supported by hard disks, high-resolution screens, and their own high-speed communications links. Built on top of all this is the hardware that is specific to an oscilloscope: the analog-to-digital converter (ADC) devices (often many of them, interleaved to reach the speed needed for each channel) and the links from the ADCs to the system under test. Although these are still called probes, hooking into a board brings its own issues. Behind the ADCs sits specialist high-speed memory and, since engineers like to twiddle knobs, circuitry to link the array of knobs on the front of the box to the system.
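The interleaving idea mentioned above can be sketched very simply. In this illustration (an assumption for explanatory purposes, not any vendor's actual architecture), two ADCs sample the same signal with clocks staggered by half a sample period, and merging their two streams doubles the effective sample rate:

```python
def interleave(adc_a, adc_b):
    """Merge the streams of two ADCs whose sample clocks are staggered
    by half a period: a[0], b[0], a[1], b[1], ...
    The combined stream has twice the effective sample rate of
    either ADC alone."""
    merged = []
    for a, b in zip(adc_a, adc_b):
        merged.extend([a, b])
    return merged

# ADC A captures the even-numbered sample instants, ADC B the odd ones.
print(interleave([0, 2, 4], [1, 3, 5]))  # [0, 1, 2, 3, 4, 5]
```

In a real instrument the hard part is not the merge but matching the gain, offset, and clock skew of the interleaved converters, which is one reason high-end front ends are so expensive.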

(Some companies argue, “Why buy yet another PC? Instead, buy one of the PC add-on boxes we sell.” We will discuss this approach later.)

Normally, you use an oscilloscope to identify, and then remove or correct, noise, jitter, and timing issues. In oscilloscope specification terms, this means looking at the technology for capturing, storing, displaying, and analyzing the signals.

For capture, the oscilloscope’s bandwidth and sampling rate are important, and the bandwidth needs to be around two to three times the clock rate of the signal under test. This has led to a spec-sheet war between oscilloscope suppliers like LeCroy, Agilent, and Tektronix. At the top end of the data sheets are bandwidths of around 30 GHz, with sample rates 2.5 to 3 times that; for example, 80 GS/s (gigasamples per second). One data sheet also cites memory sufficient for 512 megapoints of analysis and edge triggering at greater than 15 GHz.
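Those rules of thumb compose neatly. As a back-of-the-envelope illustration (the multipliers and the 10 GHz clock below are assumptions for the example, not figures from any vendor's data sheet):

```python
# Rule-of-thumb scope sizing. The factors (2-3x clock for bandwidth,
# 2.5-3x bandwidth for sample rate) follow the article's guidance;
# the specific values chosen here are illustrative assumptions.

def required_bandwidth_hz(clock_hz, factor=3.0):
    """Bandwidth of roughly two to three times the signal's clock."""
    return clock_hz * factor

def required_sample_rate_sps(bandwidth_hz, factor=2.5):
    """Sample rate of roughly 2.5 to 3 times the bandwidth."""
    return bandwidth_hz * factor

clock = 10e9                           # hypothetical 10 GHz serial clock
bw = required_bandwidth_hz(clock)      # 30 GHz of bandwidth
rate = required_sample_rate_sps(bw)    # 75 GS/s sample rate
print(f"bandwidth ~= {bw / 1e9:.0f} GHz, sample rate ~= {rate / 1e9:.0f} GS/s")
```

Run for a 10 GHz clock, this lands at roughly 30 GHz of bandwidth and 75 GS/s, which is exactly the neighborhood of the top-end data-sheet figures quoted above.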

These oscilloscopes will have multiple channels for both analog and digital data and will have large screens (up to 15”) for data display. These large screens are necessary, not just for a clearer and easier-to-read display of the analog signal, but also to make it easier to display both analog and digital data at the same time.

At these high speeds, it has been a long time since probes were simple, passive wires. Clearly, as soon as you attach a probe to a circuit, it becomes a part of the circuit and exerts an influence on the very signal under test. So probes themselves are now complex, active subsystems with characteristics that need to be closely matched to the signals and devices being probed.

Larger memory, as well as sustaining the torrent of data from the system under test to the display screen, also provides a huge boost in problem solving. For example, setting a trigger that fires when a specific state occurs is a common test activity. With large amounts of memory, it is now possible to examine not just the system state when the trigger fires, but also the activity before the trigger state; the amount of activity that can be displayed is, obviously, limited by the amount of memory. Since the root cause of an event can occur several seconds before the event itself, which for a high-speed channel can represent many megabytes of data in both the digital and analog streams, the deeper the memory, the greater the chance that the cause can be identified.
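The mechanism behind pre-trigger capture is essentially a ring buffer: the scope records continuously, old samples fall off the back, and when the trigger fires, whatever history the memory could hold is frozen. A minimal sketch of the idea (toy data and trigger condition are assumptions for illustration):

```python
from collections import deque

def capture_with_pretrigger(samples, trigger, depth):
    """Continuously record into a ring buffer of `depth` samples.
    When `trigger(sample)` first fires, return the buffer contents:
    the pre-trigger history plus the triggering sample itself.
    Deeper memory preserves more history before the trigger."""
    history = deque(maxlen=depth)  # oldest samples silently discarded
    for s in samples:
        history.append(s)
        if trigger(s):
            return list(history)
    return None  # the trigger condition never occurred

# Toy stream: normal activity, then a glitch (value 9).
data = [0, 1, 0, 1, 0, 9, 0]
print(capture_with_pretrigger(data, lambda s: s == 9, depth=4))
# [0, 1, 0, 9] -- three samples of pre-trigger context plus the glitch
```

With a deeper buffer, more of the activity leading up to the glitch survives, which is exactly why memory depth matters for root-cause hunting.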

Those of you with long memories may recall Polaroid camera attachments. These were mounted onto the front of the CRT and, if you pressed the button at exactly the right time, they captured a particular image permanently. Younger readers may remember printing oscilloscopes that drew traces on scrolling paper, either with nibs at the ends of arms (think of the lie detector in many movies) or with an array of inkjet heads. Today, both captured and analyzed data are stored on a hard drive and transmitted, via USB, Ethernet, or proprietary buses, to “ordinary” PCs for further analysis and for project documentation and archiving. There is clearly significant value in being able to access the actual test and conformance data at future times, for certification or for field support.

Ethernet capability, as well as providing communication for data storage, also allows live analysis to be watched by people outside the test lab, or the oscilloscope to be controlled remotely.

Since the upper-end oscilloscopes have significant processing capability, it is possible to add functionality through software. One of the most significant of these additions is protocol identification and analysis. The ability to correlate a part of the message, say the message-length field in a header packet, with the other elements of the signal can be extremely valuable in helping to identify strange elements.
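To make that length-field correlation concrete, here is a minimal sketch using a made-up frame layout (the one-byte length field at offset 0 is an assumption for illustration, not any real bus standard):

```python
def check_length_field(frame):
    """Toy protocol (illustrative assumption): byte 0 declares the
    payload length; the remaining bytes are the payload. A mismatch
    between the declared and observed length is exactly the kind of
    'strange element' protocol-aware scope software flags."""
    declared = frame[0]
    observed = len(frame) - 1
    return declared == observed

print(check_length_field(bytes([3, 0xAA, 0xBB, 0xCC])))  # True: lengths agree
print(check_length_field(bytes([3, 0xAA, 0xBB])))        # False: payload truncated
```

A real protocol decode does this across thousands of captured frames, cross-referencing each decoded field against the analog waveform that carried it, so an electrical glitch can be tied directly to the corrupted bit it produced.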

Protocol understanding can also be used to assist in protocol conformance testing for new controller chips. The larger manufacturers provide software that reads the data being transmitted from a device under test and evaluates whether it matches the defined standard. For receiving devices, the oscilloscope generates traffic and then measures how the device copes.

The functionality developed for these extreme machines is now migrating down to the midrange. While midrange machines can neither reach the ultra-high speeds of the top-end machines nor match their sampling rates or memory depth, they are more than adequate for measuring and providing conformance testing for the established serial standards and for a wide range of other applications.

The lower-end products from the larger manufacturers are also benefiting from trickle-down, although with fewer channels, or perhaps only analog capability. The bigger and more established companies are using their feature-rich oscilloscopes to maintain their sales in the face of very low-cost competition from the Far East, particularly China. These instruments, originally not much more than digital versions of the old analog CRT boxes, are beginning to provide very cost-effective options for simple testing or production-line use.

So far we have talked about only the dedicated boxes, but there is significant growth in “add-in oscilloscopes.” These are external boxes that connect to a PC (usually through USB) and carry out the signal capture, leaving the general-purpose computing — display, storage, and user interface — to the PC. These are often viewed as just a low-performance, low-cost option, particularly useful for field support, since field staff will normally carry a laptop. But external oscilloscopes are now pushing well into the mid- to upper-range field, with the top end offering bandwidth exceeding 10 GHz. These have to be a serious option for a wide range of test and measurement activities.

You can spend serious money on top-end oscilloscopes. By the time you have added complex probes, options, and software packages, you can easily blow a quarter of a million dollars. (Yes, I do mean $250,000.) On the other hand, for a couple of hundred dollars you can have a very competent piece of kit, with far higher speeds and functionality than were generally available only a few years ago.

Good oscilloscopes are still the defining badge of the electronics engineer. Matching oscilloscope capabilities, such as core hardware, accessories, probes, and software, against the system under development can provide otherwise unobtainable information about the system. And knowing the capability of the oscilloscopes that will be used in device and system test and bearing those capabilities in mind during system development and board layout can play a significant part, not just in improving testing but in reducing time-to-market and improving end-product quality. And those top-end oscilloscopes are just amazing.
