
Measuring Air

Browser Benchmark Diagnoses End-to-End Connections

How fast is your browser? Not fast enough, I’m sure. But pinning down the actual speed of a Web browser is a tricky thing. It might be fast at rendering images but saddled with a slow TCP/IP stack. Or it may be quick at interpreting JavaScript applets but suck at Flash animation. If you’re a programmer, creating and debugging an embedded browser is a big, complex project that exposes a whole stack of interrelated bits and pieces, any of which can go wrong.

To help you wade through this complexity, the nice folks at the Embedded Microprocessor Benchmark Consortium (EEMBC for short) have created BrowsingBench, “a tool that determines the effectiveness of hardware and software in processing and displaying Web pages.” BrowsingBench digs into why your browser is slow—or where it’s fast—so that you know where to focus your efforts. 

There are a couple of browser benchmarks available already, but EEMBC believes its BrowsingBench is unique in providing an end-to-end diagnostic of all the layers and components that make up the end user’s experience. For example, it measures everything from the first mouse click (or screen tap) on a link all the way until the entire page is loaded and rendered. BrowsingBench is available to any EEMBC member.

To make it more of a “real world” test than other synthetic benchmarks, BrowsingBench uses actual Web content, such as Google, Yahoo, Wikipedia, and other sites that an end user is likely to visit. (The content has been frozen to avoid day-to-day variances in benchmark results.) All the content is delivered from a live Apache server, so there are no canned or cached pages as part of the test. It’s that server, not the client being tested, that measures loading and rendering times. This avoids problems with (and outright cheating by) clients that don’t have accurate clocks or timers, and makes results more repeatable and verifiable.
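The server-side timing scheme is easy to sketch. The following is a hypothetical illustration of the idea, not BrowsingBench's actual code: the server, which has a clock it can trust, brackets each page load itself, so the client under test never reports its own numbers.

```python
import time

# Hypothetical sketch of server-side load-time measurement: the server,
# not the client under test, brackets each page load with its own
# monotonic clock, so an inaccurate (or dishonest) client clock cannot
# skew the results.
class LoadTimer:
    def __init__(self):
        self.results = {}

    def start(self, page):
        # Called when the first request for a page arrives at the server.
        self.results[page] = {"start": time.monotonic()}

    def finish(self, page):
        # Called when the last byte of the page has been served.
        rec = self.results[page]
        rec["elapsed"] = time.monotonic() - rec["start"]
        return rec["elapsed"]

timer = LoadTimer()
timer.start("index.html")
time.sleep(0.05)  # stand-in for the client fetching and rendering the page
elapsed = timer.finish("index.html")
```

Because every measurement comes from one clock on one machine, two runs of the same frozen content are directly comparable, which is the repeatability argument in miniature.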

BrowsingBench works on pretty much any of the standard browsers, including Internet Explorer, Safari, Firefox, and Opera. JavaScript is about the only required element. You can even dial in extra latency to simulate slow wireless connections. The Flash component can be turned off for browsers that (ahem) choose not to support it. If you’re building a device with an embedded browser, you should check it out.
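That latency dial is easy to mimic locally. Here's a hedged sketch, assuming nothing about BrowsingBench's internals: a throwaway local HTTP server that sleeps before answering, plus a timed fetch showing the injected delay turning up in the measured load time.

```python
import http.server
import threading
import time
import urllib.request

EXTRA_LATENCY = 0.2  # seconds of simulated wireless round-trip delay

class DelayedHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(EXTRA_LATENCY)  # inject the artificial latency
        body = b"<html><body>hello</body></html>"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Serve on an ephemeral port in a background thread.
server = http.server.HTTPServer(("127.0.0.1", 0), DelayedHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Time one fetch; the artificial delay is part of the measured load time.
start = time.monotonic()
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    body = resp.read()
elapsed = time.monotonic() - start
server.shutdown()
```

A real benchmark would shape latency at the network layer rather than in the handler, but the effect on the end-to-end number is the same.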

In other news, last week saw the conclusion of the 23rd annual Hot Chips conference at Stanford University. Held at, though not by, Leland Junior’s eponymous campus, it featured the usual assortment of nerds with PowerPoint slides. Adding to the milieu was your humble narrator, hosting an evening panel entitled “The Ecosystem Wars: It’s Not Just About Architecture.”

On the panel, we had representatives from Intel and ARM, the two processor companies that come most readily to mind for embedded designers; speakers from two software companies; and, for variety, one participant who was a PhD CPU designer, Air Force pilot, and IEEE Fellow.

We quickly agreed that yes, ecosystems are important, but rapidly diverged on what “ecosystem” means in the context of engineering development. Sure, it includes the compilers and debuggers available for a specific chip; maybe also the operating systems and EDA tools. But are motherboards part of the ecosystem? What about batteries, open-source software, or training courses? To some, the ecosystem included absolutely everything but the chip itself. Which may be correct.

Interestingly, the ARM speaker credited that company’s business model for the growth and success of its ecosystem. ARM’s customers—the company prefers to call them partners—succeed precisely because ARM doesn’t make chips. Instead, it provides a “framework” or a “platform” on which software, tool, and silicon companies can base their own products.

In direct contrast, the Intel representative made the same claim by crediting his employer’s business model, even though it’s diametrically opposed to ARM’s. The two companies could not be more different in terms of how they engage with developers, yet both were convinced (in public, at least) that their approach was the exact right one for nurturing a big, strong, happy ecosystem. Such is the nature of competition.

It’s hard to argue with success. At last count, there were 12 times more ARM processors in the world than Intel processors. Twelve times! That’s a whole order of magnitude and then some. Use that factoid to win your next bar bet. And ARM is a much younger company than Intel, so it’s achieved this remarkable popularity in less time.

And yet ARM isn’t nearly as rich as Intel. Wall Street values ARM at $30 billion while Intel is worth $100 billion. So even though ARM’s chips are more popular, Intel is still worth more as a company. That’s because Intel builds and sells actual chips, while ARM merely waves its corporate hands pontifically, intoning “thou art hereby granted permission to make chips in my image.” ARM collects a small royalty (on the order of 5–25 cents) for each chip, while Intel sells physical chips at full price and keeps all the money. For all of ARM’s popularity and success, Intel still seems to have the more lucrative business model.
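The napkin math behind that gap is worth a quick look. Only the 5–25-cent royalty range above comes from the article; the unit volumes and chip price below are deliberately made-up placeholders.

```python
# All figures here are hypothetical placeholders except the royalty
# range quoted above; this is back-of-envelope math, not financial data.
arm_units = 12e9        # assume 12 billion ARM chips shipped (made up)
intel_units = 1e9       # assume 1 billion Intel chips (made up: 12x fewer)
arm_royalty = 0.15      # dollars per chip, mid-range of the 5-25 cent figure
intel_asp = 50.0        # assumed average selling price per Intel chip

arm_revenue = arm_units * arm_royalty      # $1.8 billion
intel_revenue = intel_units * intel_asp    # $50 billion
```

Even with twelve times the unit volume, a per-chip royalty measured in cents loses to full-price silicon by more than an order of magnitude under these assumptions.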

Toward the end of the panel, we tossed around the idea of new CPUs succeeding in today’s market. If you introduced a brand new CPU architecture today, would it have any chance of success? Or would the combined ecosystems of all the established processor vendors present an irresistible force that’s impossible to overcome?

Certainly ecosystems present a strong barrier to entry. The established players (Intel, ARM, MIPS, Freescale, PowerPC, etc.) all have tools and software that a newcomer wouldn’t have. Any neophyte processor company would have to develop a parallel ecosystem, a process that takes years and costs millions of dollars. Any new processor family would have to be clearly superior to—not just different from—existing chips to stand any chance of survival. Making processors is the easy part; building third-party support for them is a killer.
