
Want a Database Tailormade for Edge Computing?

We are constantly being informed about computationally intensive tasks, such as artificial intelligence (AI) and machine learning (ML), moving out to the edge. Sad to relate, one supporting capability required by many of these applications doesn’t receive as much attention and discussion as it should: databases. Specifically, databases tailormade for deployment at the edge on microprocessor units (MPUs), microcontroller units (MCUs), electronic control units (ECUs), and so forth. Fortunately, I have you covered…

As I’ve mentioned on occasion, I’m very lucky because—in addition to my sparkling wit, outrageous good looks, and internationally commented-on sense of fashion (my mum lives in England and she often comments on the way I dress)—I have very few regrets in my life. Most of the regrets I do have pertain to things I wish I’d learned more about when the opportunity was dangled under my nose. One such field of study is that of databases.

These days, of course, when we hear the term “database,” our Pavlovian response is to think of an electronic computer-based implementation. However, if we consider the most generic definition of a database as being “an organized collection of structured information or data,” we could go back as far as the Sumerians, Babylonians, and Egyptians, all of whom came up with techniques to keep track of large amounts of information.

Coming a little closer to home (from a temporal perspective), if you haven’t already done so, I strongly recommend reading The Professor and the Madman: A Tale of Murder, Insanity, and the Making of the Oxford English Dictionary by Simon Winchester. Prior to my perusing this tome I had no idea that the topic of dictionaries could be so interesting. I also hadn’t appreciated the complexities involved. In 1857, for example, when the Philological Society of London called for the creation of a new English dictionary—a monumental undertaking that was destined to become the renowned Oxford English Dictionary (OED)—some of the members argued that the meanings associated with words should be restricted to their usage at that time. Contrariwise, other members believed that the evolution of every usage of every word should be documented in full, including references as to each usage’s first appearance. Thankfully (considering that languages are living, breathing, and evolving beasts, figuratively speaking), it was the latter group that prevailed.

Can you conceive of the complexity of this task when undertaken without the aid of computers? How do you even go about gathering a list of all the words that exist and need to be documented? Can you imagine finally laying out a new section, only to have someone run in and say they’d remembered the word “adjure,” the insertion of which would require re-typesetting a whole bunch of pages? This may explain why it wasn’t until 1879 that the rules of engagement had been defined and the real work finally commenced. It may also explain why, five years later in 1884 (this was five years into what was originally intended to be a ten-year project), the editors reached the word “ant” (the final volume of the OED wasn’t published until 1928, at which time the editors had to start working on supplements to cover all the new words that had emerged since the commencement of the project).

Another well-known example is the library organizational system called Dewey Decimal Classification (DDC), the first version of which was published in the United States by Melvil Dewey in 1876. Colloquially known as the Dewey Decimal System, this is essentially—some may say quintessentially—a database. And, of course, many organizations and institutions (businesses, hospitals, governmental departments, etc.) developed and maintained their own hard-copy databases well into the 20th century.

I was just about to say that it would be hard to visualize a database based on perforated paper products (e.g., paper tapes and punched cards), but then I remembered Herman Hollerith, who devised a system based on punched cards and automatic electrical tabulating machines that was used to tabulate the 1890 United States census (in 1924, the company descended from Hollerith’s Tabulating Machine Company changed its name to International Business Machines, or IBM).

When electromechanical and electronic computers started to appear on the scene in the 1940s and 1950s, many of their storage mechanisms, such as magnetic tapes, were sequential in nature. Although this didn’t preclude the implementation of databases per se, it certainly didn’t make things easy. In fact, it wasn’t until direct access storage media such as magnetic disks became widely available in the mid-1960s that computer-based databases really started to take off.

The first databases circa the mid-to-late 1960s used pointers (often physical disk addresses) to follow relationships from one record to another. The concept of a relational model—which led to today’s relational databases that allow applications to search for data by content rather than by following links—was first proposed in 1969 by English computer scientist Edgar F. Codd.
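
Just to make the difference tangible, below is a little C sketch of my own devising (the order records, the customer names, and the orders table are all invented for illustration): a navigational database obliged the application to chase pointers from record to record, while a relational one lets you simply describe the data you want.

```c
#include <stdio.h>
#include <string.h>

/* Navigational style (pre-relational): records carry physical links,
   and the application must chase them one by one. */
struct order {
    int id;
    char customer[32];
    struct order *next;  /* pointer to the next record */
};

static struct order *find_by_customer(struct order *head, const char *name)
{
    for (struct order *o = head; o != NULL; o = o->next)
        if (strcmp(o->customer, name) == 0)
            return o;
    return NULL;
}

int main(void)
{
    struct order o2 = { 2, "Hollerith", NULL };
    struct order o1 = { 1, "Codd", &o2 };

    struct order *hit = find_by_customer(&o1, "Hollerith");
    if (hit != NULL)
        printf("Found order %d\n", hit->id);

    /* Relational style: no pointer-chasing; you declare what you want
       and the engine works out how to find it, e.g.:
       SELECT id FROM orders WHERE customer = 'Hollerith'; */
    return 0;
}
```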

There’s so much to databases that you could write a book—and many people have done so—but a nice high-level overview is provided in Chapter 8 of Nine Algorithms That Changed the Future by John MacCormick. This introduces topics like relational databases, transactions, write-ahead logging, and two-phase commit in terms that even I can understand.

If we now jump ahead to the present day, we have the most incredible databases containing almost inconceivable amounts of data, much of which is to be found in a fog or a cloud. Although fog computing is performed in servers close to the edge (as opposed to cloud computing, which is performed in remote data centers), this isn’t the extreme edge where the rubber meets the road in the form of the “things” that comprise the internet of things (IoT). That is, the connected devices—sensors and actuators—that act as the final interface between the internet and the real world (assuming, of course, that we aren’t all part of a Matrix-style simulation, in which case all bets are off).

This is the point where things start to get “interesting” on the acronym front, so take a deep breath and hold on tight. When we think about cloud-based databases, two architectural approaches spring to mind. The first, and more traditional, approach is online transaction processing (OLTP), which is a type of database system used by transaction-oriented applications (the “online” portion of the moniker refers to the fact that such systems are expected to respond to user requests and process them in real time, as opposed to their predecessors, which performed their tasks offline, often in the dead of night).

A different, more recent approach is online analytical processing (OLAP), whose mission is to quickly satisfy multi-dimensional analytical (MDA) queries, such as those used in data mining. Whereas OLTP systems process all kinds of queries (read, insert, update, and delete), OLAP is generally optimized for reads and might not even support the other kinds.
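
To make the distinction concrete, here are the two query shapes side by side in a trivial C snippet (the meters table, its columns, and the values are all made up for this sketch):

```c
#include <stdio.h>

/* Two illustrative query shapes; the schema is invented for illustration. */
int main(void)
{
    /* OLTP: short transactions that touch a handful of rows and return fast. */
    const char *oltp_sql =
        "UPDATE meters SET last_value = 42.0 WHERE meter_id = 1234;";

    /* OLAP: read-mostly analytics that scan and aggregate many rows. */
    const char *olap_sql =
        "SELECT region, AVG(last_value) FROM meters GROUP BY region;";

    printf("OLTP shape: %s\nOLAP shape: %s\n", oltp_sql, olap_sql);
    return 0;
}
```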

And then we have hybrid transaction/analytical processing (HTAP), which is a term that was coined in 2014 by the information technology research and advisory company Gartner Inc. As defined by Gartner, HTAP is an emerging application architecture that “breaks the wall” between transaction processing and analytical processing.

Now, all of the above is wonderful if you are intending to frolic with databases in the cloud, but what about applications running at the extreme edge? Well, as a starting point, let’s begin by considering embedded systems in general, many of which live “close to the edge” (yes, of course I’m now thinking of Close to the Edge by YES).

But we digress… You might be surprised to discover that there are a bunch of embedded databases, by which I mean databases intended for use in embedded systems. The most widely deployed of these is SQLite. Written in the C programming language, SQLite is not a standalone application, but is instead a library that software developers embed in their applications. SQLite is used by several of the top web browsers, operating systems, mobile phones, and other embedded systems. 
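
If you’ve never seen an embedded database in action, the following is a minimal sketch of what embedding SQLite in a C application looks like (the readings table and its values are my own inventions; on a typical Linux host you’d build it with gcc demo.c -lsqlite3):

```c
/* Minimal sketch of embedding SQLite in a C application. */
#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    char *err = NULL;

    /* ":memory:" keeps everything in RAM; pass a filename for a persistent DB. */
    if (sqlite3_open(":memory:", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }

    /* The whole database engine lives inside this process; no server needed. */
    if (sqlite3_exec(db,
            "CREATE TABLE readings(ts INTEGER, temp REAL);"
            "INSERT INTO readings VALUES (1, 21.5), (2, 22.1);",
            NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "exec failed: %s\n", err);
        sqlite3_free(err);
    }

    sqlite3_close(db);
    return 0;
}
```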

Some advantages of SQLite are that it’s free, it’s well-adopted by the open-source community, and it’s part of all the embedded Linux packages floating around. Contrariwise, some disadvantages are that SQLite was not designed with the IoT in mind, it’s tailored to OLTP applications, and it’s not well-suited to the OLAP tasks that are characteristic of AI and ML algorithms. There’s also the fact that, since it’s free, you get the support you paid for (i.e., zero), which means you are obliged to become your own database company, managing updates and upgrades to add missing features, and implementing things like replication and mirroring yourself (or paying someone else to do it for you).

“Oh, woe is me… what shall I do… is there a database solution tailored to the extreme IoT edge out there?” I hear you cry. Wipe away those tears and turn that frown upside down into a smile, because I’m about to brighten your day and make your world a better place (and how often do you expect to hear someone say that to you today?).

I was just chatting with Sasan Montaseri, who is the president of ITTIA. I was poised to say that one day I’ll have to ask Sasan what the letters in ITTIA stand for, and then I thought, “why not now?” So, I emailed him, and he immediately responded, “I Think, Therefore I Am” (from the pithy Latin phrase “cogito, ergo sum”). Of course, this immediately reminded me of the corresponding phrase “I Drink, Therefore I Am” (which would, I suppose, be “bibo, ergo sum” in Latin) from The Philosopher’s Song by Monty Python.

Founded by Sasan in 2000, ITTIA has always had a focus on embedded data management. Originally, the guys and gals at ITTIA offered consulting services with no thought of creating their own database product. Over time, however, more and more customers presented embedded database requirements that could not easily be satisfied with existing solutions, thereby driving Sasan and his colleagues to develop their own product, which they released in 2007.

As Sasan told me, “We went from mainframe to desktop, from desktop to web application, from web application to mobile, and from mobile to embedded. And now we have our sights set on the IoT.” Around 2018, the folks at ITTIA started work on a new implementation of their database technology focused on IoT devices operating at the extreme edge. In fact, they have two flavors of this bodacious beauty: ITTIA DB IoT, which is targeted at MCU-type devices with lesser compute and memory resources, and ITTIA DB SQL, which is targeted at MPU- and ECU-type devices with greater compute and memory resources.

The ITTIA DB (in both its incarnations) was built from the ground up to handle the time series data prevalent in IoT edge devices. Furthermore, you can think of the ITTIA DB as being “two engines in one” because it’s an HTAP time series database that can handle the transactional requirements of OLTP applications and the streaming analytical demands associated with OLAP applications.
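
I haven’t seen ITTIA DB’s API myself, so the following conceptual sketch uses SQLite purely to illustrate the HTAP idea (the sensor table, device IDs, and readings are all made up): the same embedded table absorbs small transactional writes as samples arrive and also answers a scan-and-aggregate analytical query over the live data.

```c
/* Conceptual HTAP sketch using SQLite (NOT ITTIA DB's actual API). */
#include <stdio.h>
#include <sqlite3.h>

/* Print each row of the analytical result set. */
static int print_row(void *unused, int ncols, char **vals, char **names)
{
    (void)unused;
    for (int i = 0; i < ncols; i++)
        printf("%s=%s  ", names[i], vals[i] ? vals[i] : "NULL");
    printf("\n");
    return 0;
}

int main(void)
{
    sqlite3 *db;
    sqlite3_open(":memory:", &db);

    sqlite3_exec(db,
        "CREATE TABLE sensor(ts INTEGER, device_id INTEGER, temp REAL);",
        NULL, NULL, NULL);

    /* OLTP side: small, frequent, transactional inserts as samples arrive. */
    sqlite3_exec(db,
        "BEGIN;"
        "INSERT INTO sensor VALUES (100, 1, 21.7);"
        "INSERT INTO sensor VALUES (101, 1, 21.9);"
        "INSERT INTO sensor VALUES (102, 2, 35.4);"
        "COMMIT;",
        NULL, NULL, NULL);

    /* OLAP side: scan-and-aggregate analytics over the same live table. */
    sqlite3_exec(db,
        "SELECT device_id, AVG(temp) AS avg_temp, MAX(temp) AS max_temp "
        "FROM sensor GROUP BY device_id;",
        print_row, NULL, NULL);

    sqlite3_close(db);
    return 0;
}
```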

I’d like to show you some eye-candy imagery, but databases don’t lend themselves to that sort of pretentious ostentation, so we will have to be satisfied with a couple of videos.

I don’t know about you but, prior to my conversation with Sasan, I’d never really given much thought to the concept of having a database in an edge device. In my own defense, until relatively recently, most edge devices were performing only simple tasks like monitoring the temperature and causing some action to take place if things got too hot or too cold. Now, of course, edge devices are using AI and ML to perform all sorts of interesting tasks—predictive maintenance springs to mind—for which an HTAP-capable edge database offers numerous and significant advantages.

If you wish to learn more, I invite you to bounce over to the ITTIA website and connect with the folks you’ll find there. In the meantime, I’d love to hear any database-related tales you’d care to share, especially those from long ago when I wore a younger man’s clothes.
