
Redefining How Software Is Created

Cubicon Sees the IoT as an Opportunity for a Restart

The world they were planning to leave was a technology mess. They had watched how, in their short lifetimes, software had evolved from an obscure, tedious ritual practiced in large basements where the soundtrack mixed the hum of a room-sized mainframe computer with the whirr of Hollerith card readers, punctuated by the clatter of card punches, all the way to a commodity skill whose practitioners far outnumbered the dwindling numbers of hardware designers. Grade-school kids could now write software – either to run on their own computers or on some machine purportedly located in some cloud somewhere.

And through this process, new languages picked up from old languages. New paradigms replaced old ones. Single-thread architectures and assumptions gave way to parallel processors. Native compilation ceded to virtual machines, multiples of which could coexist within a single hardware machine.

Their physical world had experienced constant transformation for millennia. As civilizations came and went, they left an impact – perhaps buildings that fell into disuse and decay or trash heaps that were subsequently discovered and decoded by latter-day archaeologists. But useful bits were picked up by succeeding societies, who incorporated them as their own and then built upon them. Generations later, one might ask why a particular culture or language had this or that feature, and it might make no sense – unless you took into account the path it took to get there. It was all about legacy.

So it had been with software. Even as they readied their departure, new software was being balanced atop irregular mounds of legacy code written by countless long-departed programmers in dozens of languages implementing numerous styles. There were, in fact, severe problems with many components of this legacy, but no one dared disturb them. It was as if the entire edifice were a giant game of Jenga, and disturbing the wrong piece could cause it all to crumble. So they lived with the consequences and left the old code alone.

But what about the new world they were about to colonize? This was truly virgin territory – except that their new planetary home would have none of the convenient life-sustaining properties they were used to. They had to create it all. Their world would be a walled garden within a hostile environment. Everything would start fresh.

And they wondered, could their software start fresh? Could they take advantage of this extraordinarily rare opportunity to establish a new baseline, to sidestep the limitations of legacy, to put in place new rules that would apply lessons of the past and provide order and structure for the future?

Within our world here, we almost never get the chance for a dramatic do-over. Everything builds on everything else, and the past is dragged into the future. But a company called Cubicon sees an unusual opportunity – a wormhole into another world that might permit a fresh start. That world is the Internet of Things (IoT).

In this view, you have new participants creating new functions where there were none before. There is little or no legacy, which means that much of the new code being created is free of the inherited bad habits of the past. If there were ever a time to refresh software methodologies, this would be it. How clean this opportunity really is could be a matter for debate, but it’s one of the fundamental motivators for what we’re about to discuss, so let’s accept it, at least provisionally.

Cubicon has a vision that exploits this opportunity. Founder Sandy Klausner has been working on ideas for the last 25 years, and they’ve come together in the form of a programming environment and infrastructural ecosystem that has some walled-garden characteristics. Creating new programs might even be slightly slower than it is now, but, overall, productivity and – especially – maintenance would see huge improvements that would more than compensate.

In many cases, I go deep on some of these new technologies, but this time I’m going to stay high-level because that’s where the interesting bits really are. It’s a change of mindset that’s easy to lose if you dive into the details. So let’s stick with some of the broad changes that the Cubicon vision implies and, in the process, lay out the essence of what they’re proposing. To be clear, these are my interpretations, drawn from watching and reviewing Cubicon presentations and from discussions with Mr. Klausner.

There is no new “code”

The Cubicon programming environment involves parameterizing existing code, not writing new code. This is because low-level components have been pre-defined and are invoked or instantiated rather than being created anew. Something as simple as addition involves summoning the “addition” component and providing operands as parameters. So the low-level code has already been written; a programmer assembles programs out of components.
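
To make this concrete, here is a minimal sketch of what component assembly could look like – plain Python with names I’ve invented, since Cubicon’s actual notation isn’t reproduced here. The point is the shape of the activity: nothing below the component level gets written.

```python
# Illustrative sketch only; "Component" and "ADDITION" are invented names.
class Component:
    """A pre-defined, pre-tested building block; the user never writes its body."""
    def __init__(self, name, operation):
        self.name = name
        self.operation = operation

    def instantiate(self, **params):
        # Binding parameters to a component stands in for "writing a line of code".
        return lambda: self.operation(**params)

# The low-level library ships with the environment; "addition" already exists.
ADDITION = Component("addition", lambda a, b: a + b)

# Programming is assembly: summon the component and provide operands.
sum_step = ADDITION.instantiate(a=2, b=3)
print(sum_step())  # 5
```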

Figure 1. The complete set of CubeComponents.

Programs contain intent and semantics in addition to syntax

While it might look like attaching a component is just another way of writing a line of code, that’s not the case. The component includes more than the function specified: metadata lies behind the functionality, and that metadata helps to specify intent and context. As one example, when a programmer invokes a component, the programmer’s ID is stored in the metadata. Provenance is literally built into every “line” of the program.
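
A hedged sketch of how such provenance could be captured – the field names (author_id, intent) are my guesses, not Cubicon’s actual metadata schema:

```python
# Illustrative only: metadata is recorded at the moment of invocation.
import datetime

class ComponentInstance:
    def __init__(self, component_name, params, author_id, intent):
        self.component_name = component_name
        self.params = params
        # Provenance is built in, not bolted on after the fact.
        self.metadata = {
            "author_id": author_id,   # invented field: who placed this "line"
            "intent": intent,         # invented field: why it's here
            "created": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }

step = ComponentInstance(
    component_name="addition",
    params={"a": "sensor_reading", "b": "calibration_offset"},
    author_id="programmer-042",
    intent="apply per-device calibration to the raw sensor value",
)
print(step.metadata["author_id"])  # programmer-042
```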

Writing a program involves filling out forms

The programming environment is built to encourage top-down design and refinement – all the way down to the lowest level of code. You’re not writing textual code; you’re picking a component and then assigning properties and parameters to it. This might make the programming process somewhat more laborious than simply spewing out lines of code at high speed. But there’s no getting a line wrong, because there’s no syntax to check. That means fewer errors and fewer debug sessions.

Figure 2. The full hierarchy of nested components, from a level above process down to specific operations.

In addition, because of the metadata, future programmers inheriting the code can better understand the underlying program structure and context, making maintenance far easier. The intended net effect is vastly improved productivity.

All of that said, it is still possible to build programs bottom-up as well. Low-level components can be combined into methods or higher-level entities. At any point, these can become components in their own right. In fact, I believe that even legacy code would be wrappable into a component, if done carefully. The benefits of metadata would, of course, stop at the wrapper level; below that, all would be opaque.
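
To illustrate that wrapping speculation, here’s a rough sketch under the same invented conventions: metadata lives at the wrapper, and everything beneath it stays opaque.

```python
# Speculative sketch: wrapping legacy code as a component.
def legacy_crc8(data: bytes) -> int:
    """Stand-in for decades-old code that nobody dares disturb."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def wrap_legacy(fn, author_id, description):
    # Metadata applies only at the wrapper level; the body is a black box.
    return {
        "callable": fn,
        "metadata": {"author_id": author_id,
                     "description": description,
                     "opaque": True},
    }

crc_component = wrap_legacy(legacy_crc8, "programmer-042",
                            "CRC-8 checksum, wrapped legacy code")
print(crc_component["callable"](b"hello"))
```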

Programs can be checked and “simulated” as they’re created

The environment includes the notion of a “repository” within which programs can be simulated for testing purposes. You can build a testbench to emulate the environment in which the program will eventually run. At this level, the program is executed in a machine-independent manner. Only when software is ready for deployment is it “distilled” into byte code for the target system.
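
What might that look like in practice? A minimal sketch, with every name assumed: the program runs machine-independently against an emulated stand-in for its eventual environment.

```python
# Illustrative testbench sketch; "EmulatedSensor" is an invented stand-in.
class EmulatedSensor:
    """Emulates the hardware the deployed program will eventually see."""
    def __init__(self, readings):
        self._readings = iter(readings)

    def read(self):
        return next(self._readings)

def monitor(sensor, threshold):
    # The program under test, expressed machine-independently.
    return ["ALARM" if r > threshold else "ok" for r in iter(sensor.read, None)]

# Simulate in the "repository" against canned data; no target hardware involved.
bench = EmulatedSensor([12, 48, 31, None])
print(monitor(bench, threshold=40))  # ['ok', 'ALARM', 'ok']
```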

Portability is enabled through virtual machines

The target system doesn’t run the program directly; it runs a CubeEngine – a virtual machine (VM) that consumes byte code. So Cubicon doesn’t need to know what processor you’re running when distilling your program. It cares about your processor only when creating the VM.
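
To see why this buys portability, consider a toy byte-code interpreter – illustrative Python, emphatically not the CubeEngine itself. The distilled program is processor-neutral; only the interpreter loop would ever need porting to a new CPU.

```python
# Toy stack-machine interpreter; the instruction set is invented for illustration.
def run(bytecode):
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "PRINT":
            print(stack.pop())

# The same "distilled" program runs unchanged wherever an engine exists.
distilled = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None)]
run(distilled)  # prints 5
```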

Figure 3. The complete Cubicon world, including components, development, execution, and infrastructure.

Component libraries can be built within repositories

These repositories can be private or cloud-based. This is the ultimate in code reuse: once a new component is built out of the base components (and presumably has been thoroughly tested and validated), it becomes a component available to anyone authorized to use it. Yes, this can be done today (minus the metadata), but it often isn’t. Here it’s built into the environment.

This component reuse model also enables a robust software IP business.
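
A rough sketch of what publish-and-reuse might look like, again with invented names and a toy authorization check standing in for whatever Cubicon actually does:

```python
# Speculative repository sketch; all names and the auth model are invented.
class Repository:
    def __init__(self):
        self._components = {}
        self._authorized = set()

    def authorize(self, user_id):
        self._authorized.add(user_id)

    def publish(self, name, component):
        # Assume the component has already been tested and validated.
        self._components[name] = component

    def fetch(self, name, user_id):
        if user_id not in self._authorized:
            raise PermissionError(f"{user_id} is not authorized for this repository")
        return self._components[name]

repo = Repository()
repo.authorize("programmer-042")
repo.publish("calibrated_sum", lambda a, b, offset: a + b + offset)
component = repo.fetch("calibrated_sum", "programmer-042")
print(component(2, 3, offset=0.5))  # 5.5
```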

Programs can talk to each other

Cubicon abstracts program-to-program messaging over long distances, leveraging low-level transport. They’re trying to get it standardized as a layer above TCP/IP, and they may sell it to carriers as a service that those carriers could offer their customers.
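
From the programmer’s side, such a layer might look something like this speculative sketch over ordinary TCP sockets – the actual CubeProtocols wire format isn’t described in the materials I’ve seen:

```python
# Speculative framing sketch; the envelope fields are invented for illustration.
import json
import socket

def send_message(host, port, sender, recipient, payload):
    # Programs address each other by name; the layer rides on plain TCP.
    envelope = {"from": sender, "to": recipient, "payload": payload}
    with socket.create_connection((host, port)) as conn:
        conn.sendall(json.dumps(envelope).encode("utf-8"))

# Usage, assuming a listener on localhost:9000:
# send_message("127.0.0.1", 9000, "node-A", "node-B", {"temp_c": 21.5})
```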

Design replaces programming

Because of the top-down successive-refinement character of the environment, you no longer have the situation where architects and designers define what’s needed and then hand that off to a programmer to code. The “programming” is just the next level of refinement. It could well be that, below a certain “line” in the hierarchy, someone else takes over, but even then, the nature of the work becomes more design-like and less coding-like.

Put this all together, and you have a self-contained software biosphere that should, in theory, provide everything you need in a manner that encourages programming practices that will reduce the costs of maintenance and legacy.

The next big question is, how does Cubicon as a company fit into the picture? Given a model like this, I started wondering whether they’d eventually be placing themselves in the middle of all software – in development and deployed – everywhere, for all time. So I inquired about this and about their business model in general.

First of all, for deployed software that isn’t doing something esoteric between repositories and such, working software should continue working regardless of Cubicon. The VMs will keep running, as will the code executing on them. There’s no ongoing tax – er, licensing fee – to ensure that software keeps running. Cubicon doesn’t appear to be in a position to “pull the plug,” so to speak. I’m not suggesting any evil intent on their part, but if I were writing software and my systems needed a close tie-in with some specific company in order to keep running, I’d be cautious. In this case, Cubicon is involved during development and deployment but can drop out of the picture after that.

That being the case, what’s in it for them? Their business model is still evolving, but they did share some of the ways in which they could earn revenue, and those ideas also illustrate scenarios that this environment enables. To be clear, I don’t think any of these scenarios is in place yet; these are ways they could charge.

  • Designer registration for using the Cubicon website could be a paid service.
  • Various services could be created by Cubicon customers that leverage the Cubicon environment while executing; a variety of microfees could be associated with such services if they’re monetized. (This assumes use of Cubicon repositories and cloud infrastructure, for example, in contrast to programs that simply run on their own.)
  • While the IDE would be free, components could involve a per-use fee.
  • Data and program storage in the Cubicon repository could involve a fee.
  • CubeEngine use could involve a charge per instance.
    • This could be triggered at first activation for software VM implementations. This would imply a connectivity requirement so that the instance could phone home on activation. I’m unclear on that detail at the moment.
    • If the VM were hardened into silicon, then there could be a per-chip fee.
  • As mentioned, CubeProtocols could be licensed to major carriers so that they could offer long-distance connectivity.

Moving to this model does not feel trivial to me. Especially if you dive into the details, this feels like a significant rethink on how things are done. That’s not necessarily a bad thing, but it is a tough thing to make happen in a world where folks are simply trying to get product out the door as quickly as possible.

And, given the IoT greenfield market thesis, there’s a window here that will eventually close. At some point there will be IoT edge-node legacy code (in reality, there already is some), and that lost innocence will make a clean transition harder. There’s some urgency here, or else the opportunity for change may pass. So Cubicon’s challenge will be to get traction with folks who matter so that they can join the conversation in a more meaningful way.

 

(All article images courtesy Cubicon.)

 

More info:

Cubicon

8 thoughts on “Redefining How Software Is Created”

  1. I believe the problem is the per-use fees. Unless this software package interfaces to the IoT device as an exact plug-and-play fit, there will still be some programming costs for the “customization”.

    If the IoT device is a light switch, and the expected install base is 3 billion units over 15 years at a commodity price of $4/ea, the numbers matter.

    The per-use fees have to total significantly less than the NRE plus ongoing enhancement costs.

    There has to be some exceedingly high functionality in this package to justify a few $M to a few hundred $M in fees over what an in-house team can do.

    If this is a 20-off custom device for manufacturing floor monitoring/control … the software package might have some value – if it is, again, completely plug-and-play and doesn’t require “customization” code from a skilled programmer.

    I suspect that if it’s that canned, Atmel will have the package available on their website to entice use of whatever SoC they are pushing.

  2. “IoT greenfield market thesis” – uh, no. Sorry, the IoT (and wearable) segments are already establishing extensive roots, leveraging existing embedded code and tools.

    This sounds like a retread of our 1980s experience with 4th Generation Languages (4GL), with a helping of OOP, mixed into a cup of virtual machine panacea. Nothing new here.

    If we *REALLY* want to reshape software and system development we need to start here: https://www.dreamsongs.com/RiseOfWorseIsBetter.html
    and rethink software development/engineering.

