
Amazon’s Head in the Cloud

Remarkable Depth and Breadth in Cloud Computing, and an Intriguing New Service

At face value, it is a bit of a brain twister: Amazon’s goal of being the “everything store” on the one hand, and its massive cloud services business on the other. At first glance, not exactly peanut butter and chocolate. Walmart and Costco are not actively hawking their data processing capabilities—which one imagines are quite formidable—on the open market.

Turn the clock back a few years and it makes sense. Amazon developed its massive datacenters in-house because its requirements could not be readily met with existing solutions. As time passed, the company developed more and more value-added differentiation. And, at some point, someone thinking well outside the box suggested, “Let’s monetize our unique datacenter capabilities by selling them in the emerging cloud computing market.” At least that is how I envision it going down; I am sure reality was more nuanced and more interesting.

Amazon Web Services (AWS) is massive in every regard. Jeff Bezos isn’t talking numbers, but general consensus is that AWS consists of 1-2 million physical servers; sorry for the broad range there—again, nobody’s talking specifics.

Every bit as impressive as the sheer scale of the datacenters is the scale of the various services offered; AWS provides FAR more than raw CPU horsepower. Just a sampling:

  • Storage in many different cost-performance configurations
  • Databases from pedestrian SQL to powerful column-oriented tools
  • Complete virtual desktop environments
  • Services as specific as media transcoding

All of the above and much more are available on-demand, in limitless quantity and at remarkably low pay-as-you-go prices. And at the risk of stating the obvious, many name-brand B2C and B2B cloud services are built and run entirely atop AWS.

While AWS represents just 2% of Amazon’s revenue, it is growing at roughly 40% YoY versus roughly 25% for merchandise. And it is not hard to imagine that AWS is more profitable than Amazon’s overall business, because, well, the company’s overall business has almost never been profitable.

Integrate all of the above factoids and one concludes that AWS is very important to Amazon and continues to garner plenty of attention from the company.

Connecting a few dots, AWS recently announced a new ‘C4’ instance using Intel’s shiny new octadeca-core (18-core) Xeon processor. That didn’t take very long. AWS will offer the C4 instance in multiple configurations from 2 to 36 virtual (hyperthreaded) cores and 4GB to 60GB of RAM. Almost exactly a year ago I mused:

Ideally, we want the ability to combine CPUs on multiple nodes into a super-node as the application workload demands.

While not the vector I was suggesting at the time, this new C4 instance is a solid (though less imaginative) solution to providing wide dynamic range scalability.
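For anyone inclined to kick the tires, here is a minimal sketch of launching one of the new C4 instances programmatically with the AWS SDK for JavaScript (shown in TypeScript); the AMI ID and region are placeholders you would swap for your own.

```typescript
// Hedged sketch: launch a single C4 instance with the AWS SDK for JavaScript.
// The ImageId below is a placeholder AMI, and us-east-1 is an assumed region.
import * as AWS from 'aws-sdk';

const ec2 = new AWS.EC2({ region: 'us-east-1' });

ec2.runInstances(
  {
    ImageId: 'ami-xxxxxxxx',      // placeholder: substitute your own AMI
    InstanceType: 'c4.8xlarge',   // top of the range: 36 vCPUs, 60GB of RAM
    MinCount: 1,
    MaxCount: 1,
  },
  (err, data) => {
    if (err) {
      console.error('launch failed:', err);
    } else {
      console.log('launched instance', data.Instances?.[0]?.InstanceId);
    }
  }
);
```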

Speaking of imaginative, last month AWS announced a new service called Lambda. The funky name requires an explanation:

AWS Lambda is a compute service that runs your code in response to events and automatically manages the compute resources for you, making it easy to build applications that respond quickly to new information.

“Quickly” is loosely defined as milliseconds; I say “loosely” because at one point 100 milliseconds is cited. Lambda provides a remarkable level of abstraction: write your function in JavaScript (the only option for now) and Lambda handles ALL of the provisioning. No bothering with infrastructure, instances … nada. Triggering events come in all shapes and sizes: activity from other AWS services, data held in AWS … or external devices via Amazon Kinesis (“a fully managed service for real-time processing of streaming data”).
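To make that level of abstraction concrete, here is a minimal handler sketch (written in TypeScript for readability; Lambda itself takes plain JavaScript). The event shape is whatever the triggering service delivers, and the work done inside is obviously a placeholder.

```typescript
// Minimal Lambda handler sketch: Lambda invokes handler(event, context) for
// each triggering event; there is no server, instance, or queue to manage.
export const handler = (event: Record<string, any>, context: any) => {
  console.log('received event:', JSON.stringify(event));

  // ...whatever work the event calls for goes here...
  const result = { processed: true, receivedAt: new Date().toISOString() };

  // context.succeed() reports completion in Lambda's Node.js runtime.
  context.succeed(result);
};
```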

Now THIS is interesting, notably for IoT. I’ve discussed hybrid mobile-cloud and hybrid IoT-cloud computing over the past year. AWS Lambda enables all sorts of creative hybrid compute applications AND hides the vast majority of the cloud complexity to boot. An IoT device can trigger an event that executes complex analytics well beyond what is possible with the local microcontroller horsepower. IoT streams can interact with massive databases (my augmented reality application, as discussed in the first link in this paragraph). The aforementioned cornucopia of AWS services can be brought to bear.
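As a rough sketch of that IoT-to-cloud path, assume a fleet of devices pushing readings into a Kinesis stream that triggers Lambda. The Records[].kinesis.data layout (a base64-encoded payload) is the standard Kinesis event shape; the deviceId and temperature fields are invented for illustration.

```typescript
// Hedged sketch: Lambda consuming a batch of IoT readings delivered via Kinesis.
export const handler = (event: { Records: any[] }, context: any) => {
  for (const record of event.Records) {
    // Kinesis delivers each payload base64-encoded.
    const payload = Buffer.from(record.kinesis.data, 'base64').toString('utf8');
    const reading = JSON.parse(payload); // e.g. { deviceId: "...", temperature: 71.3 }

    // Heavy lifting the device's microcontroller could never do locally goes
    // here: analytics, joins against massive databases, fan-out to other services.
    console.log(`device ${reading.deviceId} reported ${reading.temperature}`);
  }
  context.succeed(`processed ${event.Records.length} records`);
};
```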

And AWS Lambda appears to be extraordinarily cost-effective:

  • $0.20 per million event triggers
  • $0.0000021 per second of execution time with 128MB of RAM

I say “appears” because the clever marketing person who developed the pricing made certain that all of the numbers are VERY small (jillioniths of a penny!) while not exactly providing all of the detail one needs (execution time on what flavor of instance?). Let’s hope that the result is NOT AT ALL like the passenger who recently racked up a $1200 bill for transpacific Wi-Fi.
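As a back-of-envelope sanity check using the published 128MB figures, assume a purely hypothetical workload of one million triggers per month averaging 200 milliseconds each:

```typescript
// Back-of-envelope Lambda cost model using the 128MB pricing quoted above.
// The workload (1 million triggers/month at 200 ms each) is a made-up example.
const triggersPerMonth = 1_000_000;
const secondsPerTrigger = 0.2;
const costPerMillionTriggers = 0.20;    // USD
const costPerSecondAt128MB = 0.0000021; // USD

const requestCost = (triggersPerMonth / 1_000_000) * costPerMillionTriggers;     // $0.20
const computeCost = triggersPerMonth * secondsPerTrigger * costPerSecondAt128MB; // $0.42
console.log(`roughly $${(requestCost + computeCost).toFixed(2)} per month`);     // ~$0.62
```

Well under a dollar a month for a million events, assuming nothing in the fine print resembles that Wi-Fi bill.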

How might we employ AWS Lambda in an IoT application? Picking one crazy idea at random here: imagine your teenager comes home late, triggers the security system AND forgets to disarm it. This creates a trigger to AWS Lambda:

  • Cameras at the front door and foyer snap photos in response and upload the images
  • AWS performs facial recognition and matches against a “white list” of known not-intruders
  • Assuming we get a STRONG match against your teenager, a full alarm is averted and replaced by a stern verbal warning. (Alternatively, the IoT deadbolt could simply lock him or her in the house.)
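Purely as an illustration of the flow above, a Lambda handler for this scenario might look something like the sketch below. The matchAgainstWhitelist() and soundSternWarning() helpers are hypothetical placeholders (a facial recognition service and the home-automation hub, respectively), not real AWS APIs, and the event shape is invented.

```typescript
// Illustrative only: the event shape and both helper functions are hypothetical.
interface AlarmEvent {
  alarmId: string;
  cameraImageUrls: string[]; // photos uploaded by the front door and foyer cameras
}

// Hypothetical helpers: a facial recognition service and the home-automation hub.
declare function matchAgainstWhitelist(
  imageUrl: string
): Promise<{ name: string; confidence: number }>;
declare function soundSternWarning(alarmId: string, name: string): Promise<void>;

export const handler = async (event: AlarmEvent, context: any) => {
  for (const imageUrl of event.cameraImageUrls) {
    const match = await matchAgainstWhitelist(imageUrl);

    if (match.confidence > 0.95) {
      // STRONG match against a known not-intruder: stand down and scold instead.
      await soundSternWarning(event.alarmId, match.name);
      return context.succeed(`recognized ${match.name}; alarm averted`);
    }
  }
  // No confident match: let the full alarm proceed.
  context.fail('no whitelisted face recognized');
};
```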

The above local/cloud workload partition plays to strengths: facial recognition would require a fair amount of compute horsepower if done locally, and given how (one should hope) infrequently the algorithm runs, it is far more efficient to pay as you go in the cloud. This is a CRAZY idea, as noted above, especially given my near-paranoia on all things security related. Yet, if we use elliptic-curve cryptography to secure … topic for another day. In any case, I am VERY keen to see how AWS Lambda is used in real-world IoT applications, especially after a bit of experimentation and Darwinian selection.
