
Amazon creates Goldilocks-sized AWS EC2 F1 FPGA instance for cloud computing

AWS (Amazon Web Services) released its FPGA-based EC2 F1 instances for general use in its cloud computing lineup in July 2017. The EC2 F1 instance is based on Xilinx's 16nm Virtex UltraScale FPGAs, and people have been using this cloud-based hardware acceleration capability to speed up the execution of diverse tasks including the implementation of CNNs (convolutional neural networks), video transcoding, and genome sequencing. I'm certain there's been some experimentation with high-frequency equity trading as well, but no one's talking. Not to me, anyway.

Problem was, you could either get one FPGA (the so-called "f1.2xlarge" instance) or eight FPGAs (the "f1.16xlarge" instance). But like Goldilocks, some customers undoubtedly found the f1.2xlarge instance "too small" and the f1.16xlarge instance "too big."

How do I know?

I know because AWS announced a “this one is just right” f1.4xlarge EC2 F1 instance today with two FPGAs.
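With the new middle size, the F1 lineup now spans one, two, and eight FPGAs. As a quick sketch of what that choice looks like in practice, here's a small (hypothetical) helper that picks the smallest F1 instance type meeting an FPGA-count requirement; the FPGA counts come from the article, but the helper itself is ours, not part of any AWS SDK:

```python
# FPGA counts per EC2 F1 instance type, per the AWS announcement.
F1_FPGA_COUNT = {
    "f1.2xlarge": 1,   # the original "too small" option
    "f1.4xlarge": 2,   # the new "just right" middle size
    "f1.16xlarge": 8,  # the original "too big" option
}

def smallest_f1_for(fpgas_needed):
    """Return the smallest F1 instance type with at least this many FPGAs."""
    # Walk the instance types in order of increasing FPGA count.
    for itype, count in sorted(F1_FPGA_COUNT.items(), key=lambda kv: kv[1]):
        if count >= fpgas_needed:
            return itype
    raise ValueError(f"No single F1 instance offers {fpgas_needed} FPGAs")

print(smallest_f1_for(2))  # f1.4xlarge
```

A workload needing three to eight FPGAs would still land on f1.16xlarge, but the two-FPGA middle ground no longer forces an eight-FPGA bill.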

Details here.
