
Amazon creates Goldilocks-sized AWS EC2 F1 FPGA instance for cloud computing

AWS (Amazon Web Services) released its FPGA-based EC2 F1 instances for general use in its cloud computing lineup in July 2017. The EC2 F1 instance is based on Xilinx’s 16nm Virtex UltraScale+ FPGAs, and people have been using this cloud-based hardware acceleration capability to speed up diverse tasks including the implementation of CNNs (convolutional neural networks), video transcoding, and genome sequencing. I’m certain there’s been some experimentation with high-frequency equity trading as well, but no one’s talking. Not to me, anyway.

Problem was, you could either get one FPGA (the so-called “f1.2xlarge” instance) or eight FPGAs (the “f1.16xlarge” instance). But like Goldilocks, some customers undoubtedly found the f1.2xlarge instance to be “too small” and the f1.16xlarge instance “too big.”

How do I know?

I know because AWS today announced a “this one is just right” f1.4xlarge EC2 F1 instance with two FPGAs.

Details here.
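
For those who want to kick the tires, here’s a minimal sketch of how you might request the new two-FPGA instance size using boto3, AWS’s Python SDK. The region, AMI ID, and key pair name below are placeholders I’ve made up for illustration, not values from the announcement; in practice you’d launch from an FPGA Developer AMI or an AMI carrying your own FPGA image.

    # Minimal sketch: requesting the new f1.4xlarge (two-FPGA) instance with boto3.
    # The AMI ID, key pair, and region are placeholders; substitute your own.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # F1 is available in a limited set of regions

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder: e.g., an FPGA Developer AMI
        InstanceType="f1.4xlarge",        # the new "just right" two-FPGA size
        KeyName="my-key-pair",            # placeholder key pair name
        MinCount=1,
        MaxCount=1,
    )

    print(response["Instances"][0]["InstanceId"])  # ID of the newly launched instance

The only change from launching the existing sizes is the InstanceType string: "f1.2xlarge" for one FPGA, "f1.4xlarge" for two, or "f1.16xlarge" for eight.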

 
