Way back in the mists of time, when I was bright-eyed and bushy-tailed, as I penned the first words in this quest to explore the topic of switch bounce and debounce, I actually thought it would take only one, perhaps two, columns. Sad to relate, as the days grew into weeks and the weeks grew into months, I grew into a much sadder and wiser man.
I started working in high-level synthesis (HLS) in 1994 which, assuming my math is correct, was 26 years ago. In those early days, we referred to the technology as “behavioral synthesis” because it relied on analyzing the desired behavior of a circuit in order to create a structural description, rather than simply converting a higher-level structural description into a lower-level one.
Actionable intelligence and inference at the edge take center stage in this week’s Fish Fry podcast. First up, we take a closer look at how drones can be taught to echolocate (like bats and dolphins) with a little help from a speaker, four microphones, and a whole lot of math. Next, Nigel Forrester (Concurrent Technologies) and I chat about radio frequency signal intelligence, the benefits of a heterogeneous …
A few years ago, we all thought we were on the cusp of “enough” bandwidth. If everybody with a smartphone could stream HD video simultaneously, what else could we even want? Surely an infrastructure that could handle that task would be ready for whatever else we wanted to throw at it.
Oh, how wrong we were.
We’ve elaborated at length in these pages on the remarkable capabilities FPGAs bring to the world of embedded vision and vision analytics. With enormous numbers of new applications demanding embedded vision capabilities at the endpoint and edge, there’s a huge green field of opportunity out there, just waiting for FPGAs to take over. The ability of FPGAs to accelerate AI tasks on a miserly …