DSP software developers have traditionally converted key performance-critical portions of their algorithms to assembly language because that was considered the only way to achieve high performance when using a DSP core. Every DSP architecture is different – optimized for a different type of data throughput challenge – and programmers need to understand each underlying DSP architecture in order to optimize the code manually using assembly coding techniques. Thus specialized knowledge is required to achieve effective results.
Assembly programming also locks code into a specific DSP platform by targeting that DSP's specific instruction set architecture (ISA). This costs the company flexibility in choosing cores for future projects that need to reuse the code.
Most developers use C to create and test software quickly. Why, then, haven't they stayed in C, the language in which the algorithm was probably developed?
The answer is simple. Most C compilers cannot efficiently map algorithms to DSP instruction sets aimed at accelerating targeted algorithms. If the amount of code that needs precise tuning is small, assembly coding can be an acceptable solution. But as application programs have become larger and more complex and as the number of industry standards has multiplied over the years, the need for a purely C-based solution has escalated.
How can the usage paradigm move from the assembly level to the C-code level? This white paper examines the most common first step in that evolution – the use of C intrinsics. It then discusses the requirements of a truly modern compiler that can exploit parallel execution units whatever the algorithm. Finally, it discusses what a totally C-based design flow might look like.