posted by Bryon Moyer
CEVA has introduced a new vision platform, which they’re calling the CEVA-XM4. We’ve looked at their prior platform, the MM3101, before; you could consider this the next stage. Almost literally.
CEVA describes vision processing as resembling a 3-stage pipeline. First come the basic vision processing steps that generate clean 3D data: left and right images plus a depth map. The next stage is what’s typically called computational photography: using sophisticated algorithms to deliver higher resolution and other quality improvements beyond what a given camera can generate on its own.
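As a rough illustration of that first stage, the sketch below computes a depth map from a left/right image pair using naive block matching. This is a generic textbook technique sketched in NumPy, not CEVA’s algorithm; the window size and disparity range are arbitrary assumptions.

```python
import numpy as np

def disparity_map(left, right, window=5, max_disp=8):
    """Naive block-matching stereo: for each pixel in the left image, find
    the horizontal shift at which a small window best matches the right
    image. The winning shift (disparity) is inversely related to depth.
    Inputs are assumed signed or float so differences don't wrap around."""
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.abs(patch - cand).sum()  # sum of absolute differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Real implementations add cost aggregation, sub-pixel refinement, and occlusion handling; this only shows the shape of the computation that a vision DSP has to accelerate.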
Both of these were covered in the prior vision processor; the XM4 further enables the third stage, what they call “visual perception.” This means object identification and tracking, for instance, as well as algorithms for augmented reality and so-called natural user interfaces (NUI – “natural” being something of a dodgy concept, like “intuitive”). Depending on the application, all three stages can be implemented in a single XM4 core; if more juice is needed, then multiple cores can be instantiated.
(Image courtesy CEVA)
From a camera standpoint, part of the idea here is that higher-level processing tends to be done in the cloud, which involves huge transfers of data from camera to cloud. Part of the intent of the XM4 is to beef up the camera so that much of that heavy lifting is done first in the camera, abstracting all that raw data and moving less up to the cloud.
But the XM4 isn’t just about still cameras; it’s also about automotive vision as well as incorporating vision into the IoT – video cameras and such whose purpose it is to identify specific artifacts to enable some kind of action to be taken. It could be a security camera or simply a home video camera that’s “always watching,” but films only when your kid is in the frame. (Which means it’s actually filming and processing, but then discarding if it doesn’t identify your child.)
(Image courtesy CEVA)
To some extent, this is just a beefy DSP. But there are a couple of important steps they’ve taken in targeting vision. The first is simply optimizing the instruction set; the second is optimizing how memory is managed. They illustrated a couple of examples.
In one case, they have built in the ability to perform scatter and gather in a single clock cycle. Most vector algorithms require that the memory to be processed be tidily arranged in adjacent cells; if the required cells are spread all over the place, then either you need to copy them to a scratchpad area to work on them, copying them back later, or you can’t vectorize the algorithm.
With a scatter-gather capability, they can handle this quickly, allowing vectorization of algorithms that would likely otherwise remain serial.
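To illustrate the idea in software terms (this is the general gather/scatter pattern, sketched with NumPy fancy indexing, not CEVA’s hardware or API):

```python
import numpy as np

# "Gather": pull scattered, non-adjacent elements into a contiguous vector
# in one operation, process them with a vectorized kernel, then "scatter"
# the results back to their original locations.
image = np.arange(100, dtype=np.float32)    # stand-in for pixel memory
indices = np.array([3, 17, 42, 8, 91, 55])  # non-adjacent addresses

gathered = image[indices]    # gather: one vector load
gathered *= 2.0              # vectorized processing on contiguous data
image[indices] = gathered    # scatter: one vector store
```

The point of doing this in hardware in a single cycle is that the gather/scatter itself stops being the bottleneck, so the loop body can stay vectorized even when the data layout isn’t friendly.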
The other is what I think of as a windowing capability; they call it “2D processing.” Many vision algorithms involve a sliding window, with significant overlap between what’s contained in the window in one position and what’s contained after the window shifts one notch. They enable efficient reuse of the overlapping area’s memory rather than requiring copies to scratch memory.
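A one-dimensional sketch of the reuse idea, using a hypothetical running-mean filter (not CEVA’s library code): rather than re-reading the whole window at each position, only the pixel entering and the pixel leaving are touched.

```python
import numpy as np

def sliding_means(row, width=3):
    """Mean over a window sliding one pixel at a time. Instead of
    re-reading all `width` pixels at each step, reuse the overlap:
    subtract the pixel leaving the window and add the one entering."""
    acc = float(np.sum(row[:width]))  # full read only for the first window
    means = [acc / width]
    for i in range(width, len(row)):
        acc += float(row[i]) - float(row[i - width])  # reuse the overlap
        means.append(acc / width)
    return means
```

In two dimensions the same trick applies to rows and columns of the window, which is where keeping the overlap resident in local memory pays off.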
These capabilities largely come through pre-optimized library components; the designer then doesn’t have to think through the details of how they work; it’s already done for them (similarly to the SmartFrame feature we described in the past).
While processors like this can involve low-level programming, their Android Multimedia Platform allows programming at the Android level, with the framework connecting via the CPU to the vision processor.
You can learn more in their announcement.
posted by Dick Selwood
We work in an environment where we regularly say that we are using technology to try to change the world for the better. Then the world turns around and shows us that it is far stronger than we think. It always faintly amuses me that a massive airliner can be seriously delayed by a head wind, but this ceases to be funny when the wind system is called Katrina, Sandy or, the latest, Pam.
The world turned round and gave us another slap in the face last Thursday – not on the scale of Pam but still a nasty reminder that we are not yet anywhere close to being in control. Terry Pratchett died. It wasn't unexpected; he had been suffering from a rare form of Alzheimer's disease, posterior cortical atrophy, for several years. There is a cliché that people "battle" a disease. In Terry's case this was true. He fought with an immense rage both the disease and the laws that would not allow him to choose the time and place of his death and "…die, before the disease mounted its last attack, in my own home, in a chair on the lawn, with a brandy in my hand to wash down whatever modern version of the 'Brompton cocktail' some helpful medic could supply. And with Thomas Tallis on my iPod, I would shake hands with Death."
One of Terry's fascinations was technology. He was a voracious user of computers and the Internet, and many of the Discworld novels demonstrated the way in which society reacts to the impact of science and technology. This was at the heart of his last published work, "Raising Steam", written well after the disease had begun to grip him. In it, the invention of the railway causes a wide range of reactions and political shenanigans, just as with the proposals for high-speed railways in California and Britain.
Technology is paying tribute. Many websites will now contain "GNU Terry Pratchett". It is a very nerdy in-joke, and you can read about it here: http://i100.independent.co.uk/article/redditors-are-making-sure-terry-pratchetts-name-lives-on-forever--lJjYpijRag
We don't know what causes the various forms of Alzheimer's, we have difficulty diagnosing it, we can't cure it, and we don't even know how to alleviate it, except in minor ways for some forms of the disease. To quote Terry again "as far as we know the only way to be sure of not developing it is to die young".
There is work going on, around the world, on using technology to make early diagnosis of Alzheimer's. This will allow training for the sufferer to cope with some of the issues, but there is still a long way to go. For the rest of us, we have to acknowledge the world's power, but still continue working to try to make it a better place.
posted by Bryon Moyer
We used to be ok with the verification silos we grew up with. You’ve got your simulation guys over here helping with circuit and block verification. You’ve got your emulation group over there checking out larger system chunks or running software. In yet another corner, you’ve got your virtual platforms running software.
But really, there can be a lot of rework involved as an SoC migrates from being individual bits and pieces, individually tested, to a unified system, holistically tested. So a group at Accellera has formed to standardize a stimulus format so that verification intent and stimulus can be ported to different environments.
The scope here appears to be twofold. On the one hand, you’ve got different verification methodologies: simulation, emulation, etc. The different platforms may expect different inputs – even if just variations. On the other hand, this also appears to be about scale – blocks and components vs. complete systems.
One of the big differentiators at the system level is the use of software to test out the hardware platform. Note that this is different from using a virtual platform to test software: in that case, it’s the software that’s being tested with a “known good” platform model. The focus of this stimulus effort is more about verifying the platform itself; when software is used for that, then it’s the software that’s “known good.” So, of the silos I mentioned above, that last one seems unlikely to be affected. Then again, it’s different from the others, since it’s not about hardware verification.
Because the low-level stimulus details for, say, simulation will be different from that for software, this is more about capturing intent and verification scenarios for automated generation of the actual stimulus that makes its way into the test environment.
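A toy sketch of that separation might look like the following: a single abstract scenario rendered into two platform-specific stimulus forms. The scenario fields and output formats here are invented for illustration; the standard's actual format had not been defined at the time of writing.

```python
# One abstract verification scenario, captured independently of platform.
scenario = {"op": "dma_copy", "src": 0x1000, "dst": 0x2000, "length": 64}

def to_sim_sequence(s):
    """Render the intent as a (schematic) simulation-level transaction call."""
    return f"do_dma({s['src']:#x}, {s['dst']:#x}, {s['length']});"

def to_c_test(s):
    """Render the same intent as embedded C for a software-driven test."""
    return (f"memcpy((void*){s['dst']:#x}, "
            f"(void*){s['src']:#x}, {s['length']});")
```

The verification intent (copy this block of memory) is stated once; each backend generates whatever concrete stimulus its environment expects.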
The first meeting just happened a week ago; if it’s an activity you’d like to be involved in, now’s a good time to jump in. Apparently a roadmap hasn’t yet been sketched out, so it’s still early days.
You can find more in their announcement.