posted by Dick Selwood
Raspberry Pi has been one of those events that leave you breathless. You will recall that the board was designed as a teaching aid, to get people interested in building systems. However, according to the Raspberry Pi Foundation, a fair number of the more than seven million boards sold have been used in commercial projects. The feedback they are getting is that while the standard board is great for getting prototypes up and working, for volume production there was a need for a more flexible approach.
To meet this need, Element 14 is now offering, worldwide, a Raspberry Pi customisation service. You will be able to choose things like board layout, add or remove interfaces, headers, and connectors, and make changes to the memory. It all looks rather fun.
posted by Bryon Moyer
Today we’ve put up a piece on designing audio subsystems, but there’s more news than that in the audio world. If you read our earlier piece on QuickLogic’s EOS device, and if you were paying attention to details, you might recall a quick mention of a company called Sensory that had partnered with QuickLogic for audio algorithms. Sensory subsequently released a product called TrulyHandsFree, and I connected with them to find out more about who they are and what they do.
In fact, they’ve been at this game for 21 years, so they’re not newcomers. They even sold (and still sell) neural-network-based chips with their algorithms, but their current focus is the algorithms themselves, sold as IP. In fact, they have both software and hardware IP (the latter of which featured on the QuickLogic part).
One of their important applications is biometric authentication: using voice as a security mechanism. It’s mostly for verification – given examples of authorized personnel, confirming by your voice that you’re who you say you are. They can also do some limited identification – that is, listening to your voice and coming up with who you are without your giving them any hints as to who you are. If they have, like, 10 people or so to choose from, they can do this. If they have to identify someone amongst thousands, though, they’re not there. (Yet, anyway.)
They’ve got three levels of product:
- TrulyHandsFree: This is for low-end consumer products, requiring the least resources to get the job done. Low power, small footprint, always on. Small vocabulary, used for command and control. This is what was incorporated into the QuickLogic part.
- TrulyNatural: This includes state-of-the-art algorithms for higher-end consumer devices like phones. Can handle a large vocabulary and continuous speech.
- TrulySecure: This combines audio with video for authentication.
In general, authentication happens through a passphrase (ignoring the video in the last product). It can be a fixed passphrase, but that runs the risk that someone records the authorized person saying the passphrase and then replays it to fool the authentication. It’s better if the system issues random passphrases for the supplicant to utter. Then no one knows ahead of time exactly what will be required to pass.
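The random-passphrase scheme can be sketched in a few lines. This is a minimal illustration, not Sensory's implementation: the word pool, the three-word challenge length, and the `speaker_score` input (which a real system would get from a voice-matching model) are all assumptions for the sake of the example.

```python
import secrets

# Hypothetical word pool; a real system would draw from words the user
# enrolled during training so the voice model can score them.
WORD_POOL = ["river", "copper", "sunset", "meadow", "falcon",
             "orbit", "lantern", "harbor", "velvet", "granite"]

def issue_challenge(n_words=3):
    """Pick a fresh random passphrase, so a recording of any previous
    session is useless to an attacker replaying it."""
    return " ".join(secrets.choice(WORD_POOL) for _ in range(n_words))

def verify(spoken_text, challenge, speaker_score, threshold=0.8):
    """Accept only if the right words were spoken AND the voice matched
    the enrolled speaker model (speaker_score is assumed to come from
    that model, in the range 0..1)."""
    return spoken_text.strip().lower() == challenge and speaker_score >= threshold
```

The key point is that `issue_challenge` is called per attempt, so even a perfect recording of an earlier, successful attempt fails the text match the next time around.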
Of course, with anything like this, you have to deal with false accepts (unauthorized person gets through) and false rejects (authorized person can’t get through). They actually have a dial that lets them set these rates, and the best balance will depend on the application, weighing the risk of unauthorized entry against the inconvenience (or worse) of not being able to get into your own system. There are no testing standards for this. They always assume that the user has done a reasonable training job, and they then look across a variety of noise and environmental conditions that might affect how the sound is perceived by the algorithms.
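The "dial" is essentially an accept threshold on a match score, and moving it trades one error rate for the other. Here's a toy illustration with made-up scores (the numbers are not Sensory's data): raising the threshold drives false accepts down and false rejects up.

```python
# Made-up match scores (0..1) from hypothetical trials.
genuine_scores  = [0.92, 0.85, 0.78, 0.88, 0.64]   # authorized speakers
impostor_scores = [0.35, 0.52, 0.71, 0.20, 0.48]   # unauthorized speakers

def rates(threshold):
    """False-accept and false-reject rates at a given accept threshold."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s <  threshold for s in genuine_scores)  / len(genuine_scores)
    return far, frr

for t in (0.5, 0.7, 0.9):
    far, frr = rates(t)
    print(f"threshold={t}: FAR={far:.0%}  FRR={frr:.0%}")
```

On these toy numbers, threshold 0.5 lets 40% of impostors in but never locks out an authorized user, while 0.9 locks out 80% of genuine attempts to keep all impostors out; the right setting depends on what a break-in costs versus what a lockout costs.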
Of course, with small devices, the challenge is power, since you need this system always to be on. They say that, on average, TrulyHandsFree uses about 1 mA of current. Sound detection requires less than 1 MIPS and draws a couple hundred microamps or less. Once triggered, the recognition part draws 1.5–2.5 mA. Processing is staged, with each level ramping up as the prior level directs.
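The staging is what keeps the average near the cheap detector's draw rather than the recognizer's. A back-of-envelope model: the currents below are the article's figures, while the recognizer duty cycle is an illustrative assumption of mine, not a Sensory number.

```python
# Two-stage always-on power model: a cheap sound detector runs
# continuously and gates a costlier recognition stage.
DETECT_MA = 0.2   # always-on sound detection (a couple hundred uA)
RECOG_MA = 2.0    # recognition stage when triggered (1.5-2.5 mA)

def avg_current_ma(recog_duty):
    """Time-weighted average current, given the fraction of time
    the recognition stage is actually running."""
    return DETECT_MA * (1.0 - recog_duty) + RECOG_MA * recog_duty
```

For example, a recognizer that ends up active around 45% of the time would average roughly 1 mA on these figures, consistent with the number they quote; in a quiet environment the duty cycle (and the average) drops toward the detector's floor.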
(Image courtesy Sensory)
They do as much processing locally as possible – for example, having a wearable work with a phone to do this if there’s not enough oomph in the wearable. That keeps things working even when there’s no connection, and it’s better for privacy. They can escalate to the cloud for more horsepower if necessary, which works particularly well if the thing being requested requires cloud access anyway.
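That escalation policy, cheapest tier first, can be sketched as a simple fallback chain. Everything here is a hypothetical illustration of the idea, not Sensory's API: the tier callables and the confidence scores they return are assumptions.

```python
# Local-first recognition with escalation: try the on-device
# recognizer, then a paired phone, then the cloud, stopping at the
# first sufficiently confident answer. Each tier is a callable
# (hypothetical) returning (text, confidence).
def recognize(audio, local, phone=None, cloud=None, min_conf=0.7):
    for tier in (local, phone, cloud):
        if tier is None:
            continue              # tier unavailable, e.g. no connection
        text, conf = tier(audio)
        if conf >= min_conf:
            return text
    return None                   # nothing confident enough anywhere
```

Note that if `phone` and `cloud` are absent (no connection), the device still works within the local tier's limits, which is the privacy and availability benefit described above.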
Their latest announcement has them adding deep learning capabilities to their TrulyHandsFree product. They say that this increases their word accuracy by up to 80% while shrinking the size of their acoustic models by a factor of 10. This also lowers their power consumption to the levels discussed above. You can read more in their announcement.
posted by Dick Selwood
Today (November 2nd) the International Telecommunication Union’s World Radiocommunication Conference meets in Geneva. It will run until the 27th and, apparently, during much of that time, there will be bitter arguments about the leap second. As we discussed in Just a Second http://www.eejournal.com/archives/articles/20150924-justasecond/ a few weeks ago, now that we measure the second using atomic frequencies, it is clear that the earth doesn't rotate evenly, so, at relatively unpredictable times, a leap second is declared. This requires the administrators of a range of systems to tell those systems to add the extra second. The first time a leap second was declared, there were serious problems. Supporters of the leap second claim that the problem is well understood and manageable, but opponents argue that, now that we no longer rely on the sun and stars for navigation and instead use artificial satellites and the internet to share time signals, there is no need to take any risks. The arguments are summarised in IEEE Spectrum. http://spectrum.ieee.org/tech-talk/computing/networks/leap-second-heads-into-fierce-debate
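To see why administrators have to intervene: leap seconds give UTC an occasional 61-second minute, so naive "86,400 seconds per day" arithmetic drifts from atomic time unless a table of announced leap seconds is consulted. The two dates below are real announced leap seconds; the lookup itself is a deliberately simplified sketch (it only tracks offsets from 2009 onward).

```python
import datetime

# Dates at whose end a leap second was inserted (real announcements).
LEAP_SECONDS = [
    datetime.date(2012, 6, 30),
    datetime.date(2015, 6, 30),
]

def tai_minus_utc(d, base=34):
    """Seconds that atomic time (TAI) is ahead of UTC on date d.
    base=34 is the real TAI-UTC offset in force from January 2009;
    each announced leap second since then adds one more."""
    return base + sum(1 for leap in LEAP_SECONDS if d > leap)

print(tai_minus_utc(datetime.date(2015, 11, 2)))  # -> 36
```

Any system that wants seconds-since-epoch arithmetic to agree with wall-clock UTC has to apply a table like this, and the table only grows when a new leap second is declared, which is exactly the operational burden the opponents want to eliminate.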