
Robots that make maps tend to rely heavily on vision of one sort or another, whether it's a camera image or something just off the end of the visible spectrum, like a laser scanner. This is understandable: humans are adapted to use vision, so we understand it pretty well, and we can extract a lot of useful information from a visual image. Animals, on the other hand, take advantage of a much broader suite of senses, specialized for their environments. If you only come out at night, or if you live in a hole, vision is perhaps not the best solution for you. Fittingly, a robot modeled after a shrew can now make maps using just the tactile feedback from a prodigious set of artificial whiskers.
New research presented at the IEEE International Conference on Robotics and Automation (ICRA) this week has Shrewbot performing what the researchers are calling tSLAM, or tactile Simultaneous Localization and Mapping. The robot has an array of 18 individually actuated whiskers mounted on a three-degree-of-freedom neck, which is attached to an omni-drive mobile platform. Using a combination of wheel odometry and contact detection by whisking (yes, the behavior really is called whisking), Shrewbot gradually builds a tactile map of an area by combining the hundreds (or thousands) of whisk contacts it feels when it encounters walls or other obstacles.
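The paper's actual tSLAM pipeline isn't detailed here, but the core idea, fusing odometry-derived poses with whisk-contact points into a map, can be sketched as a simple occupancy-grid update. Everything below (function names, the grid representation, the range/bearing contact model) is illustrative, not taken from the Shrewbot system:

```python
import math

def contact_to_world(pose, contact_range, contact_bearing):
    """Transform a single whisker contact, given as a range and bearing
    in the robot's frame, into world coordinates using the current
    odometry pose (x, y, heading theta)."""
    x, y, theta = pose
    cx = x + contact_range * math.cos(theta + contact_bearing)
    cy = y + contact_range * math.sin(theta + contact_bearing)
    return cx, cy

def update_grid(grid, cell_size, world_point):
    """Mark the grid cell containing the contact point as occupied by
    incrementing a hit count -- many whisks on the same cell build up
    confidence that an obstacle is really there."""
    i = int(world_point[0] // cell_size)
    j = int(world_point[1] // cell_size)
    grid[(i, j)] = grid.get((i, j), 0) + 1
    return grid

# Example: robot at (1.0, 0.0) facing along +x, whisker touches
# something 0.2 m straight ahead.
grid = {}
pt = contact_to_world((1.0, 0.0, 0.0), 0.2, 0.0)
update_grid(grid, 0.1, pt)
```

In a real system the odometry pose drifts, which is exactly why this is a SLAM problem rather than pure mapping: repeated contacts with already-mapped obstacles are what let the robot correct its pose estimate.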
via IEEE Spectrum


