Almost exactly a year ago, we posted about how Ashutosh Saxena’s lab at Cornell was teaching robots to use their “imaginations” to try to picture how a human would want a room organized. The research was successful, with algorithms that used hallucinated humans (which are the best sort of humans) to influence the placement of objects performing significantly better than other methods. Cool stuff indeed, and now comes the next step: labeling 3D point-clouds obtained from RGB-D sensors by leveraging contextual hallucinated people.
A significant amount of research has investigated the relationships between objects and other objects. It’s called semantic mapping, and it’s very valuable in giving robots something like “intuition” or “common sense.” But being humans, we live human-centered lives, which means most of our stuff is arranged around humans too — and keeping that in mind can help a robot put objects in context.
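To make the idea concrete, here is a minimal sketch (not the authors' actual algorithm) of how hallucinated humans could help label a point-cloud segment: sample hypothetical human positions in the scene, then score candidate object labels by how well the segment's location matches each label's preferred distance from a human. The label names, preferred distances, and Gaussian falloff are all assumptions for illustration.

```python
import numpy as np

# Hypothetical affordance model: preferred distance (meters) of each
# object class from a human. These values are illustrative assumptions,
# not taken from the Cornell paper.
PREFERRED_DISTANCE = {"monitor": 0.6, "keyboard": 0.4, "ceiling_lamp": 2.0}

def affordance_score(segment_centroid, human_positions, label):
    """Score a candidate label by how close the segment sits to that
    label's preferred distance from the nearest hallucinated human."""
    d_pref = PREFERRED_DISTANCE[label]
    dists = np.linalg.norm(human_positions - segment_centroid, axis=1)
    nearest = dists.min()
    # Gaussian falloff around the preferred distance (sigma = 0.3 m is
    # another illustrative assumption).
    return float(np.exp(-((nearest - d_pref) ** 2) / (2 * 0.3 ** 2)))

def best_label(segment_centroid, human_positions):
    """Pick the label whose affordance score is highest for this segment."""
    return max(PREFERRED_DISTANCE,
               key=lambda lbl: affordance_score(segment_centroid,
                                                human_positions, lbl))

# Example: a segment 0.4 m from a hallucinated seated human scores
# highest as "keyboard" under this toy model.
humans = np.array([[0.0, 0.0, 0.0]])
print(best_label(np.array([0.4, 0.0, 0.0]), humans))
```

The point of the sketch is the structure of the reasoning: the human never has to actually be there — a plausible imagined pose is enough to bias the labeling toward human-centered interpretations.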
via IEEE Spectrum
June 20, 2013