
Unconventional Touch

Funny how we have dual expectations when it comes to our hands. Give us a workshop and some tools, and we’ll craft a finely honed wooden object, modulating pressure as we sand, gingerly holding nails in place so as not to end up with blackened thumbnails, and convolving fine finger movements with the delicate response of tiny brush bristles to paint intricate detail. Give us paper and a pen or brush, and we’ll create nuanced calligraphy that pleases and informs. Give a skilled dentist some scary-looking steel tools, and he or she will place them – hopefully – exactly where needed to manipulate our various mouthparts, mitigating collateral discomfort.

But give us a phone? We’re excited that we get to swipe.

When it comes to what we expect of our digits, we have one standard for the real world and a very different one for the digital world. That suggests there’s a lot more we can exploit in how we operate our new toys and tools. At present, we’re literally almost all thumbs when it comes to touch interaction, although multi-touch progress will at least let us use multiple fingers (even if the system treats them all as thumbs).

At the recent Touch Gesture Motion conference, CMU’s Chris Harrison presented a few of the many projects he’s involved in. It’s unlikely that these will all be commercialized, but it’s nice to pull one’s head out of the exigencies of the practical in order to catch a glimpse of what might be possible sometime in the future.

He and his teams have documented their projects pretty well, with videos accompanying many of them to provide that “No way!” feeling. I decided to summarize the ones he presented at the conference – which is a small subset of the projects up on his website. Unlike many flights of fancy, these are backed up by proofs of concept, even though there’s much work remaining.

There are two directions that he explores in allowing us to extract more control out of our motions. One is by increasing the richness of information that our fingers can express. I think of this as adding dimensions or degrees of freedom to the ones we have now: location (2D) and, in some cases, pressure. The other direction looks for ways to increase the area we have available for touching – phone screens are notoriously small, and our fingers aren’t shrinking.

Three projects are aimed at getting more from our current screens. The first leverages shear – the lateral forces on our fingers as we touch things and move along a surface. It’s not the movement per se – we can already track that. It’s kind of like pressure, except that, in our context, pressure is only in the Z direction; this is “pressure” in the X and Y directions.

He has identified five different use cases for shear input.

  • Gestures: cutting and pasting text; controlling phone volume; making a “V” to get to voicemail or an “S” to silence the phone.
  • “In-situ Continuous Manipulation and Control”: this is the ability to modulate some activity based on how hard you push your finger to the side. It could be scrolling speed, zoom, the characteristics of something you’re touching (like the saturation or brightness of a photo), or navigation along a path that would otherwise require several page changes. These can all be done without moving or lifting a finger (the scrolling case is sketched just after this list).
  • Fine control: this relates to the “gain” between your movement and that of whatever you’re controlling. It’s easy to drag an unused app into the trash, but working with detailed artwork, nudging things into place precisely, is not so easy. Pressing your fingers to the side could trigger such delicate movements.
  • Summoning menus: when contextually appropriate, using shear over an object could present a menu of options for that object.
  • Something he calls “Alt Drag”: this is an alternative to the right mouse button. So a “soft” drag might create a freeform line, but a “hard” drag would create a straight line. Or a soft drag would move an object; a hard drag would “tear off” a copy of the object.
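To make the “in-situ continuous control” idea concrete, here’s a minimal sketch of shear-modulated scrolling. This is my own illustration, not code from Harrison’s work; the units, gain, and dead zone are all assumptions (the dead zone keeps ordinary taps from registering as sideways pushes):

```python
from dataclasses import dataclass

@dataclass
class ShearSample:
    """One touch frame: position plus lateral (shear) force."""
    x: float          # touch position (px)
    y: float
    shear_x: float    # lateral force along X (hypothetical units)
    shear_y: float    # lateral force along Y

DEAD_ZONE = 0.05      # shear below this is treated as an ordinary touch
GAIN = 400.0          # scroll px/s per unit of shear force

def scroll_velocity(s: ShearSample) -> tuple[float, float]:
    """Map shear force to a scroll velocity -- no finger movement needed."""
    mag = (s.shear_x**2 + s.shear_y**2) ** 0.5
    if mag < DEAD_ZONE:
        return (0.0, 0.0)
    # Scale only the force beyond the dead zone so speed ramps up from zero.
    scale = GAIN * (mag - DEAD_ZONE) / mag
    return (s.shear_x * scale, s.shear_y * scale)

# A gentle sideways push scrolls slowly; a harder push scrolls faster.
print(scroll_velocity(ShearSample(100, 200, 0.02, 0.0)))   # in dead zone: (0, 0)
print(scroll_velocity(ShearSample(100, 200, 0.30, 0.10)))  # scrolls right and down
```

The same mapping with a much smaller gain would serve the fine-control case: a hard sideways press nudges whatever you’re adjusting by a fraction of your finger’s actual travel.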

The next project is actually being commercialized by a CMU spin-off called Qeexo (there’s no pronunciation guide on their site, so I’m assuming that’s “Keek-so” and not “Kweek-so”…). The technology is called FingerSense, and it identifies what’s touching the screen based on its acoustic response. You might be impressed with an ability to distinguish skin from wood from metal, but how about finger from fingernail from knuckle? Yeah, it’s like that.
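Qeexo hasn’t published its classifier, but the general recipe for this kind of acoustic classification is well established: window the impact sound, extract spectral features, and match against trained examples. Here’s an illustrative nearest-centroid sketch – every function name, feature choice, and band count is my assumption, not the actual FingerSense pipeline:

```python
import numpy as np

def impact_features(window: np.ndarray) -> np.ndarray:
    """Normalized energy across a few coarse frequency bands -- different
    strikers (pad, nail, knuckle) excite the glass differently."""
    spectrum = np.abs(np.fft.rfft(window))
    bands = np.array_split(spectrum, 4)
    energy = np.array([b.sum() for b in bands])
    return energy / (energy.sum() + 1e-9)

def train(examples: dict[str, list[np.ndarray]]) -> dict[str, np.ndarray]:
    """One feature centroid per class, e.g. 'pad', 'nail', 'knuckle'."""
    return {label: np.mean([impact_features(w) for w in windows], axis=0)
            for label, windows in examples.items()}

def classify(window: np.ndarray, centroids: dict[str, np.ndarray]) -> str:
    f = impact_features(window)
    return min(centroids, key=lambda lbl: np.linalg.norm(f - centroids[lbl]))
```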

Finally, there’s a project called Capacitive Fingerprinting. This isn’t so much about providing input as it is about figuring out who’s doing the touching. It works by detecting the impedance profile of different users. It’s of obvious use in a two-person game, for instance, so the device can tell which player is doing the touching.

The effect isn’t permanent: if you “log on” and play for a while and then come back after several hours, it may no longer recognize you. It’s not clear what changes, but it could be affected by the environment or your mood or the fact that you finally managed to digest that monster burger you had for lunch. Whatever the reason, this is therefore not a security tool; it can’t identify you uniquely forever. It simply knows that, for now, you are dude 1 and the other person is dude 2.
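In code, that session-scoped matching might look something like this – a hypothetical sketch that assumes the hardware hands us a vector of impedance magnitudes swept across several frequencies (the real sensing front end is considerably more involved):

```python
import numpy as np

class WhoTouched:
    """Session-scoped player disambiguation from a swept-frequency
    impedance profile (a vector of |Z| readings, one per frequency)."""

    def __init__(self, tolerance: float = 0.15):
        self.profiles: dict[str, np.ndarray] = {}
        self.tolerance = tolerance

    def enroll(self, user: str, profile: np.ndarray) -> None:
        # Normalize so the profile's shape matters more than its level.
        self.profiles[user] = profile / np.linalg.norm(profile)

    def identify(self, profile: np.ndarray) -> str | None:
        p = profile / np.linalg.norm(profile)
        best, best_d = None, self.tolerance
        for user, ref in self.profiles.items():
            d = np.linalg.norm(p - ref)
            if d < best_d:
                best, best_d = user, d
        return best   # None: the profile has drifted -- time to re-enroll
```

Returning None is the honest outcome that drift implies: nobody matches anymore, so dude 1 and dude 2 simply touch the screen again to re-enroll.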

When it comes to increasing the effective area on which you can provide input, he touched on five different projects of varying improbability. The first dealt with making use of a feature that many devices have but that never gets used for anything except attaching one thing (typically your device) to something else (typically the power outlet): it’s the cord.

A cord provides several independent degrees of freedom, some of which they’ve tested out. These include the touch location (theoretically 2D, although if the cord is really small and round, it would seem more like a 1D touch – or perhaps 1½D), twist, bend, and pull. This could give Bluetooth a run for its money in the creepy category: Is that modern-day Captain Queeg merely fiddling desperately with his power cord as the prosecutor closes in on him, or is he quietly sending a Buy order to his broker?
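As a sketch of how those degrees of freedom might surface to software – assuming a hypothetical instrumented cord that reports touch position, twist, bend, and pull each frame – frame-to-frame changes can be quantized into coarse gesture events:

```python
from dataclasses import dataclass

@dataclass
class CordState:
    """Hypothetical per-frame readings from an instrumented cord."""
    touch_pos: float | None   # 0..1 along the cord; None if untouched
    twist: float              # degrees of rotation from rest
    bend: float               # curvature; 0 = straight
    pull: float               # axial tension

def cord_gesture(prev: CordState, cur: CordState) -> str | None:
    """Quantize frame-to-frame changes into coarse gesture events.
    All thresholds are invented and would need real-world tuning."""
    if cur.pull > 2.0:
        return "pull"                         # e.g., answer or hang up
    if cur.bend > 0.5 and prev.bend <= 0.5:
        return "bend"
    if abs(cur.twist - prev.twist) > 15:
        return "twist_cw" if cur.twist > prev.twist else "twist_ccw"
    if cur.touch_pos is not None and prev.touch_pos is not None:
        delta = cur.touch_pos - prev.touch_pos
        if abs(delta) > 0.05:                 # the "1D-ish" sliding touch
            return "slide_up" if delta > 0 else "slide_down"
    return None
```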

Another project called Abracadabra detects changes in the surrounding magnetic field so that small devices can be kept small rather than being sized up just to accommodate our hammy hands. This is effectively magnetic gesture recognition.
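In the published Abracadabra work, the trick is a small magnet worn on the fingertip, tracked by the device’s magnetometer. A minimal version of the idea – with made-up units, thresholds, and a radial-menu mapping of my own – might look like this:

```python
import math
import numpy as np

def calibrate(ambient_samples: list[np.ndarray]) -> np.ndarray:
    """Average the ambient field (Earth's plus local clutter) with the
    magnet far away, so it can be subtracted from live readings."""
    return np.mean(ambient_samples, axis=0)

def magnet_cursor(reading: np.ndarray, baseline: np.ndarray,
                  n_items: int = 8, threshold: float = 5.0) -> int | None:
    """Map the field deviation caused by a finger-worn magnet to one of
    n_items radial menu slots; None if the magnet is out of range."""
    dev = reading - baseline
    if np.linalg.norm(dev) < threshold:   # units and cutoff are invented
        return None
    angle = math.atan2(dev[1], dev[0]) % (2 * math.pi)
    return int(angle / (2 * math.pi) * n_items)
```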

“Scratch input” allows one to turn any textured surface into an input device. That could be the table, the wall, the carpet, whatever. It’s also an acoustic thing, with a sensitive mike detecting the nature of the motion.
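The signal processing can be surprisingly modest. A toy stroke counter – illustrative only, with thresholds that would need per-surface tuning – just watches the mike’s short-term energy envelope:

```python
import numpy as np

def count_strokes(mic: np.ndarray, rate: int = 44100,
                  thresh: float = 0.1, min_gap_s: float = 0.08) -> int:
    """Count distinct scratch strokes in a mono audio buffer by finding
    bursts where the short-term energy envelope crosses a threshold."""
    win = int(0.01 * rate)                       # 10 ms energy windows
    n = len(mic) // win
    env = np.sqrt(np.mean(mic[:n * win].reshape(n, win) ** 2, axis=1))
    strokes, last_loud = 0, -1e9
    for i, e in enumerate(env):
        t = i * 0.01
        if e > thresh and t - last_loud > min_gap_s:
            strokes += 1                         # a new burst began
        if e > thresh:
            last_loud = t
    return strokes
```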

“Skinput” turns your body into the touch surface. They don’t model body mechanics explicitly; instead, they rely on the fact that touching your skin creates transverse surface waves and longitudinal internal waves that propagate through muscle and bone, filtered by the materials and – especially – by joints and organs. This lets them determine where you touched, turning palms, arms, elbows – heck, theoretically, anything – into a touch pad. Not even gonna touch the creepy side of this.
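The classification machinery mirrors the FingerSense sketch above: the body’s filtering gives each tap location a distinctive spectral shape, and a short per-user training pass supplies the reference centroids. As before, this is my illustration, not the Skinput implementation:

```python
import numpy as np

def bioacoustic_features(window: np.ndarray) -> np.ndarray:
    """Band-energy shape of the vibrations reaching a skin-worn sensor;
    tissue and joints filter each tap location differently."""
    spectrum = np.abs(np.fft.rfft(window))
    bands = np.array_split(spectrum, 6)
    energy = np.array([b.sum() for b in bands])
    return energy / (energy.sum() + 1e-9)

def locate_tap(window: np.ndarray,
               centroids: dict[str, np.ndarray]) -> str:
    """centroids holds per-location feature averages from training,
    e.g. {'palm': ..., 'forearm': ..., 'elbow': ...}."""
    f = bioacoustic_features(window)
    return min(centroids, key=lambda loc: np.linalg.norm(f - centroids[loc]))
```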

Finally, “OmniTouch” is a projector-and-camera setup that can turn absolutely any surface into a touch pad. It uses a pico-projector to display the “screen” on whatever surface is handy, while a depth camera detects the surface, the image, and your fingers. Because it has X, Y, and Z information about both your fingers and the scene, it can not only determine the touch location but also decide whether your finger is hovering over an object or has actually “clicked” it.
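The hover-versus-click decision falls out of the depth data. Here’s a bare-bones sketch – mine, not the OmniTouch code, and the 8 mm contact threshold is a placeholder:

```python
import numpy as np

CLICK_MM = 8.0   # fingertip this close to the surface counts as contact

def touch_state(fingertip: tuple[int, int, float],
                surface_mm: np.ndarray) -> str:
    """Classify a fingertip as clicking, hovering, or away, given a depth
    map of the bare surface (mm from the camera; larger = farther). In
    practice the surface depth under the finger is inferred from nearby
    pixels, since the finger occludes the point directly beneath it."""
    fx, fy, fz = fingertip
    gap = float(surface_mm[fy, fx]) - fz   # the finger sits nearer the camera
    if gap < CLICK_MM:
        return "click"
    if gap < 5 * CLICK_MM:
        return "hover"
    return "away"
```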

Of course, once all of this is possible, we have to design the best interfaces. We could do all kinds of things if we were good at memorizing long lists of gesture combinations. But we’re not: we learn a few things (a swipe is a learned way to unlock a phone; there’s nothing obvious about it), and some things are culturally intuitive. Anything beyond that is like learning Unix (although admittedly without being intentionally obtuse).

So it’s cool to see what we can do. It will be even cooler to see what we do do.

 

More info:

All of these projects and many more can be found on Mr. Harrison’s website.
