
Unconventional Touch

Funny how we have dual expectations when it comes to our hands. Give us a workshop and some tools, and we’ll craft a finely-honed wooden object using pressure to sand, gingerly holding nails in place so as not to end up with blackened thumbnails, and convolving fine finger movements with the delicate response of tiny brush bristles to paint intricate detail. Give us paper and a pen or brush, and we’ll create nuanced calligraphy that pleases and informs. Give a skilled dentist some scary-looking steel tools, and he or she will place them – hopefully – exactly where needed to manipulate our various mouthparts, mitigating collateral discomfort.

But give us a phone? We’re excited that we get to swipe.

When it comes to what we expect of our digits, we have one standard for the real world and a very different one for the digital world. Which suggests that there’s a lot more that we can exploit when it comes to how we operate our new toys and tools. We are, at present, literally almost all thumbs when it comes to touch interaction, although multi-touch progress will at least allow us to use multiple fingers (even if the system treats them all as thumbs).

At the recent Touch Gesture Motion conference, CMU’s Chris Harrison presented a few of the many projects he’s involved in. It’s unlikely that these will all be commercialized, but it’s nice to pull one’s head out of the exigencies of the practical in order to catch a glimpse of what might be possible sometime in the future.

He and his teams have documented their projects pretty well, with videos accompanying many of them to provide that “No way!” feeling. I decided to summarize the ones he presented at the conference – which is a small subset of the projects up on his website. Unlike many flights of fancy, these are backed up by proofs of concept, even though there’s much work remaining.

There are two directions that he explores in allowing us to extract more control out of our motions. One is by increasing the richness of information that our fingers can express. I think of this as adding dimensions or degrees of freedom to the ones we have now: location (2D) and, in some cases, pressure. The other direction looks for ways to increase the area we have available for touching – phone screens are notoriously small, and our fingers aren’t shrinking.

Three projects are aimed at getting more from our current screens. The first leverages shear – the lateral forces on our fingers as we touch things and move along a surface. It’s not the movement per se – we can already track that. It’s kind of like pressure, except that, in our context, pressure is only in the Z direction; this is “pressure” in the X and Y directions.

He has identified five different use cases for shear input.

  • Gestures: cutting and pasting text; controlling phone volume; making a “V” to get to voicemail or an “S” to silence the phone.
  • “In-situ Continuous Manipulation and Control”: this is the ability to modulate some activity based on how hard you push your finger to the side. It could be scrolling speed, zoom, the characteristics of something you’re touching (like the saturation or brightness of a photo), or navigation along a path that would require several page changes. These can all be done without moving or lifting a finger.
  • Fine control: this relates to the “gain” between your movement and that of whatever you’re controlling. It’s easy to drag an unused app into the trash, but working with detailed artwork, nudging things into place precisely, is not so easy. Pressing your fingers to the side could trigger such delicate movements.
  • Summoning menus: when contextually appropriate, using shear over an object could present a menu of options for that object.
  • Something he calls “Alt Drag”: this is an alternative to the right mouse button. So a “soft” drag might create a freeform line, but a “hard” drag would create a straight line. Or a soft drag would move an object; a hard drag would “tear off” a copy of the object.
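All of these use cases need nothing new from the gesture vocabulary, only a shear vector reported alongside the usual touch point. As a minimal sketch (the event fields, units, and the "hard" threshold are my own illustrative assumptions, not values from Harrison's work), a UI loop might route shear like this:

```python
import math

# Hypothetical touch event: a position plus a lateral (shear) force vector.
# Field names and the 0.5 "hard" threshold are illustrative assumptions.
class TouchEvent:
    def __init__(self, x, y, shear_x, shear_y):
        self.x, self.y = x, y
        self.shear_x, self.shear_y = shear_x, shear_y

    @property
    def shear_magnitude(self):
        # Magnitude of the lateral force, independent of its direction.
        return math.hypot(self.shear_x, self.shear_y)

HARD_THRESHOLD = 0.5  # assumed calibration point separating "soft" from "hard"

def scroll_speed(event, base=1.0, gain=8.0):
    # "In-situ continuous control": scroll faster the harder you push
    # sideways, without moving or lifting the finger.
    return base + gain * event.shear_magnitude

def drag_mode(event):
    # "Alt Drag": a soft drag moves an object; a hard drag tears off a copy.
    return "copy" if event.shear_magnitude > HARD_THRESHOLD else "move"

soft = TouchEvent(100, 200, 0.1, 0.0)
hard = TouchEvent(100, 200, 0.6, 0.3)
```

The point of the sketch is that one extra sensed quantity (the shear vector) feeds several distinct interactions, depending only on how the application interprets it.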

The next project is actually being commercialized at a spin-off from CMU called Qeexo (there’s no pronunciation guide on their site, so I’m assuming that’s “Keek-so” and not “Kweek-so”…). The technology is called FingerSense, and it’s used to figure out what thing is touching the screen based on its acoustic response. You might be impressed with an ability to distinguish skin from wood from metal, but how about finger from fingernail from knuckle? Yeah, it’s like that.
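Qeexo hasn't published FingerSense's internals, but the general idea — classify each touch by its acoustic signature — can be sketched with a toy nearest-centroid classifier. The feature values below are invented stand-ins for real acoustic features, and the real system's features and model are assumptions I'm not privy to:

```python
# Toy nearest-centroid classifier over two made-up acoustic features
# (think: impact sharpness and decay time). Centroid values are invented.
CENTROIDS = {
    "fingertip": (0.2, 0.8),   # soft impact, slow decay (assumed)
    "fingernail": (0.9, 0.3),  # sharp impact, fast decay (assumed)
    "knuckle": (0.5, 0.5),     # somewhere in between (assumed)
}

def classify_touch(features):
    """Return the label whose centroid is nearest the measured features."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(CENTROIDS, key=lambda label: dist2(features, CENTROIDS[label]))
```

A production classifier would be trained on real recordings rather than hand-placed centroids, but the pipeline shape — extract acoustic features, pick the nearest known class — is the same.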

Finally, there’s a project called Capacitive Fingerprinting. This isn’t so much about providing input as it is about figuring out who’s doing the touching. This works by detecting the impedance profile of different users. It’s of obvious use in a two-person game, for instance, so the device can tell which player is doing the touching.

The effect isn’t permanent: if you “log on” and play for a while and then come back after several hours, it may no longer recognize you. It’s not clear what changes, but it could be affected by the environment or your mood or the fact that you finally managed to digest that monster burger you had for lunch. Whatever the reason, this is therefore not a security tool; it can’t identify you uniquely forever. It simply knows that, for now, you are dude 1 and the other person is dude 2.
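Because the profile drifts over hours, identification has to be session-scoped: enroll whoever touches first, then match later touches to the nearest enrolled profile. A minimal sketch, where the profile values and the matching rule are my own assumptions rather than the published method:

```python
# Session-scoped user matching. A "profile" here is a tuple of invented
# impedance measurements at a few probe frequencies; the real system sweeps
# frequencies and uses a trained classifier.
class TouchSession:
    def __init__(self):
        self.players = []  # enrolled profiles, in order of first touch

    def identify(self, profile, tolerance=0.2):
        # Match against enrolled players; enroll a new one if nothing is close.
        for i, enrolled in enumerate(self.players):
            if all(abs(a - b) <= tolerance for a, b in zip(profile, enrolled)):
                return i
        self.players.append(profile)
        return len(self.players) - 1

session = TouchSession()
p1 = session.identify((1.0, 0.4, 0.7))        # dude 1 enrolls
p2 = session.identify((0.3, 0.9, 0.2))        # dude 2 enrolls
again = session.identify((1.05, 0.42, 0.68))  # close to dude 1's profile
```

Nothing here survives the session, which matches the article's caveat: it's player disambiguation, not authentication.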

When it comes to increasing the effective area on which you can provide input, he touched on five different projects of varying improbability. The first dealt with making use of a feature that many devices have but that never gets used for anything except attaching one thing (typically your device) to something else (typically the power outlet): it’s the cord.

A cord provides several independent degrees of freedom, some of which they’ve tested out. These include the touch location (theoretically 2D, although if the cord is really small and round, it would seem more like a 1D touch – or perhaps 1½D), twist, bend, and pull. This could give Bluetooth a run for its money in the creepy category: Is that modern-day Captain Queeg merely fiddling desperately with his power cord as the prosecutor closes in on him, or is he quietly sending a Buy order to his broker?
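The cord's degrees of freedom could be treated like any other input axes. A hedged sketch of binding them to phone controls — the sensor readings, thresholds, and bindings are all invented for illustration:

```python
# Invented cord state: each argument is a normalized sensor reading in [0, 1]
# (twist in [-1, 1]). The bindings below are illustrative, not from any paper.
def cord_actions(touch_pos, twist, bend, pull):
    """Map cord degrees of freedom to actions."""
    actions = []
    if pull > 0.8:
        actions.append("answer_call")  # sharp tug on the cord
    if abs(twist) > 0.5:
        actions.append("volume_up" if twist > 0 else "volume_down")
    if bend > 0.6:
        actions.append("mute")
    if touch_pos is not None:
        # Treat touch along the cord as a roughly 1D scrub axis.
        actions.append(f"scrub_to:{touch_pos:.2f}")
    return actions
```

Each axis is independent, so a single fidget can express several commands at once — which is exactly what makes the Captain Queeg scenario plausible.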

Another project called Abracadabra detects changes in the surrounding magnetic field so that small devices can be kept small rather than being sized up just to accommodate our hammy hands. This is effectively magnetic gesture recognition.

“Scratch input” allows one to turn any textured surface into an input device. That could be the table, the wall, the carpet, whatever. It’s also an acoustic thing, with a sensitive mike detecting the nature of the motion.

“Skinput” turns your body into the touch surface. While they don’t model body mechanics explicitly to make this work, they rely on the fact that touching your skin creates transverse surface waves and longitudinal internal waves that propagate through muscle and bone, being filtered by the materials and – especially – by joints and organs. This lets them determine where you touched, turning palms, arms, elbows – heck, theoretically, anything – into a touch pad. Not even gonna touch the creepy side of this.

Finally, “OmniTouch” is a projector-and-camera setup that can turn absolutely any surface into a touch pad. It uses a pico-projector to display the “screen” on whatever surface is handy. The camera detects the surface, the image, and your fingers. The fact that it has X, Y, and Z information about both your fingers and the scene allows it not only to determine the location, but also to decide whether your finger is hovering over an object or has actually “clicked” the object.
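The hover-versus-click decision falls out of the depth data: compare the fingertip's Z to the surface's Z at the same (X, Y). A sketch with made-up numbers — the threshold is an illustrative assumption, not the system's published value:

```python
# Depth-based click detection. Depths are in millimeters from the camera;
# in a real system the surface depth would come from a plane fit to the
# depth image. The 6 mm threshold is an illustrative assumption.
CLICK_THRESHOLD_MM = 6.0

def finger_state(finger_depth_mm, surface_depth_mm):
    """Classify a fingertip as clicking or hovering over the projected UI."""
    gap = surface_depth_mm - finger_depth_mm  # how far the finger floats
    return "click" if gap <= CLICK_THRESHOLD_MM else "hover"
```

Because the camera ranges both the finger and the surface, no instrumentation of the surface itself is needed; that's what lets "absolutely any surface" work.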

Of course, once all of this is possible, we have to design the best interfaces. We could do all kinds of things if we were good at memorizing long lists of gesture combinations. But we’re not: we learn a few things (a swipe is a learned way to unlock a phone; there’s nothing obvious about it), and some things are culturally intuitive. Anything beyond that is like learning Unix (although admittedly without being intentionally obtuse).

So it’s cool to see what we can do. It will be even cooler to see what we do do.

 

More info:

All of these projects and many more can be found on Mr. Harrison’s website.
