News & Views

Most Contagious NYC / Julia Schwarz

by Contagious Team

In just over six years, touch has become mainstream. Where does it go next? That's the question Schwarz is working on, and the path she'll chart at Most Contagious.

Julia Schwarz, co-founder of Qeexo and a member of the PhD program at Carnegie Mellon University's Human-Computer Interaction Institute, is obsessed with touch. Since her undergraduate days at the University of Washington, she's been designing systems, writing algorithms and developing hardware to help people interact with computers in more efficient ways.

We are extremely excited to have Julia joining us at Most Contagious in New York on December 11 to share her views on the evolution of haptics and the opportunities developing in connected experiences. 

To find out more about Most Contagious or to purchase tickets, email arianna@contagious.com.

My main research is in building systems and handling user input. I build tools to help people build apps -- tools aimed at helping developers deal with uncertain information. Nowadays, with a lot of our input -- free-space gestures, touch, voice, things like that -- there's a lot of uncertainty around what the user is trying to tell the computer. Developers try to make the best guess, and they're not always right.

For instance, in iOS 7, I've seen people bring up the Notification Center when they were just scrolling. Even in something as carefully designed as iOS 7, there's a lot of ambiguity. Developers design with the best information they have, but we're trying to give them better data to understand user intention. If you've been scrolling for ten seconds, your intention is to keep scrolling, not to bring up the Notification Center.
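As a rough illustration of that idea -- a hypothetical sketch, not Apple's or Qeexo's actual logic -- a recognizer could weigh an edge swipe against recent scrolling before deciding to open a panel:

    import time

    class EdgeSwipeArbiter:
        """Toy arbiter: suppress an edge swipe (e.g. opening a notification panel)
        when the user has been scrolling continuously, since the likely intent
        is to keep scrolling. Thresholds are illustrative."""

        def __init__(self, scroll_timeout=0.5, suppress_after=10.0):
            self.scroll_timeout = scroll_timeout    # gap (s) that ends a scroll streak
            self.suppress_after = suppress_after    # streak (s) that implies "keep scrolling"
            self.streak_start = None
            self.last_scroll = None

        def on_scroll(self, now=None):
            now = time.monotonic() if now is None else now
            if self.last_scroll is None or now - self.last_scroll > self.scroll_timeout:
                self.streak_start = now             # a new scroll streak begins
            self.last_scroll = now

        def should_open_panel(self, now=None):
            now = time.monotonic() if now is None else now
            scrolling = (self.last_scroll is not None
                         and now - self.last_scroll <= self.scroll_timeout)
            streak = now - self.streak_start if scrolling else 0.0
            # A long, still-active scroll streak is strong evidence the swipe
            # was accidental, so treat it as more scrolling instead.
            return not (scrolling and streak >= self.suppress_after)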

Contagious: It sounds like even the relatively simple interpretation of touch data would get more complex in a free-space scenario.

It's really difficult there. I've done a little bit of work on this, especially with the Kinect in a living room. You don't know whether someone is trying to grab the chips from the table or interact with the computer. Right now, in free space, you have to wave at the Kinect to activate it. How can we look at what someone's been doing previously and learn from that? If you were previously interacting and took a break, it's likely you'd want to interact again.

The Kinect looks at your hand and elbow to see if you're waving, and makes decisions based on that. The research systems look at whether you're looking at the Kinect, which is a much better cue than a wave.

They think of humans as arms or floating heads; I look at the entire body. Does the pose look engaged? People who are more engaged lean forward, for example, and you can use that as a cue. The other thing you can use is the time component. Where were they for the last few minutes? What was the space their body took up? If they go out of that space, they're trying to do something. They might lean forward and bring their hand up.
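A minimal sketch of those two cues, assuming joint positions arrive from a skeleton tracker; the feature blend and thresholds below are invented for illustration:

    from collections import deque
    import math

    class EngagementEstimator:
        """Toy engagement score from two cues: a forward-leaning pose, and
        movement outside the space the body has recently occupied."""

        def __init__(self, history_seconds=120, fps=30):
            self.hip_history = deque(maxlen=history_seconds * fps)

        def update(self, head, hip):
            """head, hip: (x, y, z) joint positions in metres, z toward the sensor."""
            self.hip_history.append(hip)

            # Cue 1: lean -- how far the head sits in front of the hips.
            lean = hip[2] - head[2]

            # Cue 2: departure from the space occupied over the last few minutes.
            n = len(self.hip_history)
            cx = sum(p[0] for p in self.hip_history) / n
            cz = sum(p[2] for p in self.hip_history) / n
            departure = math.hypot(hip[0] - cx, hip[2] - cz)

            # Blend the cues into a rough engagement score in [0, 1].
            return min(1.0, max(0.0, 2.0 * lean) + min(0.5, departure))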

Contagious: How is Qeexo taking the stuff you worked on at CMU forward?

The broader idea of Qeexo is that we're bringing machine learning and artificial intelligence into human-computer interaction. One way is this notion of rich touch: allowing multitouch to do new things. Because we're using things like machine learning to detect whether a touch comes from a finger pad or a knuckle, there's always uncertainty.

That's what I've spent four years on: figuring out how to use those things so that the interaction is correct. A lot of that has been in improving the recognition. Ultimately you just really want a good sensor. We're focusing on the sensor now, and later we can bring in the other signals -- previous actions, for example. If your signal is really good, you don't need to disambiguate. I think the best way to improve accuracy is to make the hardware better. It's kind of a balancing act.
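One way to picture that balancing act in code -- a sketch with invented numbers, not Qeexo's implementation -- is a fallback rule: the cleaner the sensor's signal, the more often the classifier clears the confidence bar, and the less the software has to hedge:

    def interpret_touch(class_probs, confident=0.9):
        """class_probs: touch type -> probability from the classifier,
        e.g. {'pad': 0.55, 'knuckle': 0.45}. The threshold is illustrative.

        If the top guess is confident, act on it; otherwise fall back to the
        least surprising interpretation (an ordinary finger-pad tap) rather
        than firing a special action the user may not have intended."""
        top, p = max(class_probs.items(), key=lambda kv: kv[1])
        return top if p >= confident else 'pad'

    print(interpret_touch({'pad': 0.55, 'knuckle': 0.45}))  # noisy reading -> 'pad'
    print(interpret_touch({'pad': 0.04, 'knuckle': 0.96}))  # clean reading -> 'knuckle'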

Contagious: Is Qeexo app-specific or device-specific, i.e. part of an OS or device hardware? How does it work?

We actually have a version that works on a stock phone, but it's better if you can add an additional sensor that's coupled with the screen. We listen to the sound the screen makes when you tap on it. So we're working with OEMs to make small modifications and integrate with core components of the phone. Our main innovation is in the software: the algorithms that work out what kind of touch it was, given the sound the screen makes.
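In outline, that kind of pipeline could look like the sketch below: take a short audio window around the tap, reduce it to spectral features, and hand those to a trained classifier. The features and model here are assumptions for illustration, not Qeexo's actual algorithm.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def tap_features(audio_window, n_bands=16):
        """Crude spectral-energy features for a short clip recorded around a tap."""
        spectrum = np.abs(np.fft.rfft(audio_window))
        bands = np.array_split(spectrum, n_bands)
        return np.array([band.mean() for band in bands])

    def train_tap_classifier(labelled_windows):
        """labelled_windows: list of (audio_window, label) pairs,
        with labels such as 'pad', 'knuckle' or 'nail'."""
        X = np.array([tap_features(w) for w, _ in labelled_windows])
        y = np.array([label for _, label in labelled_windows])
        clf = RandomForestClassifier(n_estimators=100)
        clf.fit(X, y)
        return clf

    def classify_tap(clf, audio_window):
        return clf.predict([tap_features(audio_window)])[0]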

Contagious: How quickly do you find people get used to all those different touch modes in applications?

It's very easy to do; it's not like a Kinect-style gesture, where it's difficult even to push a button. The hurdle is learnability: how do you make people aware of the gesture? I see two major uses for it. One is as a secondary tap: I see it replacing tap and hold. The knuckle tap is easier to do, and immediate. With tap and hold, sometimes the timer doesn't trigger and you don't know whether you've got the hold. With the knuckle, you just tap once. Anything you do with tap and hold nowadays, the knuckle will replace.

Second, I like the idea of a double knuckle tap, a 'knock-knock', to give discrete commands. When I'm drawing and want to clear the screen quickly, instead of going to the menu I just knuckle-tap twice.
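A toy dispatcher shows how an app might route those two uses; the touch-type labels and handler callbacks here are hypothetical:

    import time

    DOUBLE_TAP_WINDOW = 0.4  # seconds; illustrative value
    _last_knuckle = 0.0

    def on_touch(touch_type, position, select_at, show_context_menu, clear_canvas):
        """touch_type: 'pad' or 'knuckle', as reported by the touch classifier."""
        global _last_knuckle
        if touch_type == 'pad':
            select_at(position)                  # ordinary tap behaviour
            return
        now = time.monotonic()
        if now - _last_knuckle <= DOUBLE_TAP_WINDOW:
            clear_canvas()                       # 'knock-knock' -> discrete command
            _last_knuckle = 0.0
        else:
            show_context_menu(position)          # single knuckle replaces tap and hold
            _last_knuckle = now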

Contagious: So that could function almost as a secret handshake, where a unique pattern creates user-specific shortcuts?

It's basically another input type. So on the security side we have a demo where, instead of just swiping, you can do a cute rhythmic thing: knock knock tap tap, or knock knock swipe swipe.
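A minimal sketch of that kind of unlock check, assuming the classifier streams labelled, timestamped touch events; the pattern format is invented for illustration:

    STORED_PATTERN = ['knock', 'knock', 'tap', 'tap']  # set by the user at enrolment

    def matches_unlock(events, pattern=STORED_PATTERN, max_gap=1.0):
        """events: list of (touch_type, timestamp). Unlock only if the touch types
        match the stored pattern and the rhythm isn't too drawn out."""
        if len(events) != len(pattern):
            return False
        types_match = [t for t, _ in events] == pattern
        rhythm_ok = all(b - a <= max_gap
                        for (_, a), (_, b) in zip(events, events[1:]))
        return types_match and rhythm_ok

    print(matches_unlock([('knock', 0.0), ('knock', 0.3), ('tap', 0.7), ('tap', 1.0)]))  # True
    print(matches_unlock([('tap', 0.0), ('tap', 0.3), ('knock', 0.7), ('knock', 1.0)]))  # False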

Contagious: What are your gripes about haptics in general that you think could be made better?

I have two big ones. One is the fat-finger problem. It's super annoying that you have to make all your targets big. If I have one button with nothing around it and I press near it, it should get selected; it's a little silly that we have to be so precise. People still think of our fingers as precise pointers, like a cursor, but the input is fundamentally different.
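The fix being hinted at can be stated very simply: snap an imprecise touch to the nearest target when there is no plausible competitor. A hypothetical sketch, with invented thresholds:

    import math

    def resolve_touch(touch, targets, snap_radius=60, ambiguity_ratio=0.6):
        """touch: (x, y) in pixels; targets: name -> (x, y) centre.
        Snap to the nearest target if it's within reach and clearly closer than
        the runner-up; otherwise report a miss rather than guessing."""
        ranked = sorted((math.dist(touch, pos), name) for name, pos in targets.items())
        if not ranked or ranked[0][0] > snap_radius:
            return None
        if len(ranked) > 1 and ranked[0][0] > ambiguity_ratio * ranked[1][0]:
            return None  # two targets are nearly equally plausible: don't guess
        return ranked[0][1]

    print(resolve_touch((105, 98), {'save': (130, 100)}))                      # 'save'
    print(resolve_touch((105, 98), {'save': (130, 100), 'delete': (85, 105)})) # None (ambiguous)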

The other big qualm I have is around this idea of the Midas touch: every time you put your hand down, the touchscreen thinks it's an intentional touch. When I want to draw, I can't rest my palm on the screen, because the screen will treat wherever my palm lands as a pen stroke. I've done work to mitigate this, but no one's fully figured it out. Which touches are intentional and which are accidental? If I have an interface that reacts to how I grasp an object, and I just want to pick it up rather than interact with it, how does it understand that intention? How do we get these objects to figure out when to ignore us and when to pay attention to us?
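No one has a complete answer, but a first-pass heuristic shows the shape of the problem; the signals and thresholds below are invented for illustration:

    def is_intentional(contact_area_mm2, active_contacts, pen_active,
                       max_finger_area=150.0):
        """Very rough palm-rejection heuristic. A palm-sized contact patch, or any
        extra contact while a pen stroke is in progress, is treated as accidental."""
        if contact_area_mm2 > max_finger_area:
            return False                 # contact patch too large to be a fingertip
        if pen_active and active_contacts > 1:
            return False                 # resting palm alongside an active pen stroke
        return True

    print(is_intentional(contact_area_mm2=40.0, active_contacts=1, pen_active=False))  # True
    print(is_intentional(contact_area_mm2=900.0, active_contacts=2, pen_active=True))  # False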


Julia is speaking at Most Contagious in New York on December 11 at the Times Center. 
Purchase tickets via Eventbrite, or if you are a Contagious Feed or Magazine subscriber, contact arianna@contagious.com to take advantage of your discount.

Most Contagious will also be taking place in London on the same day, where Disney Research's Ivan Poupyrev will be taking on the same topic. Read an interview with him from our archives.
 
For more information on the London event, or to book tickets, email arianna@contagious.com.