What defines the design of interfaces and where is the future of interfaces going?

Remember how much easier it became to work with files once you could drag and drop them from one folder to another on your desktop? And how fascinating was the release of the first mouse?

It’s all because of the illusion that you are guiding things with your hand in real space, that you’re dragging real objects.

In 2012 we gained a scientific explanation. The Swedish scientist Henrik Ehrsson discovered special neurons in our brain that are dedicated to processing the information from the space around our arms. For instance, they react faster than ordinary visual neurons when it comes to avoiding danger to our arms.

So we can say that when our gestures work hand in hand with our eyesight, we are more efficient. Keep that in mind, and add to it a new and very interesting technology, Soli, which builds on our natural use of gestures.

In 2015 Ivan Poupyrev from the Google ATAP Project Soli team presented how a new radar sensing technology can be used to control computer interfaces precisely, without any tactile connection between the user and the computer.

I find this technology deeply fascinating — and we’ve only scratched the surface.

That’s why I decided to ask several experts about Soli:

With this technology what do you think will change in UI/UX design?

Claudio: Project ATAP could finally be the cheap and accessible technology that gets integrated into everyday devices. That’s a first. In terms of changing our interaction, this technology will bring a new dimension to it. From the current touch we could evolve to air hovers, gestures, etc., so the possibilities would increase enormously. Off the top of my head, I imagine simulating the conducting of a symphony orchestra with it… or choosing a tone in a color picker with two hands.

Gia: On the one hand, Soli is capable of recognizing micro-gestures (which wasn't possible before); on the other, it can work through surfaces and it's really tiny. We can imagine this combination in a smartwatch with the physical crown replaced by Soli, as shown in the video, but without trying it, it's hard to say how comfortable it would be in comparison.
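As an illustration of that "virtual crown" idea, here is a minimal sketch of how a continuous micro-gesture signal, say a finger-rub velocity coming from a radar sensor, might be turned into a scroll position. Everything in it (the class, the sensitivity values, the simulated readings) is hypothetical and has nothing to do with the actual Soli API.

```python
# Hypothetical sketch: turning a continuous micro-gesture signal into a
# virtual watch crown. The "rub velocity" readings below are simulated
# values standing in for output from a radar-based gesture sensor.

class VirtualCrown:
    """Accumulates a finger-rub velocity (mm/s) into a scroll position."""

    def __init__(self, sensitivity=0.05, dead_zone=0.5):
        self.sensitivity = sensitivity  # scroll units per mm/s
        self.dead_zone = dead_zone      # ignore tiny jitters below this speed
        self.position = 0.0

    def update(self, rub_velocity):
        # Skip movement inside the dead zone so resting fingers
        # don't scroll the UI by accident.
        if abs(rub_velocity) < self.dead_zone:
            return self.position
        self.position += rub_velocity * self.sensitivity
        return self.position


if __name__ == "__main__":
    crown = VirtualCrown()
    # Simulated readings: a slow rub forward, a pause, then a flick back.
    for velocity in [2.0, 3.5, 0.2, 0.1, -6.0]:
        print(f"velocity={velocity:+.1f} mm/s -> position={crown.update(velocity):.2f}")
```

The dead zone is the interesting design choice in this toy version: without it, the natural jitter of a resting hand would scroll the interface, which is exactly the kind of interference with everyday behavior the experts worry about later in this piece.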

Yury: Soli is not the first concept trying to expand our toolkit: we've seen attempts to control a UI with ultrasound, a cell signal, muscle tension, gaze, and thought. Right before the Soli announcement, some guys presented a similar idea in the Aria smartwatch clip. We can't predict whether they will all succeed, but I bet on Google pushing their idea really hard.

Who or what (devices, industries, markets, etc.) do you think it will benefit the most?

Yury: We'll definitely see it in the next Android Wear generation, as well as in other smartwatches and smartbands, whether they have a screen or not. The Disney MagicBand case study has made a lot of buzz recently, so we will see other companies following this idea soon, probably using something like Soli.

Tiago: Gaming. Just train the software to read your own unique array of gestures and inputs, and you're one step closer to interacting with your game console in a more comfortable fashion.

The consumer market - my first guess here would again be accessibility. Think of people who have difficulty typing, holding a pen, or moving a mouse. Think of people who just don't want to sit at a desk to write, or to hold their phone with eight fingers locked around the device while their two thumbs do all the work.

Claudio: This technology could bring a new interactive dimension to devices. Used well, it could build up a more natural relationship with them. How could this benefit us? I think the more human our relationship with devices is, the more we can expand our skills beyond our senses. This is definitely a step forward.

And of course, everybody sees the interface challenges in this technology:

Alex: For instance, this type of controller can obviously work with a wide range of home automation devices, but that field is still a wild west of experiments, with no common interface or communication language. I'd give it several years at the very least before saying anything for sure.

Anatoly: I guess the most challenging thing here is the sensor's precision. If they manage to build a solid product that actually works and recognizes users' gestures with accuracy close to ideal, it will have every chance of becoming popular.

Max: Usability issues.

  • On/off. You need dead-fast on/off switching, so it doesn't interfere with your natural behavior.
  • Bad hands. How does it handle hands that aren't clean (dirty, oily, etc.), when that makes your gestures register incorrectly?
  • User management. Multi-user issues: who takes over control, permissioning, and much more.
  • Device management. Which device responds to a gesture, and how do you manage switching between them?
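
To make the last three of Max's points a bit more concrete, here is a minimal sketch of a gesture router that only reacts after a deliberate "wake" gesture, keeps per-device permissions, and tracks which device is currently in focus. All of the gestures, device names, and users are invented for illustration; none of this reflects a real Soli SDK.

```python
# Toy sketch of gesture routing: on/off arming, per-device permissions,
# and device focus. Every name here is hypothetical.

ALLOWED_USERS = {
    "living_room_lamp": {"alice", "bob"},
    "tv": {"alice"},
}

class GestureRouter:
    def __init__(self):
        self.active = False         # ON/OFF: nothing happens until armed
        self.focused_device = None  # which device currently responds

    def handle(self, user, gesture):
        # A deliberate "wake" gesture arms or disarms the system, so
        # ordinary hand movement doesn't trigger anything by accident.
        if gesture == "double_snap":
            self.active = not self.active
            return f"gesture control {'on' if self.active else 'off'}"
        if not self.active:
            return "ignored (inactive)"
        # Pointing selects which device future gestures are routed to.
        if gesture.startswith("point:"):
            self.focused_device = gesture.split(":", 1)[1]
            return f"focus -> {self.focused_device}"
        if self.focused_device is None:
            return "ignored (no device in focus)"
        # Per-device permissioning: only allowed users may control it.
        if user not in ALLOWED_USERS.get(self.focused_device, set()):
            return f"denied: {user} cannot control {self.focused_device}"
        return f"{self.focused_device} <- {gesture} (by {user})"


if __name__ == "__main__":
    router = GestureRouter()
    for user, gesture in [("alice", "swipe_up"),     # ignored: not armed yet
                          ("alice", "double_snap"),  # arm the system
                          ("alice", "point:tv"),     # pick a device
                          ("bob", "swipe_up"),       # denied: no permission
                          ("alice", "swipe_up")]:    # routed to the TV
        print(router.handle(user, gesture))
```

Even in this toy version the hard questions Max raises are visible: the wake gesture trades safety for friction, and the permission table has to live somewhere that every device in the home agrees on.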

In the end, it's worth saying that no matter how many challenges we face, the possibilities are endless. As Donald Hoffman said in his TED talk: “The idea is that evolution has given us an interface that hides reality and guides adaptive behavior. Space and time, as you perceive them right now, are your desktop. Physical objects are simply icons in that desktop.”

So maybe this technology will give us a new perception of reality, who knows. If you want to see more insights and thoughts from the experts, read the full version of the interview here.


Tatiana was a student on our Digital Media Management MA. Find out more about the MA here.