What is Zero UI?

In simple terms, Zero UI refers to interfaces that require little to no direct interaction from users.

This design philosophy aims to use natural human interactions like voice, gestures, hand movements, or even thoughts to perform actions on digital devices.

The goal of Zero UI is to create a more natural and intuitive user experience by minimizing the need for traditional graphical user interfaces (GUIs) or physical inputs like keyboards or touchscreens.

The term was coined by Fjord designer Andy Goodman who said, “Zero UI refers to a paradigm where our movements, voice, glances, and even thoughts can all cause systems to respond to us through our environment.”

“Looking to the future, the next big step will be for the very concept of the ‘device’ to go away.” – Sundar Pichai, CEO of Google

Evolution of UI and Movement Towards Zero UI

The first instance of what could be considered a “UI” was the text-based command line of the very early days of computing. You typed in commands and parameters that you had to memorize, which made computers difficult and unintuitive to use.

Then came the graphical user interface (GUI), introduced by Xerox PARC in the 1970s. This was a breakthrough in modern computing, and its evolved variations are still in use today. A GUI presents the user with an interface built on iconography and visual metaphors, such as desktops, recycle bins, and files, to represent what is on the computer.

Computers (and other electronic devices) have long used some variation of the GUI. The next big change in UI design came with the introduction and mainstream adoption of touchscreen mobile phones such as iPhones and Android devices.

Touchscreens allowed many new, natural interactions with interfaces, like swiping, scrolling, and pinch-to-zoom. However, they all still require you to interact with a screen, so attempts are now being made to build devices that eliminate the need for one.

Voice commands are now common, and even popular consoles like the PS5 and Xbox offer voice command features.

Other platforms, like the Nintendo Wii and its successors along with the Xbox Kinect, have experimented with motion controls with varying degrees of success (the Wii being the most successful). VR technology, still in its early days of development, hopes to bring a whole new layer of interactivity.

Judging by this pattern, Zero UI seems to be the natural next step in UI design. We are still heavily reliant on the GUI, but the technology industry has been experimenting with many of the building blocks that could guide us to the next step of UI evolution.

Examples of Zero UI in Action

While Zero UI is not yet the default way we interact with technology, plenty of devices that rely on it are already widely used. Here are some examples that indicate the future we are heading toward.

Day-to-day life

A lot of common devices in our everyday lives use Zero UI, probably without us even realizing it. One of the most common examples is the Amazon Echo. It allows users to control devices and applications with voice commands: you can use it to set timers, control lights, fans, or plugs in your house, and interact with other smart devices.
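To get a rough feel for what happens behind the scenes, here is a minimal sketch (in Python, with made-up phrases and device names) of how a voice assistant might map a recognized command to a smart-home action. Real assistants like the Echo do this through their own skills and smart-home APIs, so treat this purely as an illustration.

```python
# A hypothetical sketch of how a voice assistant might map a recognized
# phrase to a smart-home action. The phrases, device names, and "hub" call
# are invented for illustration; real assistants expose this through their
# own skills and smart-home APIs.

INTENTS = {
    "turn on the lights": ("lights", "on"),
    "turn off the lights": ("lights", "off"),
    "set a timer for ten minutes": ("timer", "10:00"),
}

def handle_utterance(text: str) -> str:
    """Match a transcribed voice command against known intents."""
    action = INTENTS.get(text.lower().strip())
    if action is None:
        return "Sorry, I didn't understand that."
    device, value = action
    # A real system would call the smart-home hub's API here.
    return f"OK, setting {device} to {value}."

print(handle_utterance("Turn on the lights"))   # OK, setting lights to on.
```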

Smartwatches are another common accessory that uses voice and gesture-based commands to perform tasks on a paired phone, such as making calls, reading notifications, and sending messages.

Health and wellbeing

The most crucial field that can benefit from Zero UI is medicine. It can automate scheduled medication, for example delivering insulin based on continuously monitored data, so patients no longer need to track every dose themselves. This gives them a new sense of freedom and saves them from constantly worrying about taking their medicine at the right time.

Non-obtrusive health monitoring devices are also a lifesaver for many patients. People at health risk can wear them like any other accessory while the devices track heart rate, blood pressure, sleep patterns, and other vitals. If they detect anything out of the ordinary that could be harmful, they can alert the wearer to take immediate action.
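To make the idea concrete, here is a minimal sketch of the kind of threshold check such a wearable might run over its readings. The vital-sign ranges below are assumptions picked for illustration, not medical guidance.

```python
# A hypothetical sketch of the threshold check a health wearable might run
# over its readings. The ranges are illustrative assumptions only.

NORMAL_RANGES = {
    "heart_rate_bpm": (50, 110),
    "systolic_bp_mmhg": (90, 140),
}

def check_vitals(readings):
    """Return an alert message for every reading outside its normal range."""
    alerts = []
    for name, value in readings.items():
        low, high = NORMAL_RANGES.get(name, (float("-inf"), float("inf")))
        if not low <= value <= high:
            alerts.append(f"{name} = {value} is outside the {low}-{high} range")
    return alerts

# A heart rate of 132 bpm would trigger an alert to the wearer.
print(check_vitals({"heart_rate_bpm": 132, "systolic_bp_mmhg": 118}))
```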

These monitoring wearables can also help people make informed decisions in their day-to-day lives. Based on the data, people can plan better diets, schedule regular exercise, fix their sleep schedules, and make many other proactive choices that lead to healthier lives.

Entertainment industry

The entertainment industry, more specifically the gaming industry, has seen various aspects of Zero UI being implemented with mixed results.

One of the best examples of this is the Nintendo Wii. There had been attempts at motion controls in gaming before, but the Wii was by far the most successful console to put motion controls at the center of its design.

Motion controls allowed gamers to play with gestures and physical movements that replicated the character's actions on screen, like swinging a sword or throwing a bowling ball.

Following the commercial success of the Wii, Microsoft and Sony also tried motion controls with their own peripherals: the Kinect for the Xbox 360 and the PlayStation Move for the PlayStation 3.

The most recent and successful dive into Zero UI is arguably Virtual Reality (VR). VR typically uses a headset to immerse the user in a virtual world. In video games and social chat spaces, users can interact with objects or characters in that world much as they would in real life.

There have also been attempts to play games using only the mind. This may sound like something out of a sci-fi setting, but Twitch streamer and psychology expert Perrikaryal recently demonstrated herself playing Elden Ring, a game notorious for its difficulty, using only her mind. She used an EEG device that reads brain activity and maps it to in-game actions.

You can hear about her process and setup HERE.
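To give a feel for how that can work, here is a minimal sketch of mapping classified brain-activity "events" to game inputs. The classifier stub, labels, and button bindings are invented for illustration; her actual setup is not public code.

```python
# A hypothetical sketch of mapping classified brain-activity "events" to
# game inputs, roughly the idea behind an EEG-controlled setup. The
# classifier stub, labels, and bindings are invented for illustration.

import random

ACTION_BUTTONS = {
    "imagine_push": "attack",
    "imagine_lift": "dodge",
    "rest": None,          # no input
}

def classify_window(eeg_window):
    """Stand-in for a trained classifier run over a short EEG window."""
    return random.choice(list(ACTION_BUTTONS))

def brain_to_input(eeg_window):
    """Turn the classified mental 'event' into a simulated button press."""
    label = classify_window(eeg_window)
    return ACTION_BUTTONS[label]

print(brain_to_input([0.0] * 256))   # e.g. "attack", "dodge", or None
```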

Automotive industry

The automotive industry has been making steady progress toward Zero UI, with a focus on self-driving cars. Companies like Tesla and BMW sell cars with self-driving features to consumers, while companies like Waymo and Hyundai provide driverless ride services: you can use an app to summon a driverless car that takes you from point A to point B.

It should be mentioned that these services currently operate in only a limited number of places, and experts and manufacturers are working to make them commonplace in the future.

While self-driving cars are an ambitious concept, they can be seen as the final step in automotive innovation, at least for now.

On a smaller scale, Zero UI can make a driver's life easier by automatically adjusting things like in-car temperature, seat settings, and music choices, letting them focus more on the road.

Zero UI Limitations

While zero UI sounds great on paper, certain limitations are holding it back. Some of them are:

Technological limitations

As much potential as Zero UI has, the simple truth is that we are limited by the technologies of our time. All the examples listed above sound nice, but they often fail to function as intended because the devices and the AI behind them cannot reliably pick up on the subtleties of human voices and gestures.

Differences in language and culture

There is also the matter of different practices in different cultures. Even within one language, there are so many accents and variations in everyday use that it is almost impossible to create a voice command device that can pick up on all of them.

A cowboy from Texas will not speak the same way as a football fan from Europe. This forces people to change their pronunciation into something the voice command device can understand, which defeats the purpose of Zero UI, where the system should be able to understand those subtle differences by itself.

Inaccuracies in reading movement

Devices that try to interpret simple gestures are also known to misread what a user is trying to convey. The same gesture is never performed exactly the same way by different people, so users end up awkwardly trying to “match” the required gesture, which is the opposite of natural human movement.
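To illustrate why this happens, here is a minimal sketch of a naive recognizer that compares a performed motion against a stored template with a strict tolerance. The template, traces, and threshold are made up, but they show how natural variation gets rejected.

```python
# A hypothetical sketch of why naive gesture recognition feels unnatural:
# the device compares a performed motion against a stored template and
# rejects anything outside a strict tolerance. All values are made up.

TEMPLATE_SWIPE = [0.0, 0.2, 0.5, 0.8, 1.0]   # the "required" swipe motion
TOLERANCE = 0.05                              # strict matching threshold

def distance(a, b):
    """Mean absolute difference between two motion traces of equal length."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def recognize(trace):
    return distance(trace, TEMPLATE_SWIPE) <= TOLERANCE

# A natural but slightly different swipe is rejected, so the user has to
# adjust their movement to "match" the template instead.
print(recognize([0.0, 0.3, 0.6, 0.9, 1.0]))   # False
print(recognize([0.0, 0.2, 0.5, 0.8, 1.0]))   # True
```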

Imprecise input reading

We have mentioned motion controls as examples, but the truth is they are simply not as responsive as traditional control methods. Both the Nintendo Wii and the Xbox Kinect were known for unresponsive controls, which made players revert to traditional controllers in later generations. Motion control came to be perceived more as a gimmick than a new way of playing.

These problems are tolerable in video games, where the worst-case scenario is that imprecise inputs lead to a bad gaming experience. They are not acceptable in more serious, real-life scenarios where the stakes are higher. Until more precision is achieved, this kind of input cannot be used for more practical cases.

Concerns with self-driving cars

Self-driving cars are a neat concept, but they are far from completely trustworthy. There are many ethical issues surrounding them, especially around unavoidable accidents, where it is unclear whether the software should prioritize saving the lives of the people inside the vehicle or those outside it. The technology is still at an experimental stage, with driverless car services running in only a limited number of cities.

Expensive to implement

The projects that come closest to fulfilling the promise of Zero UI are VR headsets. The only problem is that they are very expensive to develop and cost a lot of money to buy. As a result, VR is not yet widely used enough as a medium to drive rapid innovation.

The Future of Zero UI

As much as we speculate that Zero UI will be the future of technological advancement, we are still a long way from it. And as much as we aim to remove the concept of the device itself, devices will never truly go away.

The concept of Zero UI is still in its very early stages, and the available technology is not yet capable of making it the standard across the world.

However, as the years go by, innovations will keep appearing, and with how fast machine learning and AI are developing, it may not be long until Zero UI is used pretty much everywhere. We will be able to do common tasks the way Iron Man does in his movies.

Final thoughts

So, what is Zero UI? Now you know: a fascinating concept that may well be the way of the future. Despite the limitations of today's technology, it has shown great potential for innovation in the fields of UI and UX.

With promising concepts and potential innovations in the near future, there is no doubt we are headed towards a new horizon. When the time comes, everyone should grasp the opportunities zero UI gives us to build new things.

FAQs

Is Zero UI expensive to implement?

Zero UI is not necessarily expensive to implement (although it can be), but it requires specialized hardware and expertise that can be hard to acquire, let alone the resources to run all the necessary user tests. As a result, it can be a difficult and time-consuming undertaking, which is why it is currently attempted mostly by large companies with funds to spare for experimentation.

How long will it take for Zero UI to take over?

No one can say exactly how long it will take for Zero UI to take over, but judging by the current state of technology, it will be quite a number of years before that day comes (if it ever does). Of course, major innovations could accelerate the development and mainstream spread of Zero UI.