One of the greatest obstacles to wider adoption of natural user interfaces is their lack of usability and human-centered design. While other modalities (e.g. voice command navigation) have been adopted rather quickly, gesture recognition still cannot deliver a truly natural experience, especially on touch-less devices. Several factors determine how intuitive and natural gesture recognition feels. First, hardware limitations prevent sensing algorithms from recognizing finer details of gesture performance. This causes gestures to be recognized incorrectly and forces users to perform gestures that demand considerable effort and lack comfort. Second, the gesture sets currently proposed in touch-less systems are not inherently intuitive: system designers tend to work around sensor limitations by introducing gestures that are easily recognizable but often far from simple [2].
Apple TV
Apple TV captures gestures through its remote control, whose touch surface detects a variety of intuitive single-finger gestures. There are three types of gestures.
Swipe. Moves focus up, down, left, or right between items. Swiping lets the user scroll effortlessly through large volumes of content with movement that starts fast and then slows down, based on the strength of the swipe.
Click. Activates a control or selects an item. Clicking is the primary way of triggering actions. Clicking and holding is sometimes used to trigger context-specific actions. For example, clicking and holding an interface element may enter an edit mode.
Tap. Navigates through a collection of items one-by-one. In apps with standard interfaces based on UIKit, tapping different regions navigates directionally. For example, tapping the top of the touch surface navigates up. Some apps use tap gestures to display hidden controls.
Differentiate between click and tap, and avoid triggering actions on inadvertent taps. Clicking is a very intentional action, and is generally well-suited for pressing a button, confirming a selection, and initiating an action during gameplay. Tap gestures are fine for navigation or showing additional information, but keep in mind that the user may naturally rest a thumb on the remote, pick it up, move it around, or hand it to someone.
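The click-vs-tap guideline above can be sketched as a small event dispatcher. This is a minimal, hypothetical Python model (the event and class names are assumptions, not the tvOS API): a click always activates the focused item, while a tap only moves focus one item at a time and can therefore be treated as low-commitment navigation.

```python
from dataclasses import dataclass

@dataclass
class RemoteEvent:
    kind: str         # "click" or "tap" (hypothetical event model)
    region: str = ""  # for taps: "up", "down", "left", or "right"

class FocusEngine:
    """Illustrative model of the click-vs-tap distinction."""

    def __init__(self, items):
        self.items = items
        self.focused = 0

    def handle(self, event: RemoteEvent) -> str:
        if event.kind == "click":
            # Clicking is intentional: activate the focused item.
            return f"activated {self.items[self.focused]}"
        if event.kind == "tap":
            # Tapping only navigates, one item at a time, and never activates.
            if event.region == "down":
                self.focused = min(self.focused + 1, len(self.items) - 1)
            elif event.region == "up":
                self.focused = max(self.focused - 1, 0)
            return f"focus on {self.items[self.focused]}"
        return "ignored"

engine = FocusEngine(["Movies", "TV Shows", "Settings"])
print(engine.handle(RemoteEvent("tap", "down")))  # focus moves to "TV Shows"
print(engine.handle(RemoteEvent("click")))        # activates "TV Shows"
```

Because an accidental tap only shifts focus and never triggers an action, a user resting a thumb on the touch surface cannot start playback or press a button by mistake.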
SingleCue
The original Singlecue launched in late 2014 and offered an early glimpse of what could be possible with gesture control. About the size of an Xbox Kinect, that device worked via infrared, and allowed you to turn on a device with a wave, mute the volume by putting a finger to your lips, or switch between devices with a variety of other gestures.
The second-generation Singlecue builds upon that work by adding new gestures such as a wave of the hand, a pinch of a finger, and a palm click to make the device even more useful to the end user.
For example, playing and pausing video can now be controlled by opening and closing your hand, while volume can be controlled by moving a pinched finger from left to right. These gestures would work at any time, meaning the user wouldn’t necessarily need to be in a specific menu to access that functionality.
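A left-to-right volume control of this kind can be sketched as a simple linear mapping. The following Python fragment is an illustration, not Singlecue's actual algorithm; the coordinate range and the 0-100 volume scale are assumptions. The horizontal position of the pinched hand, clamped to the sensor's tracked range, is normalized and scaled to a volume level.

```python
def volume_from_pinch(x: float, x_min: float = -0.5, x_max: float = 0.5,
                      vol_max: int = 100) -> int:
    """Map the horizontal position of a pinched hand to a volume level.

    x is the hand's horizontal coordinate in the sensor's frame of view;
    the [x_min, x_max] range and the 0-100 volume scale are assumptions.
    """
    # Normalize to [0, 1], clamping positions outside the tracked range.
    t = (x - x_min) / (x_max - x_min)
    t = max(0.0, min(1.0, t))
    return round(t * vol_max)

print(volume_from_pinch(0.0))   # centre of the range -> 50
print(volume_from_pinch(0.5))   # far right -> 100
```

Clamping matters in practice: because the gesture works at any time rather than inside a dedicated menu, the hand will often drift past the tracked range, and the mapping should saturate rather than produce out-of-range values.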
Samsung TV / LG TV
Samsung TV uses simple gesture control to access favourite movies, sports, apps and other Smart Content on a Samsung Smart TV. Samsung uses a simple camera to monitor the area in front of the screen.
Users can set the remote aside and control TV functions with their hands, swiping to navigate and grabbing to select. Motion Control can be used to change the channel, adjust the volume, move the pointer, and control other TV functions. The supported gestures are swipe, zoom, like, and grab.
This set of gestures enables basic control of the Smart TV. Motion recognition may be limited by: