3 Different ways of system control
3.1 System control via gestures

Currently, the most widely used input devices for human–computer communication are the keyboard, the mouse, and the touch tablet. These devices are far from natural communication with a computer; rather, they represent human adaptation to the limitations of the machine. In recent years, a demand has emerged for humans to communicate with machines in the same way they communicate with each other: by speech, facial expressions, or gestures, since these channels convey far more information than peripheral devices can.

Gestures have found a natural place in our smartphones, tablets, computers, and similar devices. Their purpose is to simplify human–computer communication and control. A gesture can be touch-based or touchless, but the underlying principles remain the same.

Gestures can be divided into two basic categories according to user experience.

  • Innate gestures build on experience shared by all users, such as moving an object to the right by moving the hand to the right, or catching an object by closing the fingers. Naturally, innate gestures can be affected by habits or culture. Users do not need to study innate gestures in order to use them well; it is enough to demonstrate them once.
  • The second category comprises learned gestures, which must be deliberately taught to the user. Independently of this, gestures can also be divided into three categories based on the notion of motion (see the sketch after the figure below).
    • Static gestures are shapes formed by the gesturing limb that carry meaningful information. Recognition is ambiguous on two levels: the limb’s shape may be partially occluded, and at a higher level, the actual meaning of the gesture depends on local cultural conventions [1].
    • The second category, continuous gestures, serves as a basis for application interaction in which no specific pose is recognized; the movement alone conveys the meaning of the gesture [1].
    • Dynamic gestures consist of a specific, pre-defined movement of the gesturing limb. Such a gesture is used either to manipulate an object or to send a control command [1].
[Figure: Static gesture]
[Figure: Dynamic gesture]
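
To make the three motion-based categories concrete, the following minimal sketch contrasts how each would be handled in code. It assumes an upstream hand tracker that delivers per-frame palm and fingertip positions; all names, thresholds, and rules here are illustrative assumptions, not any particular library’s API.

    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import List, Tuple

    Point = Tuple[float, float, float]   # (x, y, z) in metres, assumed tracker output

    @dataclass
    class HandFrame:
        palm: Point                      # palm centre position for one camera frame
        fingertips: List[Point]          # five fingertip positions

    class GestureKind(Enum):             # the three categories described above
        STATIC = auto()                  # meaningful hand shape, no motion required
        CONTINUOUS = auto()              # raw movement drives the interaction directly
        DYNAMIC = auto()                 # specific pre-defined movement, e.g. a swipe

    def _dist(a: Point, b: Point) -> float:
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

    def classify_static(frame: HandFrame) -> str:
        """Static gesture: the shape of a single frame carries the meaning."""
        # Hypothetical rule: all fingertips close to the palm -> closed fist.
        spread = max(_dist(frame.palm, tip) for tip in frame.fingertips)
        return "fist" if spread < 0.05 else "open_hand"

    def track_continuous(prev: HandFrame, curr: HandFrame) -> Point:
        """Continuous gesture: no pose is recognized; the motion itself is
        the input (here, a palm displacement usable for dragging)."""
        return tuple(c - p for c, p in zip(curr.palm, prev.palm))

    def detect_dynamic(frames: List[HandFrame]) -> bool:
        """Dynamic gesture: a specific pre-defined movement, here a naive
        right-swipe test over the recent palm trajectory."""
        return frames[-1].palm[0] - frames[0].palm[0] > 0.20

Note how the static case inspects a single frame, the continuous case consumes every frame without ever naming a pose, and the dynamic case matches a whole trajectory against a pre-defined pattern.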

Using gestures for navigation and system control provides a Natural User Interface (NUI), which completely removes the dependency on mechanical devices such as a keyboard or mouse. The key contributor to NUI is touchless gesture control, which allows users to manipulate virtual objects much as they would physical ones. NUI lets users immerse themselves quickly in the ‘new world’ of applications they can control with minimal learning, which is especially important for AR/VR applications and ambient intelligence systems. In emerging applications such as autonomous drone control and in-car infotainment navigation, NUI can greatly increase usability [3].
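
As a rough illustration of such touchless manipulation, the sketch below continues the previous example: an innate “grab” (closed fist) attaches a virtual object to the hand, and the continuous palm movement then drags it, so the object behaves like a grasped physical one. The helpers and thresholds are the same hypothetical ones as above.

    @dataclass
    class VirtualObject:
        position: Point

    def manipulate(obj: VirtualObject, prev: HandFrame, curr: HandFrame) -> None:
        """While the hand forms a fist, move the object with the palm;
        opening the hand releases it, mimicking physical grasping."""
        if classify_static(curr) == "fist":          # innate "grab" gesture
            dx, dy, dz = track_continuous(prev, curr)
            x, y, z = obj.position
            obj.position = (x + dx, y + dy, z + dz)

Called once per tracker frame, this loop needs no learned commands at all: the user’s existing intuition about picking up and moving objects is the interface.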