Currently, the most widely used input devices for human–computer communication are the keyboard, mouse, and touch tablet. These devices are far from the idea of natural communication with a computer and instead represent human adaptation to computer limitations. In recent years, a demand has emerged for humans to communicate with machines the same way they communicate with each other: by speech, facial expressions, or gestures, since these convey much more information than peripheral devices allow.
Gestures have naturally found their way into our smartphones, tablets, computers, and other devices. Their purpose is to simplify human–computer communication and control. Gestures can be touch-based or touchless, but the main principles remain the same.
Gestures can be divided into two basic categories according to user experience.
Using gestures for navigation and system control provides a Natural User Interface (NUI), which completely removes the dependency on mechanical devices such as a keyboard or mouse. The key contributor to NUI is touchless gesture control, which allows manipulating virtual objects in a way similar to physical ones. NUI lets users quickly immerse themselves in the 'new world' of an application and master its controls with minimal learning, which is very important for AR/VR applications and ambient intelligence systems. In burgeoning applications such as autonomous drone control and in-car infotainment navigation, NUI can greatly increase usability [3].
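At its core, gesture-based system control maps each recognized gesture to a control action. A minimal sketch of such a dispatch table is shown below; the gesture labels and action names are hypothetical placeholders for illustration, not taken from the source:

```python
# Minimal sketch of a touchless gesture-to-action dispatcher.
# All gesture labels and action names here are hypothetical examples.

GESTURE_ACTIONS = {
    "swipe_left": "previous_item",
    "swipe_right": "next_item",
    "pinch": "zoom_out",
    "spread": "zoom_in",
    "palm_open": "pause",
}

def dispatch(gesture: str) -> str:
    """Map a recognized gesture label to a system control action.

    Unknown gestures map to a no-op rather than raising, so spurious
    detections do not disrupt the interface.
    """
    return GESTURE_ACTIONS.get(gesture, "no_op")

if __name__ == "__main__":
    for g in ["swipe_right", "pinch", "wave"]:
        print(g, "->", dispatch(g))
```

In a real system, the `gesture` label would come from a recognition pipeline (e.g. a hand-tracking model), and the actions would drive the application's navigation or object-manipulation logic.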