Gesture-Based Computing

Future Technology That Will Change the World...
Whenever a movie or television show wants to depict something futuristic, chances are it uses gesture-based technology in one form or another: computers projected into the middle of a room and manipulated by the user like a conductor directing an orchestra, or technology linked to eye movements that adjusts to the way the human body works. This has long been the film-making world's idea of the future. However, as you're probably aware, this technology already exists. If you have a smartphone, you're already using it, and likewise with gaming consoles: shaking a phone to activate certain functions, or playing Xbox games that use cameras and sensors to enhance gameplay by tracking movement and gestures.

The other major industry where gesture-based technology is playing a growing role is automotive. So far its use has mostly been confined to temperature controls or things like music volume, but the use of gestures in the actual driving process is right around the corner. Incorporating human gestures into the technology we use is the natural progression that many of our devices and gadgets have long sought. It's how we imagine the future: a place where technology helps improve human lives and is seamlessly incorporated into our homes and workplaces, becoming a kind of extension of ourselves. When you witness just how attached some people are to their smartphones, it's easy enough to believe that the assimilation has already begun.

       Gesture Types

  • Offline gestures: Gestures that are processed after the user's interaction with the object. An example is a gesture to activate a menu.
  • Online gestures: Direct-manipulation gestures, used for example to scale or rotate a tangible object.
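
To make the distinction concrete, here is a toy sketch in Python. All class, method and gesture names are invented for illustration: the offline gesture is interpreted once, after the stroke completes, while the online gesture updates application state on every tracking frame.

```python
class GestureHandler:
    """Toy dispatcher contrasting the two gesture types: an offline gesture
    fires once after the stroke completes, an online gesture updates state
    continuously while it is in progress."""

    def __init__(self):
        self.menu_open = False   # toggled by an offline gesture
        self.scale = 1.0         # driven by an online pinch gesture

    def on_stroke_end(self, gesture_name):
        """Offline: interpreted only after the interaction finishes."""
        if gesture_name == "circle":
            self.menu_open = not self.menu_open

    def on_pinch_update(self, finger_distance, start_distance):
        """Online: direct manipulation, applied on every tracking frame."""
        self.scale = finger_distance / start_distance
```

For example, drawing a circle and lifting the hand would open the menu once, whereas spreading two fingers apart would rescale the object continuously as the fingers move.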

Gesture Recognition: Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. It is a subdiscipline of computer vision. Gestures can originate from any bodily motion or state, but commonly originate from the face or hands. Current focuses in the field include emotion recognition from the face and hand gesture recognition. Users can use simple gestures to control or interact with devices without physically touching them. Many approaches use cameras and computer vision algorithms to interpret sign language. However, the identification and recognition of posture, gait, proxemics, and human behaviours are also the subject of gesture recognition techniques.
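
One simple way such "mathematical algorithms" can work is template matching over resampled strokes, in the spirit of classic unistroke recognizers. The sketch below is a minimal, self-contained illustration; the function names, point count and templates are assumptions for this example, not any product's API.

```python
import math

def resample(points, n=32):
    """Resample a stroke to n roughly evenly spaced points along its path."""
    dists = [math.dist(points[i - 1], points[i]) for i in range(1, len(points))]
    interval = sum(dists) / (n - 1)
    resampled = [points[0]]
    pts = list(points)
    acc = 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= interval:
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            resampled.append(q)
            pts.insert(i, q)   # the new point starts the next segment
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(resampled) < n:  # guard against floating-point shortfall
        resampled.append(points[-1])
    return resampled[:n]

def normalize(points):
    """Translate the centroid to the origin and scale to a unit bounding box."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - cx) / scale, (y - cy) / scale) for x, y in points]

def path_distance(a, b):
    """Mean point-to-point distance between two equal-length strokes."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def recognize(stroke, templates):
    """Return the name of the template gesture closest to the input stroke."""
    s = normalize(resample(stroke))
    return min(templates,
               key=lambda name: path_distance(s, normalize(resample(templates[name]))))
```

With a couple of stored templates, say a straight line and a V shape, `recognize` labels a noisy input stroke by whichever template it lies closest to after resampling and normalisation.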


Human-computer interaction using gestures:

  • Replace the mouse and keyboard
  • Pointing gestures
  • Navigate in virtual environments
  • Interact with a 3D world
  • No physical contact with the computer
  • Communicate at a distance
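
The first two points above, replacing the mouse with a pointing gesture, boil down to mapping a tracked fingertip position into screen coordinates and smoothing out hand jitter. A minimal sketch, assuming normalized camera coordinates and an invented class name (the resolution and smoothing constant are illustrative):

```python
class GestureCursor:
    """Map a tracked fingertip position (normalized camera coordinates,
    0..1 with origin top-left) to screen pixels, smoothing the result
    to suppress hand jitter."""

    def __init__(self, screen_w=1920, screen_h=1080, alpha=0.3):
        self.w, self.h = screen_w, screen_h
        self.alpha = alpha          # smoothing factor: lower = steadier but laggier
        self.x = self.y = None      # last smoothed position

    def update(self, nx, ny):
        """Feed one tracked point; return the smoothed cursor pixel position."""
        tx, ty = nx * self.w, ny * self.h   # scale to screen
        if self.x is None:                  # first frame: no history yet
            self.x, self.y = tx, ty
        else:                               # exponential moving average
            self.x += self.alpha * (tx - self.x)
            self.y += self.alpha * (ty - self.y)
        return round(self.x), round(self.y)
```

The exponential moving average is the simplest choice here; real systems often use heavier filters (e.g. Kalman or One Euro) for the same purpose.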


Technologies: Two common technologies for hand gesture recognition are:

Glove-based method:

  • Uses a special glove-based device to extract hand posture.
  • Intrusive and annoying to wear.

Vision-based method:

  • 3D hand/arm modeling
  • Appearance modeling

Data gloves:

  • Data gloves provide very accurate measurements of hand shape
  • But they are cumbersome
  • Expensive
  • Connected by wires, which restricts freedom of movement
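
What a data glove actually reports is typically a raw flex-sensor reading per finger, which must be calibrated into joint angles before a posture can be recognised. A hedged sketch of that pipeline — the calibration constants, thresholds and posture labels below are hypothetical; a real glove needs per-sensor calibration:

```python
def sensor_to_angle(raw, raw_straight=200, raw_bent=800, max_angle=90.0):
    """Linearly map a flex-sensor ADC reading to a joint angle in degrees.
    The calibration values are hypothetical examples."""
    t = (raw - raw_straight) / (raw_bent - raw_straight)
    return max(0.0, min(max_angle, t * max_angle))

def classify_posture(finger_angles, bent_threshold=45.0):
    """Crude posture labels from five finger-bend angles (thumb..pinky)."""
    bent = [a > bent_threshold for a in finger_angles]
    if all(bent):
        return "fist"
    if not any(bent):
        return "open hand"
    if bent == [True, False, True, True, True]:
        return "pointing"   # only the index finger extended
    return "other"
```

This also illustrates why gloves are accurate: each joint is measured directly, with no occlusion or lighting problems — at the cost of the wiring and calibration listed above.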
Since initial research in the 1980s, gesture control technology has come a long way. In the last few years, advanced commercial gesture control products, including Microsoft Kinect, Leap Motion and Myo, have been used in countless areas, including the entertainment, consumer electronics, automotive, healthcare and education sectors. In this paper, our discussion has focused on the education sector.

The Microsoft Kinect experiment looked at the potential value of using Kinect in the lecture room to deliver PowerPoint presentations. We tested its feasibility and reliability in controlling MS PowerPoint slides using voice and gesture control. In the Leap Motion experiment, we evaluated a range of Leap Motion applications and looked at potential areas for using Leap Motion in the university. The Leap Motion evaluation focused on applications in computer control, education and scientific creative tools. The aim of the Myo experiment was to provide an interactive user experience in observing 3D objects. We used the Myo armband to create a software prototype to control 3D objects by rotating, zooming, moving and navigating.

Based on the experiment results and the technical differences between these three products, a SWOT analysis was carried out to define the best user scenarios for each individual product in the higher education environment. Microsoft Kinect has more functions (i.e. face, body and voice recognition), making it more suitable for designing educational games involving whole-body movements. However, it requires space to detect body movement. Leap Motion can detect fine finger and hand movements, and the recent release of its Orion product opens new doors, connecting gesture-based computing with AR and VR. Myo is user-specific: due to the customisation and training needed to use the device, we consider it more of a personal device. Gesture control technology shows great potential in education.
At this stage, however, practical usage of the technology in teaching and learning activities is not widely established, for two reasons. Firstly, educational content differs from subject to subject. Using a gesture control product in academia requires extensive customisation of software, often requiring developers, and in most cases the university does not have the resources to support such work. Secondly, the effectiveness of gesture control products still needs improving. It takes time to calibrate a product, and in order to control it, users need to spend time training and practising. Gesture control products are not yet as intuitive as they claim to be, or have the potential to be. To adopt gesture control technology in education, we will need specialised applications developed for educational content, which would save the cost of individual development. Such an application should be easy to set up and use, and should provide accurate gesture control. Furthermore, the university should install gesture control products and support their configuration through training for staff and students. Nevertheless, the technology is in continual development, so the adoption of a more advanced version could certainly benefit the University of Birmingham in the near future.
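
The Myo prototype described above — rotating a 3D object from arm gestures — can be reduced to mapping per-frame hand displacement to small incremental rotations. The sketch below assumes a particular axis convention (yaw about y, then pitch about x) and an invented sensitivity constant; it is an illustration of the idea, not the prototype's actual code.

```python
import math

def yaw_pitch_to_matrix(yaw, pitch):
    """Build a 3x3 rotation matrix: yaw about the y axis, then pitch about
    the x axis, both in radians (R = Rx(pitch) @ Ry(yaw))."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    return [
        [cy,       0.0,      sy],
        [sp * sy,  cp,  -sp * cy],
        [-cp * sy, sp,   cp * cy],
    ]

def apply(matrix, v):
    """Rotate a 3-vector by the matrix."""
    return tuple(sum(matrix[i][j] * v[j] for j in range(3)) for i in range(3))

def hand_delta_to_rotation(dx, dy, sensitivity=0.01):
    """Map a per-frame hand displacement (e.g. horizontal/vertical motion
    units from the tracker) to a small incremental rotation.
    The sensitivity constant is an assumed tuning value."""
    return yaw_pitch_to_matrix(dx * sensitivity, dy * sensitivity)
```

Applying such a small rotation to every vertex each frame, while the gesture is in progress, gives exactly the kind of continuous "online" manipulation discussed earlier.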

Uses of gesture-based computing:

  • Sign language recognition
  • For socially assistive robotics
  • Directional indication through pointing 
  • Control through facial gesture 
  • Alternative computer interface 
  • Remote control computing 

Thank You...



