Gesture Recognition System

Siddhant Agarwal

New Delhi, Delhi

Gesture recognition enables humans to communicate and interact naturally with machines (human-machine interaction, HMI) without any mechanical devices.

Artificial Intelligence


Description

This project introduces a hand gesture recognition system that uses only hand gestures to communicate with the computer. The algorithm is divided into three parts: preprocessing, segmentation, and feature extraction. In feature extraction, we find the moments of the gesture image, the centroid of the image, and Euclidean distances to determine the finger count. We use contours, the convex hull, and convexity defects to identify the hand gesture. Hand segmentation extracts the hand from the background; of the several available methods, the important steps here are transformation and thresholding. Segmentation partitions an image into distinct regions, each containing pixels with similar attributes. To be meaningful and useful for image analysis and interpretation, the regions should correspond closely to the depicted objects or features of interest. The BGR image captured by the camera is the input to the algorithm: it is converted to grayscale, the grayscale image is blurred to smooth the hand boundary, and the blurred image is then thresholded at a chosen value to produce a binary mask.

The project is a Linux-based application for live-motion gesture recognition, written in Python using webcam input. It combines live motion detection with gesture identification: the webcam captures the gesture the user performs, the application recognizes it against a set of known gestures, and it carries out the corresponding action. The application can run in the background while the user works in other programs, which makes it well suited to hands-free control. The following steps were taken to achieve the desired result:

  1. Capture frames and display them: import the libraries, create the camera object, then read and display the frames.
  2. Extract the region of interest (ROI), i.e. background subtraction: convert the frame from BGR to grayscale and threshold it.
  3. Find the contours and draw the convex hull.
  4. Find the convexity defects, then plot and display their number. Depending on the number of defects, perform the required function.

A simple gesture could pause or play a movie or raise the volume even while sitting far from the screen; one could scroll through an eBook or a presentation while having lunch.
