Activity Feed


Siddhant A. added photos to project Gesture Recognition System


Gesture Recognition System

This project introduces a hand gesture recognition system that uses only hand gestures to communicate with the computer. The algorithm is divided into three parts: preprocessing, segmentation and feature extraction. In feature extraction, we find the moments of the gesture image, the centroid of the image and the Euclidean distance to determine the finger count. We make use of contours, the convex hull and convexity defects to identify the hand gesture. Hand segmentation is used to extract the hand from the background. There are several methods for segmentation; the important steps are transformation and thresholding. Segmentation partitions an image into distinct regions, each containing pixels with similar attributes. To be meaningful and useful for image analysis and interpretation, the regions should strongly relate to the depicted objects or features of interest. In this algorithm, the BGR image taken by the camera is the input. The BGR image is transformed into a grayscale image, the grayscale image is blurred to obtain a clean boundary, and the blurred image is thresholded at a particular value.

The project is a Linux-based application for live-motion gesture recognition using webcam input in Python. It combines live motion detection with gesture identification. The application uses the webcam to detect gestures made by the user and performs basic operations accordingly: the user performs a particular gesture, the webcam captures it, the application recognizes it against a set of known gestures and performs the corresponding action. The application can run in the background while the user runs other programs, which makes it useful for a hands-free approach. The following steps were followed to achieve the desired result:

1. Capture and display frames: import the libraries, create the camera object, read the frames and display them.
2. Extract the region of interest (ROI) via background subtraction: convert RGB to grayscale, then threshold the result.
3. Find the contours and draw the convex hull.
4. Find the convexity defects, then plot and display the number of defects. Depending on the number of defects, perform the required function.

A simple gesture could pause or play a movie or raise the volume, even from across the room. One could easily scroll through an eBook or a presentation while having lunch.


Siddhant A. created project Gesture Recognition System



Siddhant A. created project Drum Trees


Drum Trees

350 technologists, artists, designers and thinkers from all around India gathered in Gandhinagar, Gujarat, India for the 5th MIT Media Lab Design Innovation Workshop, held January 17-23, 2015 by the MIT Media Lab India Initiative. Those seven days gave each of us an enthralling experience. The week-long program was inspiring and focused on building things that matter. The workshop covered the following tracks: Civic Innovation, Synthetic Biology, Welspun Smart Textiles, Synchronous Tools, Immersive Storytelling, Sensors across Scales, Samsung Lifelong Learning, Creating Engaging Playful Experiences, Enabling Toys and Networked Playscapes.

I was selected from a pool of thousands of applicants for the Networked Playscapes track. The track focused on using design-thinking techniques and learning the basics of physical computing while imagining the playgrounds of the future. It also reflected on how the internet and ubiquitous computing are affecting and shaping the way we relate and communicate over distance, paying close attention to scale, reach, directionality, intentionality and the senses targeted: how far can my message get, and why? And yes, we used Arduinos and sensors to hack and prototype; XBee radios were a plus in this track!

In today's chaotic world, there is a real dearth of interaction among space, nature and people. 'Nature creates music of its own', but people are too busy to admire the beauty of such auditory interactions between nature and music. These interactions invite a feeling of togetherness among people.

'Drum Tree' is an engaging interaction between nature and people through music, bringing playfulness and recreation into your stressed life. Areas of the tree trunk are each mapped to a unique musical beat. When a part of the trunk is tapped, the drum beat can be heard by people at different locations, invoking playful creativity and a memorable opportunity to connect with nature.
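At its core, the trunk-to-beat mapping is a simple lookup from a tapped zone to a sound to broadcast. This is a hypothetical Python illustration (the actual prototype used Arduinos, sensors and XBee radios); the zone names and sample files are invented for the example:

```python
# Hypothetical mapping from trunk zones to drum samples.
TRUNK_ZONES = {
    "low": "kick.wav",
    "mid": "snare.wav",
    "high": "hi_hat.wav",
}

def on_tap(zone):
    """Return the drum sample to broadcast when a trunk zone is tapped,
    or None if the tap falls outside any mapped zone."""
    return TRUNK_ZONES.get(zone)
```

In the installation, each tap event detected by a sensor would trigger the corresponding sample on every networked node, so listeners at different locations hear the same beat.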

Collaborators: Abhishek (BBDNITM), Anil (PESIT), Joseph (IIT Kanpur), Koushik (IISc), Siddhant (UPES), Tanu (Samsung). Mentors: Alisha Panjwani and Edwina Portocarrero (MIT Media Lab).


Siddhant A. updated project Intel Student Developer Program and Intel Student Ambassador Program status


Siddhant Agarwal

Sun, 08/06/2017 09:04

Agenda:
  - Intel® Software Student Developer Program for AI
  - Intel Student Ambassador Program
  - Introduction to Intel® Nervana™ AI Academy for Students
  - Intel® Nervana™ AI Academy Student Workshop Overview
  - AI - The Next Wave & The Next Wave of Innovation by Intel
  - Intel Deep Dive Technical Training

Created awareness among and trained 1,500+ student developers from 15+ universities across India (Tier-1/Tier-2/Tier-3) so far on machine learning, deep learning and artificial intelligence. Encouraged them to apply for the Intel Student Ambassador program and to leverage the various hardware and software propositions Intel offers as part of this program.

We received participation from students of various academic departments - Computer Science, Information Technology, Electrical & Electronics, Electronics & Communication, and Mechanical Engineering - with over 40% women participation overall.



Siddhant A. added photos to project Intel Student Developer Program and Intel Student Ambassador Program


Intel Student Developer Program and Intel Student Ambassador Program

With the Intel® Software Student Developer Program, our goal is to drive awareness and adoption of AI at the academic level by showcasing the expertise, inspiration and innovation of students and Intel Student Ambassadors, and their successes with Intel architecture.

As part of the Intel® Software Student Developer Program, we also announced the Intel Student Ambassador Program for AI, an exciting new program for university students to engage with Intel around their work in machine learning, deep learning and artificial intelligence. Selected Intel Student Ambassadors act as key liaisons to Intel and are provided technical support, resources and marketing to advance their own work through Intel software, tools and hardware, along with support to host activities that promote their research role for six months to a year.

As part of the Intel AI-focused Student Readiness Campaign team, I evangelize the program on behalf of Intel to drive awareness and adoption of ML and AI at the academic level in universities across India.


About

Featured Projects


  • Collaborators 0
  • Followers 1


Drum Trees

Siddhant Agarwal

Created: 08/12/2017

'Drum Trees' is an engaging interaction between nature & people using music, exciting playfulness...

  • Members 312

DeepLearning

Practical Deep Learning and Machine learning Projects


Featured
  • Followers 1724

Intel RealSense™

Natural interaction, immersive, collaboration, gaming and learning, 3D scanning. Integrate tracki...

Featured
  • Followers 1746

Android

Intel is inside more and more Android devices, and we have tools and resources to make your app d...

Featured
  • Followers 1809

Modern Code

Drive faster breakthroughs through faster code: Get more results on your hardware today and carry...

Featured
  • Followers 1657

Networking

Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) are transforming the...

Featured
  • Followers 1581

Game Development

Upgrade your skills as a game developer, share your game projects, and connect with other develop...

Featured
  • Followers 1292

Virtual Reality

VR, AR, mixed reality...you'll find projects based on all these new platforms here. Share your own!

  • Projects 0
  • Followers 0

Jude Ben

Jude Ben is a software developer who has found his passion in artificial intelligence and deep learning. Jude has implemented various deep learning algorithms - neural networks, convolutional neural networks, recurrent neural networks and generative adversarial networks - using tools such as TensorFlow, Keras, Jupyter Notebook, NumPy and https://www.floydhub.com/. He blogs at https://medium.com/africa-ai. He is the lead deep learning research engineer at his startup (ShareQube), where he carries out research on precision agriculture with AI. He loves sharing his skills with others and facilitates direct interaction with developers at events in south-eastern Nigeria. He believes in giving back to his community, from whom he has learned so much.

Uyo, Nigeria

  • Projects 0
  • Followers 0

Michał Węgierski

Student, 20 years old. Artes Liberales (Liberal Arts), University of Warsaw (2016-...); Cognitive Science, University of Warsaw (2017-...)

Warsaw, Poland

  • Projects 0
  • Followers 0

Rushikesh Kinhalkar

Pune International Airport Area, Lohgaon, Pune, Maharashtra, India

  • Projects 0
  • Followers 0

mohammed Saket

My name is Mohammed Saket. I am pursuing a B.Tech from JECRC University, Jaipur.

Ajmer, Rajasthan, India

  • Projects 0
  • Followers 1

Michael Pickering

Mr. Pickering is leading the CloudConstable team competing in the IBM Watson AI XPRIZE. Our Grand Challenge is to protect families from cyber threats. As part of this project, the team is building a 'robot head' device that will serve as a social hub for families and include an AI assistant displayed as an animated avatar. The 'brain' for the device will be an Intel Compute Card, and an Intel RealSense camera will give it the ability to see!

Canada

  • Projects 2
  • Followers 1

Anubhav Singh

Full Stack Developer | Machine Learning researcher | Occasional Writer

Allahabad, Uttar Pradesh, India

  • Projects 0
  • Followers 0

Nivas R

Guntur, Andhra Pradesh, India

  • Projects 1
  • Followers 1

Douglas Castilho

Ph.D. Candidate in Computer Science and Computational Mathematics at the University of São Paulo, Brazil, working with deep learning for time-series prediction, stock markets and social networks.

R. Lázaro de Lima - Jardim Esperanca, Poços de Caldas - MG, 37713-132, Brazil
