American Sign Language (ASL) Classification Using Computer Vision & Intel® Realsense

Adam Milton-Barker

Bangor, Wales

Translating American Sign Language using Intel® RealSense™, UP Squared & Intel® Movidius.

Project status: Under Development

RealSense™, Internet of Things, Artificial Intelligence

Groups
Internet of Things, DeepLearning, Artificial Intelligence Europe, Movidius™ Neural Compute Group

Intel Technologies
AI DevCloud / Xeon, Intel Opt ML/DL Framework, Movidius NCS

Overview / Usage

American Sign Language (ASL) Classification Using Computer Vision & Intel® RealSense™ allows you to train a neural network on labelled American Sign Language images and use it to translate signs into words.
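
Below is a minimal sketch of how the labelled images could be organised and read for training, assuming a one-directory-per-word layout under a dataset/ folder; the layout and paths are illustrative only, not the project's official structure.

```python
# A minimal sketch, assuming a one-directory-per-word layout under "dataset/"
# (illustrative only, not the project's official structure), of reading
# labelled ASL images ready for training.
import os

DATASET_DIR = "dataset"  # assumed layout: dataset/<word>/<image>.jpg


def load_labelled_paths(dataset_dir=DATASET_DIR):
    """Return (image_path, word_label) pairs, one class per sub-directory."""
    samples = []
    for label in sorted(os.listdir(dataset_dir)):
        class_dir = os.path.join(dataset_dir, label)
        if not os.path.isdir(class_dir):
            continue
        for name in os.listdir(class_dir):
            if name.lower().endswith((".jpg", ".jpeg", ".png")):
                samples.append((os.path.join(class_dir, name), label))
    return samples


if __name__ == "__main__":
    data = load_labelled_paths()
    print("Loaded %d labelled images across %d classes"
          % (len(data), len({label for _, label in data})))
```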

Methodology / Approach

American Sign Language (ASL) Classification Using Computer Vision & Intel® RealSense™ uses the Intel® Movidius™ Neural Compute Stick and a custom-trained Inception V3 model to carry out image classification locally. IoT communication is powered by the iotJumpWay, connecting the classifier to other devices and applications.
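
The sketch below outlines that flow with the NCSDK v1 Python API: load the compiled graph onto the Neural Compute Stick, run a frame through it, and publish the predicted word. The graph file (asl_graph), label file (labels.txt), test image, broker and topic are assumptions, and plain paho-mqtt stands in for the iotJumpWay client, whose own API is not shown here.

```python
# Local classification on the Movidius NCS plus a generic MQTT publish.
# paho-mqtt is only a stand-in for the iotJumpWay client; file names,
# broker and topic are assumptions for illustration.
import cv2
import numpy as np
from mvnc import mvncapi as mvnc
import paho.mqtt.client as mqtt

GRAPH_PATH = "asl_graph"    # assumed: compiled NCS graph (see conversion sketch below)
LABELS_PATH = "labels.txt"  # assumed: one word per line, same order as the training classes
INPUT_SIZE = 299            # Inception V3 input resolution


def load_graph():
    """Open the first Neural Compute Stick and load the compiled graph onto it."""
    devices = mvnc.EnumerateDevices()
    if not devices:
        raise RuntimeError("No Movidius NCS device found")
    device = mvnc.Device(devices[0])
    device.OpenDevice()
    with open(GRAPH_PATH, "rb") as f:
        graph = device.AllocateGraph(f.read())
    return device, graph


def classify(graph, frame, labels):
    """Run one frame through the graph and return the top word and its score."""
    img = cv2.resize(frame, (INPUT_SIZE, INPUT_SIZE)).astype(np.float32)
    img = (img / 255.0 - 0.5) * 2.0          # scale to [-1, 1], as is typical for Inception V3
    graph.LoadTensor(img.astype(np.float16), None)
    output, _ = graph.GetResult()
    top = int(np.argmax(output))
    return labels[top], float(output[top])


if __name__ == "__main__":
    with open(LABELS_PATH) as f:
        labels = [line.strip() for line in f]

    device, graph = load_graph()
    client = mqtt.Client()
    client.connect("localhost", 1883)        # assumed local broker standing in for the iotJumpWay

    frame = cv2.imread("sample_sign.jpg")    # assumed test image; a RealSense/webcam frame in practice
    word, score = classify(graph, frame, labels)
    client.publish("asl/translation", "%s %.3f" % (word, score))
    print("Predicted word: %s (%.3f)" % (word, score))

    graph.DeallocateGraph()
    device.CloseDevice()
```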

The code and tutorial are currently being redeveloped and moved to a dedicated GitHub repository; links will be available soon. With this project I have been lucky enough to test a new dataset created by Intel® that has not yet been released. Full official details on the dataset, including when it will become available, will be provided soon. Watch this space :)

Technologies Used

Software Requirements:

  • Intel® NCSDK
  • Tensorflow 1.4.0
  • IoT JumpWay Python MQTT Client
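
A quick sanity check, assuming the requirements above, that the expected TensorFlow build and the NCSDK Python bindings are importable before training; the iotJumpWay client is not checked here because its import path is not listed above.

```python
# Verify that the software requirements listed above are importable on the
# training machine. The iotJumpWay client is intentionally not checked because
# its import path is not shown here.
import tensorflow as tf
from mvnc import mvncapi as mvnc

print("TensorFlow version:", tf.__version__)            # project targets 1.4.0
print("NCS devices detected:", len(mvnc.EnumerateDevices()))
```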

Hardware Requirements:

  • 1 x Intel® RealSense™ camera or compatible webcam
  • 1 x Intel® Movidius™ Neural Compute Stick
  • 1 x Linux device / Intel® AI DevCloud for training & converting the trained model to a Movidius-friendly graph (see the conversion sketch after this list)
  • 1 x Raspberry Pi 3 / UP Squared for the classifier / server
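
Converting the trained model on the Linux device / AI DevCloud is done with the NCSDK's mvNCCompile tool; below is a minimal sketch of that step, in which the checkpoint name, the input/output node names and the output file name are illustrative assumptions that depend on how the model was exported.

```python
# A minimal sketch of compiling a trained TensorFlow Inception V3 checkpoint
# into an NCS graph with the NCSDK's mvNCCompile tool. The checkpoint name,
# input/output node names and output path are illustrative assumptions.
import subprocess

subprocess.run(
    [
        "mvNCCompile",
        "model.ckpt.meta",                           # assumed exported TF meta graph
        "-in", "input",                              # assumed input node name
        "-on", "InceptionV3/Predictions/Reshape_1",  # assumed output node name
        "-s", "12",                                  # max number of SHAVE cores to use
        "-o", "asl_graph",                           # graph file loaded by the classifier
    ],
    check=True,
)
```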