Sign-Language using oneAPI

Joel John Joseph

Bengaluru, Karnataka

Creating a Sign Language Machine Learning Model for Accurate Recognition and Interpretation of ASL Gestures

Project status: Under Development

oneAPI, Artificial Intelligence

Intel Technologies
DevCloud, oneAPI, Intel Python, AI DevCloud / Xeon, Intel Opt ML/DL Framework

Overview / Usage

This project develops a machine learning model for precise identification and interpretation of American Sign Language (ASL) gestures. The goal is a system that helps deaf, speech-impaired, and non-signing users communicate with one another more effectively.

By enhancing their ability to express themselves, the project helps the speech-impaired and deaf communities communicate more effectively. The model uses a convolutional neural network (CNN) architecture to classify 28 types of ASL gestures in real time, with an accuracy of at least 90%.

This work can be applied in a variety of production settings, including real-time interpreting services, educational materials for learning sign language, and assistive technology devices for the deaf and speech-impaired communities. By removing communication barriers and advancing accessibility in technology, the project fosters a more inclusive society.

Methodology / Approach

The methodology consists of the following steps:

  1. Data collection: We gathered a large dataset of American Sign Language (ASL) gestures and their corresponding English words for training and testing the model.
  2. Data preprocessing: We resized and cropped the images to a uniform size, converted them to grayscale, and normalized the pixel values to improve model performance (a preprocessing sketch follows this list).
  3. Model training: We trained a convolutional neural network (CNN) to classify the ASL gestures, using transfer learning to fine-tune a pre-trained model for higher accuracy and shorter training time (see the training sketch below).
  4. Deployment: We deployed the model in a real-time application that captures and recognizes ASL gestures as they are signed. The predicted words are assembled into a sentence and displayed on the screen (a serving sketch appears after the framework list below).
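
As a rough illustration of the preprocessing step, the sketch below loads a gesture image, converts it to grayscale, resizes it, and normalizes the pixel values. This is a minimal sketch: the file path, the 64x64 target size, and the function name are illustrative assumptions, not the repository's actual code.

    # Minimal preprocessing sketch (paths and sizes are assumptions).
    import cv2
    import numpy as np

    IMG_SIZE = 64  # assumed uniform target size

    def preprocess(image_path):
        """Load a gesture image, grayscale it, resize it, and normalize it."""
        img = cv2.imread(image_path)                 # BGR, uint8
        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # single channel
        img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))  # uniform size
        img = img.astype(np.float32) / 255.0         # pixel values in [0, 1]
        return img[..., np.newaxis]                  # shape: (64, 64, 1)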
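
In the same spirit, here is a minimal Keras transfer-learning sketch for the training step: a pre-trained backbone fine-tuned for 28 gesture classes. MobileNetV2 as the backbone, the input size, the hyperparameters, and the data/train directory layout (one subfolder per class) are assumptions; the actual model in the repository may differ.

    # Transfer-learning sketch (backbone and hyperparameters are assumptions).
    import tensorflow as tf

    NUM_CLASSES = 28
    IMG_SHAPE = (96, 96, 3)  # pretrained backbones expect 3-channel input

    # Pre-trained feature extractor, frozen for the initial training phase.
    base = tf.keras.applications.MobileNetV2(
        input_shape=IMG_SHAPE, include_top=False, weights="imagenet")
    base.trainable = False

    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # map pixels to [-1, 1]
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Assumed directory layout: data/train/<class_name>/*.jpg
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "data/train", image_size=IMG_SHAPE[:2], batch_size=32)
    model.fit(train_ds, epochs=10)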

We used several frameworks and techniques in our development, including:

  1. TensorFlow: We used the TensorFlow framework to train and fine-tune our CNN model.
  2. Keras: We used the Keras API to build and train our CNN model.
  3. oneAPI: We used Intel oneAPI optimizations to shorten processing time and speed up results (one way to enable them is sketched after this list).
  4. Natural language processing: We used NLP techniques such as part-of-speech tagging and grammar rules to structure the predicted words into a grammatically correct sentence (see the tagging sketch below).
  5. Flask: We used the Flask framework to build a web application that lets users interact with the model in real time (a minimal serving sketch closes this section).
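
As a concrete, hedged example of the oneAPI item above: recent TensorFlow builds ship Intel's oneDNN-optimized CPU kernels, which can be toggled through an environment variable. The sketch below assumes a oneDNN-enabled TensorFlow build (stock TF 2.5+ or the intel-tensorflow package); the exact mechanism used in this project may differ.

    # Enable Intel oneDNN optimizations in TensorFlow (assumes a oneDNN-enabled
    # build; newer TensorFlow versions enable this by default on CPU).
    import os
    os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"  # must be set before importing TF

    import tensorflow as tf
    print(tf.__version__)  # CPU ops now use oneDNN kernels where available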
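
For the sentence-structuring step, the sketch below uses NLTK's part-of-speech tagger with a single toy grammar rule. The choice of NLTK and the rule itself are illustrative assumptions; the project's actual grammar handling is not shown here.

    # POS-tagging sketch with NLTK (library choice and rule are assumptions).
    import nltk
    nltk.download("punkt", quiet=True)
    nltk.download("averaged_perceptron_tagger", quiet=True)

    predicted_words = ["I", "go", "school"]  # hypothetical model output
    tags = nltk.pos_tag(predicted_words)     # e.g. [('I', 'PRP'), ('go', 'VBP'), ...]

    # Toy rule: insert "to" before a noun that directly follows a verb.
    sentence = []
    for i, (word, tag) in enumerate(tags):
        if tag.startswith("NN") and i > 0 and tags[i - 1][1].startswith("VB"):
            sentence.append("to")
        sentence.append(word)
    print(" ".join(sentence))  # "I go to school"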
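
Finally, a minimal Flask sketch of the real-time loop described in the methodology: a client (for example, an OpenCV capture loop or browser script) posts a frame, and the server returns the predicted gesture. The endpoint name, model path, label set, and preprocessing are all assumptions for illustration.

    # Minimal Flask serving sketch (endpoint, model path, and labels are assumptions).
    import cv2
    import numpy as np
    import tensorflow as tf
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    model = tf.keras.models.load_model("asl_cnn.h5")  # hypothetical saved model
    CLASSES = [chr(ord("A") + i) for i in range(26)] + ["space", "del"]  # assumed 28 labels

    @app.route("/predict", methods=["POST"])
    def predict():
        # Decode the uploaded frame and run it through the model.
        data = np.frombuffer(request.files["frame"].read(), dtype=np.uint8)
        frame = cv2.imdecode(data, cv2.IMREAD_COLOR)
        frame = cv2.resize(frame, (96, 96)).astype(np.float32)[np.newaxis, ...]
        probs = model.predict(frame, verbose=0)[0]
        return jsonify({"gesture": CLASSES[int(np.argmax(probs))]})

    if __name__ == "__main__":
        app.run(debug=True)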

Technologies Used

TensorFlow, Keras, AI, ML, CNN, DevCloud, oneDAL

Repository

https://github.com/JoelJJoseph/Sign-Language_oneApi
