AutoNavigator - A Self-Driving Car Prototype through Machine Learning

Saurabh Sukhatankar


Bengaluru, Karnataka


It's a self-driving car prototype built through machine learning: the remote-controlled car is trained to navigate on its own along the provided path towards the destination.

Project status: Published/In Market

Internet of Things, Artificial Intelligence

Groups
Artificial Intelligence India, DeepLearning

Intel Technologies
Intel GPA


Overview / Usage

Automation in existing technology has matured to a point at which exciting applications have become possible. The software industry has developed a variety of intelligent automation products and services, from home automation systems to Tesla and Google self-driving cars, which have eased everyday life considerably. This project demonstrates self-learning capability by embedding intelligence in a remote-controlled (RC) car to exhibit a prototype of a self-driving car, using a combination of advanced techniques: machine learning together with microcontroller, Android, and networking technologies. The model gives significant results, with a training accuracy of about 97.32% and a cross-validation accuracy of about 95.54% on a self-generated dataset.

We propose a model for end-to-end autonomous driving that takes in raw camera inputs and outputs driving actions, resulting in autonomous movement of the car on any path, i.e., in a changing environment. The model is able to handle partially observable scenarios. Moreover, we propose to integrate recent advances in deep reinforcement learning into this model and to embed sensor inputs so that only relevant information is extracted from the received sensor data, thereby making it suitable for real-time embedded systems.

  • The basic functioning of the system is as follows –

The remote-controlled car is used for autonomous navigation along any path on which it is placed. A smartphone camera holder, used to hold the image-capturing device (e.g., a smartphone), is mounted on top of the RC car. The image-capturing device sends images of the path on which the car navigates to the server, using an Android application such as NetCam. On the server, the machine learning model predicts the decision to move left, right, or forward, or to stop. This decision triggers the remote-control circuitry to move the car in the predicted direction. Thus the car auto-navigates! The loop is sketched below.
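The following is a minimal sketch of this capture-predict-actuate cycle, assuming a hypothetical /predict endpoint on the Flask server and placeholder addresses for the NetCam stream; none of these names are taken from the actual project code.

```python
# Minimal sketch of the capture -> predict -> actuate loop.
# STREAM_URL, PREDICT_URL, and the /predict endpoint are assumptions.
import time

import cv2
import requests

STREAM_URL = "http://192.168.0.101:8080/video"     # assumed NetCam stream address
PREDICT_URL = "http://192.168.0.100:5000/predict"  # assumed Flask endpoint

ACTIONS = {0: "left", 1: "right", 2: "up", 3: "down"}  # class -> command string

def run_loop():
    cap = cv2.VideoCapture(STREAM_URL)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Encode the current frame as JPEG and post it to the server.
        _, buf = cv2.imencode(".jpg", frame)
        resp = requests.post(PREDICT_URL, files={"image": buf.tobytes()})
        action = ACTIONS[int(resp.json()["class"])]
        # The server (or this client) forwards `action` to the Arduino,
        # which toggles the matching switch on the RC remote control.
        print("predicted action:", action)
        time.sleep(0.2)  # crude pacing between frames

if __name__ == "__main__":
    run_loop()
```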

Methodology / Approach

The working of the system can be divided into two phases –

  • Dataset generation (self-generated dataset) and training of machine learning model
  • Real-time testing and navigation of the RC car

  • Dataset generation (self-generated dataset) and training of machine learning model -

The remote-controlled car used in the system was trained using the arrow keys of the keyboard. The RC car was placed on a navigation path consisting of turns and straight stretches. At a specific instant of time, the image of the path beneath the car was captured by the image-capturing device placed on top of the car. For the same instant and position of the RC car, the decision for appropriate navigation was sent through the arrow keys via JavaScript: up to move forward, down to stop, left to take a left turn, and right for the car to turn right. When an arrow key was pressed, a string pertaining to the movement, i.e. "up", "down", "left" or "right", was sent to the microcontroller (Arduino board) through a wireless module. The microcontroller then sent a high/low signal to the appropriate ports (one of the four ports received a high signal and the other three received low signals) based on the decision provided to it. The outputs on the four Arduino ports, passed through four optocouplers, were used to control the toggle switches of the RC car's remote control, which moved the car. The captured image of the path and the decision made for the corresponding movement were stored on the server, building up the self-generated dataset. A sketch of this recording step appears below.
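The following is a rough sketch of how such a recording step might look, assuming a single Flask route that receives the current frame together with the arrow-key decision, and a pyserial link standing in for the wireless module; the route name, serial port, and file-naming scheme are all assumptions, not the project's actual code.

```python
# Rough sketch of the dataset-recording step. The /record route, serial
# port, and file naming are assumptions, not the project's actual code.
import os
import time

import serial
from flask import Flask, request

app = Flask(__name__)
arduino = serial.Serial("/dev/ttyUSB0", 9600)  # stands in for the wireless module
os.makedirs("dataset", exist_ok=True)

KEY_TO_LABEL = {"left": 0, "right": 1, "up": 2, "down": 3}  # instructionType codes

@app.route("/record", methods=["POST"])
def record():
    key = request.form["key"]            # "up" / "down" / "left" / "right"
    frame = request.files["image"].read()
    # Forward the movement string to the microcontroller, which raises one
    # of its four output ports high to toggle the matching RC switch.
    arduino.write((key + "\n").encode())
    # Store the frame with the decision appended to the file name,
    # mirroring the imageUrl_instructionType convention.
    fname = "dataset/%d_%d.jpg" % (time.time() * 1000, KEY_TO_LABEL[key])
    with open(fname, "wb") as f:
        f.write(frame)
    return "ok"
```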

The self-generated dataset consisted of images in a specific format: the decision that suited best at that instant of time was appended to the image name as imageUrl_instructionType, where instructionType is 0 if the car was made to turn left, 1 if it was made to turn right, 2 for forward movement, and 3 for stop. For example, a sample recorded as imageUrl_0 indicated that, for that image, moving left was the decision given through the arrow keys; similarly, imageUrl_1 indicated that moving right was the corresponding decision. Thus the dataset consisted of sample images together with the decision taken at the corresponding time as the label for each image.
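Loading such a dataset could look like the sketch below, which parses the trailing instructionType out of each file name; the folder layout and 64x64 image size are assumptions.

```python
# Sketch of loading the self-generated dataset from file names of the form
# <imageUrl>_<instructionType>.jpg (0=left, 1=right, 2=forward, 3=stop).
# Folder name and 64x64 image size are assumptions.
import glob
import os

import cv2
import numpy as np

def load_dataset(folder="dataset"):
    images, labels = [], []
    for path in glob.glob(os.path.join(folder, "*.jpg")):
        stem = os.path.splitext(os.path.basename(path))[0]
        labels.append(int(stem.rsplit("_", 1)[1]))   # trailing instructionType
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        images.append(cv2.resize(img, (64, 64)))
    return np.array(images), np.array(labels)
```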

The self-generated dataset was pre-processed: the images were blurred with appropriate noise-removal filters to suppress background noise. (For the sake of the prototype, the path used during the training phase had a black surface with white strips depicting the road boundaries.)
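A plausible pre-processing step with OpenCV is sketched below; the kernel size and threshold value are assumptions, chosen so the white boundary strips stand out against the black surface.

```python
# Sketch of the pre-processing step with OpenCV. Kernel size and threshold
# are assumptions, chosen for a black surface with white boundary strips.
import cv2

def preprocess(gray_img):
    # Noise-removal filter: Gaussian blur suppresses background noise.
    blurred = cv2.GaussianBlur(gray_img, (5, 5), 0)
    # Binarize so the white road-boundary strips become 255, the surface 0.
    _, binary = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)
    return binary
```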

Further, a machine learning model was designed and programmed on the server to classify the images in the dataset into four classes, with a multilayer perceptron architecture forming the basis of the model. The pre-processed dataset was divided into 60% for training, 20% for cross-validation, and the remaining 20% for testing. After hyper-parameter tuning, the trained model achieved a training accuracy of 97.32% and a cross-validation accuracy of 95.54%. The resulting four-class multilayer perceptron classifier was pickled for real-time predictions.
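The write-up names the architecture but not the library, so the sketch below uses scikit-learn's MLPClassifier as one plausible implementation of the 60/20/20 split, training, and pickling; the hidden-layer sizes and iteration count are assumptions.

```python
# Sketch of the four-class MLP training with scikit-learn (an assumed
# library choice). Hidden-layer sizes and max_iter are assumptions; the
# split follows the 60/20/20 scheme described above.
import pickle

from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def train(images, labels):
    X = images.reshape(len(images), -1) / 255.0      # flatten and scale pixels
    # 60% training, then split the rest evenly into 20% CV and 20% test.
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, labels, train_size=0.6, random_state=42)
    X_val, X_test, y_val, y_test = train_test_split(
        X_rest, y_rest, test_size=0.5, random_state=42)

    model = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500)
    model.fit(X_train, y_train)
    print("training accuracy:        ", model.score(X_train, y_train))
    print("cross-validation accuracy:", model.score(X_val, y_val))

    # Pickle the trained model for real-time predictions on the server.
    with open("mlp_model.pkl", "wb") as f:
        pickle.dump(model, f)
    return model, model.score(X_test, y_test)
```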

  • Real-time testing and navigation of the RC car –

For real-time autonomous navigation of the RC car, a single image of the path was captured by the image-capturing device mounted above it (the smartphone camera). This image was sent to the server via a REST API using the Android application NetCam. The server loaded the pre-trained model, pre-processed the image, and fed it to the model for prediction. The output of the model was the corresponding class, i.e., the action to be taken: if the output was 0, turning left was the action predicted for that image (that particular position of the car); similarly, outputs of 1, 2, and 3 meant turning right, moving forward, and stopping, respectively.
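A server-side prediction route in Flask could look like the following sketch, which loads the pickled MLP and returns the predicted class for a posted frame; the route name and 64x64 input size are assumptions.

```python
# Sketch of the server-side prediction route in Flask, loading the pickled
# four-class MLP. Route name and 64x64 input size are assumptions.
import pickle

import cv2
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)
with open("mlp_model.pkl", "rb") as f:
    model = pickle.load(f)   # pre-trained multilayer perceptron

@app.route("/predict", methods=["POST"])
def predict():
    data = np.frombuffer(request.files["image"].read(), np.uint8)
    gray = cv2.imdecode(data, cv2.IMREAD_GRAYSCALE)
    x = cv2.resize(gray, (64, 64)).reshape(1, -1) / 255.0
    cls = int(model.predict(x)[0])   # 0=left, 1=right, 2=forward, 3=stop
    return jsonify({"class": cls})
```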

The output was sent to the microcontroller (Arduino) through the wireless module: "left" if the predicted class was 0, and similarly "right", "up", or "down" if the predicted class was 1, 2, or 3, respectively. As in the training phase, the microcontroller drove a high signal on only one of its four output ports and low signals on the remaining three. The Arduino outputs, passed through the optocouplers, toggled the switches (left/right/up/down) of the RC car's remote control. With the appropriate input to the remote control, the car auto-navigated, i.e., turned left or right, moved forward, or stopped for that instant of time. The image for the next instant was then captured and sent to the server for the next navigation decision. In this way the car navigated itself.
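Forwarding the predicted class to the Arduino could look like the sketch below, again with pyserial standing in for the wireless module; the port name, baud rate, and newline framing are assumptions, while the command strings match the training phase.

```python
# Sketch of forwarding the predicted class to the Arduino, with pyserial
# standing in for the wireless module. Port and baud rate are assumptions;
# the command strings match those used during training.
import serial

CLASS_TO_COMMAND = {0: "left", 1: "right", 2: "up", 3: "down"}

def actuate(predicted_class, port="/dev/ttyUSB0", baud=9600):
    link = serial.Serial(port, baud)
    # The Arduino decodes the string, drives one of its four output ports
    # high, and the optocoupler toggles the matching RC remote switch.
    link.write((CLASS_TO_COMMAND[predicted_class] + "\n").encode())
    link.close()
```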

Technologies Used

  • Technology Stack -
  1. Hardware - Laptop, Remote Controlled Car, Remote Control, Smartphone, Camera holder, Arduino board, Connecting wires, PCB and Optocouplers
  2. Scripting libraries/frameworks - JavaScript, Flask, Embedded C (Arduino)
  3. Software Apps/IDEs - NetCam (Android app for live streaming of the path), Arduino IDE, PyCharm
  4. Machine Learning and Image Processing Libraries - Multilayer Perceptron (MLP) model, OpenCV

Repository

https://github.com/SukhatankarSV/AutoNavigator
