Self Driving Bot using TensorFlow Object Detection

Risab Biswas

Jalpaiguri, West Bengal

An autonomous self-driving bot that closely mimics a self-driving car. It can detect real-time obstacles such as cars, buses, trucks, and people in its surroundings and take decisions accordingly. The backend comprises OpenCV and Intel-optimised TensorFlow. This is just the first iteration; we are working on subsequent iterations as well. Link to the demo: https://www.linkedin.com/feed/update/urn:li:activity:6399905115632955392/

Project status: Under Development

Robotics, RealSense™, Artificial Intelligence

Intel Technologies
Intel Opt ML/DL Framework

Overview / Usage

An autonomous self-driving bot that closely mimics a self-driving car. It can detect real-time obstacles such as cars, buses, trucks, and people in its surroundings and take decisions accordingly. The backend comprises OpenCV and Intel-optimised TensorFlow. This is just the first iteration; we are working on subsequent iterations as well.

Methodology / Approach

The project aims to build an autonomous robot using the UP Squared board, an Intel-based mini PC, as its processing board. The board runs Ubuntu as its operating system. An HD USB camera mounted on top of the robot acts as its visual sensing device and captures obstacles from the real world. The video feed from this camera is the main input to the back-end algorithm.
The robot is capable not only of detecting obstacles around it but also of recognizing what each object is. This gives the bot the ability to avoid obstacles occurring in its path and to move along an obstacle-free path. The USB camera module detects obstacles in real time; an image-processing algorithm identifies each obstacle and feeds the result back to the UP Squared board, which then changes the robot's path and diverts it onto an obstacle-free one.
Every action taken by the robot depends on the obstacle's position and the distance between the bot and the obstacle. No sensor other than a single camera is used for detection, recognition, and avoidance.
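A minimal sketch of this capture-detect-steer loop is given below. It assumes a TF1-style frozen object-detection graph with the standard TensorFlow Object Detection API tensor names; the model path, score threshold, and motor helper functions are illustrative placeholders, not the project's actual code.

    # Illustrative capture -> detect -> steer loop (placeholder names, not the project's code).
    import cv2
    import numpy as np
    import tensorflow as tf

    GRAPH_PATH = "frozen_inference_graph.pb"                   # placeholder model path
    OBSTACLES = {1: "person", 3: "car", 6: "bus", 8: "truck"}  # COCO class IDs

    def drive_forward():   # placeholder motor helpers; the real bot drives
        print("forward")   # motor pins through the board's GPIO
    def turn_left():
        print("left")
    def turn_right():
        print("right")

    # Load the frozen detection graph once at start-up.
    graph = tf.Graph()
    with graph.as_default():
        graph_def = tf.compat.v1.GraphDef()
        with tf.io.gfile.GFile(GRAPH_PATH, "rb") as f:
            graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name="")

    cap = cv2.VideoCapture(0)  # USB camera mounted on top of the robot

    with tf.compat.v1.Session(graph=graph) as sess:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)       # the detector expects RGB
            boxes, scores, classes = sess.run(
                ["detection_boxes:0", "detection_scores:0", "detection_classes:0"],
                feed_dict={"image_tensor:0": np.expand_dims(rgb, axis=0)})

            # Keep confident detections of the obstacle classes we care about.
            hits = [box for box, s, c in zip(boxes[0], scores[0], classes[0])
                    if s > 0.5 and int(c) in OBSTACLES]

            if hits:
                # box = [ymin, xmin, ymax, xmax] in normalised coordinates;
                # steer away from the side the obstacle occupies.
                _, xmin, _, xmax = hits[0]
                turn_right() if (xmin + xmax) / 2 < 0.5 else turn_left()
            else:
                drive_forward()                                # path is clear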

The robot is completely autonomous, acting without any human intervention. This project is a kick-start to a long journey into computer vision, machine learning, and robotics.

Identification of Need/Usage:-

In 2015, 35,092 people died in car accidents in the United States. Someone dies roughly once every 88 million miles driven. That gives you about a 0.011% chance of dying in a car accident in any given year, or 0.88% over your lifetime. Most people maintain about 150 relationships, which means you will probably lose someone you know to a car accident.
2.6 million people are injured in vehicles every year. That amounts to billions of dollars in car repairs - in deductibles alone. If we could cut this by 25%, the savings would be better than a typical tax break.
By comparison, self-driving cars are already at least that much safer. Some studies show that self-driving cars may get into more accidents at low speeds (4 mph or below) because they follow traffic laws more carefully, but the consequences of such collisions are nearly negligible.
The top 4 causes of accidents are:

  1. Distraction
    Self-driving cars are dedicated to driving and can notice more, from all angles, and react more quickly. No amount of text messages or hamburgers will have any effect on the car’s ability to stay focused on the road.
  2. Speeding
    Self-driving cars don’t care about your appointments, frustrations or stop-and-go traffic. They can be set to obey the law, respect road signs, and they will never lose track of how fast they are and should be going.
  3. Drunk Driving
    Self-driving cars don't drink; impaired driving simply isn't a factor for them.
  4. Recklessness
    Self-driving cars have no ego and no interest in taking risks.
    Also, they're incredibly convenient. Humans like being distracted: we have so much to do, and stop-and-go traffic is monotonous and frustrating, especially when we have a life to manage. Riding in a self-driving car is a more enjoyable experience in addition to being a safer one.
    And last, we can streamline our transportation situation with smarter carpooling, higher speed limits, and the elimination of traffic lights.

Conclusion:-

Fully automated Self Drive Vehicles (“faSDVs”) are one of just a few known things that will visibly change the look and feel of this century in simple and profound ways (when they become dominant around mid-century or earlier). Today we tolerate 30,000+ deaths, hundreds of thousands of injuries and tens of billions in property damage each year because the value of individual transport is so high. faSDVs let us keep the value while eliminating 80% to 98% of the costs in death, injury, and damage.
Tens of millions of people spend hundreds of hours per year commuting. At best this time is unproductive; often it is stressful. The faSDV turns it into productive work time or leisure time. This reallocation of time has huge societal and personal value. Ditto for those who travel during the work day (like repair and service people).
The faSDV technology will eliminate millions of jobs in trucking, taxi driving, and warehouse operations (a warehouse robot is just another faSDV). This will bring real savings and convenience to businesses and consumers, including in some ways we don't even consider today. It will also bring huge social unrest when millions of workers have no semi-skilled job to fall back on.
About a third of commercial spaces (or more) will be emptied.
Residential areas will become more suburban, but different: spaces configured without garages and parking will likely be prettier and more functional. Lifestyles will change when the elderly can stay in their homes even though they can no longer safely drive. Parents will love sending the kids to after-school activities in an faSDV (no more carpool driving).
Businesses will change as people begin to sleep in their moving faSDVs on the way to their destinations.

Technologies Used

• OpenCV on Intel Optimised Python –

OpenCV is a cross-platform library of programming functions aimed at real-time computer vision. It was designed for computational efficiency and with a strong focus on real-time applications, video, and image processing. Python is a widely used general-purpose, high-level programming language. Its design philosophy emphasizes code readability, and its syntax allows programmers to express concepts in fewer lines of code than would be possible in languages such as C.
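As a small, hypothetical example of the kind of real-time pipeline OpenCV enables, the snippet below reads frames from a USB camera (assumed to be device 0) and applies a simple pre-processing step:

    import cv2

    cap = cv2.VideoCapture(0)                           # open the USB camera (device 0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # example pre-processing step
        cv2.imshow("camera feed", gray)
        if cv2.waitKey(1) & 0xFF == ord("q"):           # press 'q' to quit
            break
    cap.release()
    cv2.destroyAllWindows()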

• Intel Optimised TensorFlow –

TensorFlow is an open-source software library for dataflow programming across a range of tasks. It is a symbolic math library and is also used for machine learning applications such as neural networks. It is used for both research and production at Google, often replacing its closed-source predecessor, DistBelief.
TensorFlow was developed by the Google Brain team for internal Google use. It was released under the Apache 2.0 open source license on November 9, 2015.
TensorFlow is Google Brain's second-generation system. Version 1.0.0 was released on February 11, 2017. While the reference implementation runs on single devices, TensorFlow can run on multiple CPUs and GPUs (with optional CUDA and SYCL extensions for general-purpose computing on graphics processing units). TensorFlow is available on 64-bit Linux, macOS, Windows, and mobile computing platforms including Android and iOS.
TensorFlow computations are expressed as stateful dataflow graphs. The name TensorFlow derives from the operations that neural networks perform on multidimensional data arrays; these arrays are referred to as "tensors".
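As a toy illustration of tensors flowing through operations (written here in modern TensorFlow's eager style; the project itself follows the TF1 graph-and-session style):

    import tensorflow as tf

    # Two small tensors (multidimensional arrays)...
    a = tf.constant([[1.0, 2.0],
                     [3.0, 4.0]])
    b = tf.constant([[5.0],
                     [6.0]])

    # ...flowing through a matrix-multiplication operation.
    c = tf.matmul(a, b)
    print(c.numpy())   # [[17.], [39.]]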

• MQTT –

MQTT stands for MQ Telemetry Transport. It is an extremely simple and lightweight publish/subscribe messaging protocol, designed for constrained devices and low-bandwidth, high-latency or unreliable networks. The design principles are to minimize network bandwidth and device resource requirements whilst also attempting to ensure reliability and some degree of assurance of delivery. These principles also turn out to make the protocol ideal for the emerging “machine-to-machine” (M2M) or “Internet of Things” world of connected devices, and for mobile applications where bandwidth and battery power are at a premium.
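A minimal publish/subscribe sketch using the paho-mqtt client is shown below; the broker address, topic name, and payloads are assumptions made for illustration, and the callback signatures follow the paho-mqtt 1.x API:

    import paho.mqtt.client as mqtt   # paho-mqtt 1.x style callbacks

    BROKER = "localhost"              # assumed broker address
    TOPIC = "bot/steering"            # hypothetical topic name

    # Subscriber: e.g. the motor-control process listening for commands.
    def on_connect(client, userdata, flags, rc):
        client.subscribe(TOPIC)

    def on_message(client, userdata, msg):
        print("received:", msg.payload.decode())   # e.g. "left", "right", "stop"

    sub = mqtt.Client()
    sub.on_connect = on_connect
    sub.on_message = on_message
    sub.connect(BROKER, 1883, 60)     # host, port, keep-alive in seconds
    sub.loop_start()

    # Publisher: e.g. the vision process sending a steering command.
    pub = mqtt.Client()
    pub.connect(BROKER, 1883, 60)
    pub.publish(TOPIC, "left", qos=1) # QoS 1 gives "at least once" delivery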

• MRAA for UP Squared GPIO Configuration –

MRAA (pronounced em-rah) is a low-level library written in the C language. The purpose of MRAA is to abstract the details associated with accessing and manipulating the basic I/O capabilities of a platform, such as the Intel® Galileo or Intel® Edison boards, into a single, concise API. MRAA serves as a translation layer on top of the Linux General Purpose Input/Output (GPIO) facilities. Although Linux provides a fairly rich infrastructure for manipulating GPIOs, and its generic instructions for handling GPIOs are fairly standard, it can be difficult to use.
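A small GPIO sketch using the MRAA Python bindings is shown below; the pin number and timing are hypothetical, standing in for a motor-driver input wired to the UP Squared header:

    import time
    import mraa              # Python bindings for the MRAA I/O library

    pin = mraa.Gpio(13)      # hypothetical pin wired to a motor-driver input
    pin.dir(mraa.DIR_OUT)    # configure the pin as an output

    pin.write(1)             # drive the motor
    time.sleep(1.0)
    pin.write(0)             # stop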
