Teaching Drones How to Navigate in Complex Environments


Troi Williams


Tampa, Florida

In this project, we teach drones how to navigate in complex environments using machine learning and computer vision. [Ongoing research].

Artificial Intelligence, Robotics


Description

Small unmanned aerial systems (sUASs) require on-board sensors to autonomously navigate and avoid obstacles in complex environments such as wooded areas, which contain many trees and other vegetation. However, sUASs, which can be one meter or less in diameter, have limited carrying capacity, restricting the number of sensors they can carry. Moreover, because outdoor navigation may require long-endurance flights and sUASs have limited battery life, power-hungry sensors such as laser range finders are not feasible. Thus, sUASs are usually limited to an on-board RGB camera. We propose a neural network-based obstacle-avoidance system that uses images from the robot's on-board camera to estimate the robot's distance from a frontal obstacle such as a tree. For this work, we collected virtual and real data to train a neural network, obtained preliminary results, and are currently exploring future directions.
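To make the pipeline concrete, here is a minimal sketch of the idea of regressing a frontal-obstacle distance from a single camera frame. This is an illustrative toy, not the project's actual network: the architecture (one convolution, ReLU, global average pooling, linear regressor), the parameter shapes, and the random weights are all assumptions for demonstration; a trained model would learn these weights from the collected virtual and real data.

```python
import numpy as np

def conv2d(image, kernels, bias):
    """Valid 2-D cross-correlation of a single-channel image with a bank of
    kernels. image: (H, W); kernels: (K, kh, kw); returns (K, H-kh+1, W-kw+1)."""
    K, kh, kw = kernels.shape
    H, W = image.shape
    out = np.zeros((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(image[i:i+kh, j:j+kw] * kernels[k]) + bias[k]
    return out

def predict_distance(image, params):
    """Forward pass: conv -> ReLU -> global average pool -> linear regressor,
    producing a single scalar interpreted as distance to the frontal obstacle."""
    feat = conv2d(image, params["kernels"], params["bias"])
    feat = np.maximum(feat, 0.0)                      # ReLU nonlinearity
    pooled = feat.mean(axis=(1, 2))                   # (K,) feature vector
    return float(pooled @ params["w"] + params["b"])  # scalar distance estimate

# Hypothetical, untrained parameters purely for the sketch.
rng = np.random.default_rng(0)
params = {
    "kernels": rng.normal(size=(4, 3, 3)) * 0.1,
    "bias": np.zeros(4),
    "w": rng.normal(size=4),
    "b": 5.0,  # bias toward a plausible mean obstacle distance (meters)
}
frame = rng.random((32, 32))  # stand-in for a grayscale camera frame
print(f"estimated distance: {predict_distance(frame, params):.2f} m")
```

In a real system the regressor would be a deeper CNN trained end-to-end on labeled (image, distance) pairs, and the scalar output would feed the avoidance controller.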

[This is ongoing research. More results and information will come in the future.]


