Small unmanned aerial systems (sUASs) require on-board sensors to autonomously navigate and avoid obstacles in complex environments such as wooded areas, which are dense with trees and other vegetation. However, sUASs can be one meter or less in diameter, and their limited carrying capacity restricts the number of sensors they can carry. Moreover, because outdoor navigation may require long-endurance flights and sUASs have limited battery life, power-hungry sensors such as laser range finders are not feasible. Thus, sUASs are usually limited to an on-board RGB camera. We propose a neural-network-based obstacle avoidance system that uses images from a robot's on-board camera to estimate the robot's distance from a frontal obstacle such as a tree. For this work, we collected virtual and real data to train a neural network, obtained preliminary results, and are currently exploring future directions.
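To make the proposed pipeline concrete, the sketch below shows one way a monocular frame could be mapped to a scalar distance estimate. It is a minimal, hypothetical stand-in only: the network architecture, input resolution, and hidden-layer width are assumptions for illustration, not the trained model described in this work, and the randomly initialized weights stand in for learned parameters.

```python
import numpy as np

# Hypothetical sketch (not the actual trained network): a minimal
# feed-forward regressor mapping a grayscale camera frame to a
# scalar distance-to-obstacle estimate.
rng = np.random.default_rng(0)

H, W = 32, 32    # assumed downsampled input resolution
hidden = 64      # assumed hidden-layer width

# Randomly initialized weights stand in for trained parameters.
W1 = rng.normal(0.0, 0.01, (H * W, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.01, (hidden, 1))
b2 = np.zeros(1)

def estimate_distance(frame: np.ndarray) -> float:
    """Forward pass: flatten -> ReLU hidden layer -> linear output."""
    x = frame.reshape(-1) / 255.0        # normalize pixel intensities
    h = np.maximum(0.0, x @ W1 + b1)     # ReLU activation
    return float(h @ W2 + b2)            # scalar distance estimate

frame = rng.integers(0, 256, size=(H, W)).astype(np.float64)
print(estimate_distance(frame))
```

In a deployed system the estimate would feed an avoidance controller (e.g., braking or turning when the predicted distance falls below a safety threshold); that control logic is omitted here.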
[This is ongoing research. More results and information will come in the future.]