Radar+RGBD Camera Early Fusion for 3D Object Detection
TeckYian Lim
Champaign, Illinois
Richer object detection can be obtained from FMCW radars with the help of deep learning. By fusing raw radar signals and camera image data in the latent space of a deep neural network, we can obtain better object detection results than with either sensor alone.
Project status: Under Development
RealSense™, Artificial Intelligence
Intel Technologies: Movidius NCS, Other
Overview / Usage
While raw radar signals carry rich information, they are difficult for humans to interpret. We collect a synchronized stream of RGB-D images and raw radar signals to make data labelling easy. With proper calibration, we can transfer bounding boxes from the camera frame to the radar frame (a sketch of this transfer follows the list below). This allows us to perform a number of tasks, including:
- Camera/radar teacher/student networks
- Camera+radar fusion networks
- Radar detection networks with human-annotated bounding boxes in the camera frame
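As an illustration of the bounding-box transfer, the sketch below lifts a labelled pixel into the camera frame using its depth measurement and pinhole intrinsics, then maps the resulting point into the radar frame with the calibrated extrinsics. The function names are ours, and the intrinsics (fx, fy, cx, cy) and extrinsics (R, t) are assumed to come from the calibration step described below.

    import numpy as np

    def deproject(u, v, depth, fx, fy, cx, cy):
        """Back-project pixel (u, v) with depth in metres into a 3D
        point in the camera frame, using pinhole intrinsics."""
        return np.array([(u - cx) * depth / fx,
                         (v - cy) * depth / fy,
                         depth])

    def camera_to_radar(points_cam, R, t):
        """Map (N, 3) camera-frame points into the radar frame, given
        extrinsics (R, t) such that x_radar = R @ x_cam + t."""
        return points_cam @ R.T + t

Applying these two steps to the eight corners of a labelled box yields the same box expressed in radar coordinates.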
Methodology / Approach
We collect a dataset featuring a variety of scenes, both indoors and outdoors, pairing RGB-D images from a RealSense depth camera with raw signals from a TI millimeter-wave FMCW radar. The relative pose of the radar and camera is calibrated using a painted corner reflector that can be easily identified and localized in both the radar and the RGB camera.
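The write-up does not name a fitting method, but a standard way to recover the rigid transform from matched reflector positions in the two frames is the Kabsch algorithm; the sketch below assumes at least three non-collinear reflector placements were recorded.

    import numpy as np

    def rigid_transform(P_cam, P_radar):
        """Kabsch algorithm: find (R, t) minimizing the squared error of
        R @ p_cam + t - p_radar over matched (N, 3) point sets."""
        c_cam, c_rad = P_cam.mean(axis=0), P_radar.mean(axis=0)
        H = (P_cam - c_cam).T @ (P_radar - c_rad)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # guard against reflections
        t = c_rad - R @ c_cam
        return R, t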
In order to synchronize the raw radar frames with the RGB-D images from the RealSense camera, we implemented a ROS node for our radar hardware. Finally, we used the calibration and synchronization to generate a training set for object detection.
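A minimal sketch of the synchronization side, using ROS message_filters with an approximate-time policy; the image topics follow realsense2_camera defaults, and the radar topic and message type are placeholders for whatever our driver node publishes.

    import rospy
    import message_filters
    from sensor_msgs.msg import Image
    from radar_driver.msg import RadarFrame  # hypothetical message from our driver

    def synced_callback(color, depth, radar):
        # The three messages have timestamps within the allowed slop;
        # write them out here as one training sample.
        rospy.loginfo("synced frame at %s", color.header.stamp)

    rospy.init_node("radar_rgbd_sync")
    color_sub = message_filters.Subscriber("/camera/color/image_raw", Image)
    depth_sub = message_filters.Subscriber("/camera/depth/image_rect_raw", Image)
    radar_sub = message_filters.Subscriber("/radar/raw_frames", RadarFrame)

    # Match messages whose stamps differ by at most 50 ms.
    sync = message_filters.ApproximateTimeSynchronizer(
        [color_sub, depth_sub, radar_sub], queue_size=10, slop=0.05)
    sync.registerCallback(synced_callback)
    rospy.spin()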
Eventually, we aim to implement real-time inference with the help of an inference accelerator such as the Intel Movidius Neural Compute Stick.
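The deployment toolchain is still open; one plausible route for the NCS family is OpenVINO with its MYRIAD plugin. A minimal sketch, assuming the trained detector has already been converted to OpenVINO IR (file name and input shape are placeholders):

    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("detector.xml")                     # converted IR
    compiled = core.compile_model(model, device_name="MYRIAD")  # NCS device

    # Dummy NCHW input matching the network's expected shape.
    dummy = np.zeros((1, 3, 300, 300), dtype=np.float32)
    result = compiled([dummy])               # dict keyed by output ports
    detections = result[compiled.output(0)]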
Technologies Used
TensorFlow, OpenCV, ROS
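The write-up does not describe the network itself, but since the fusion happens in the latent space of a TensorFlow model, a Keras sketch of the idea could look like the following; every shape and layer size is illustrative, and the single-box head is a toy stand-in for a real detection head.

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    def conv_stack(x, filters):
        # Small convolutional encoder; each stride-2 conv halves the map.
        for f in filters:
            x = layers.Conv2D(f, 3, strides=2, padding="same",
                              activation="relu")(x)
        return x

    # Illustrative shapes: a downsampled RGB image and a 2-channel radar
    # map (e.g. range-azimuth); the real shapes depend on the data.
    rgb_in = layers.Input(shape=(240, 320, 3), name="rgb")
    radar_in = layers.Input(shape=(64, 64, 2), name="radar")

    rgb_vec = layers.GlobalAveragePooling2D()(conv_stack(rgb_in, [16, 32, 64]))
    radar_vec = layers.GlobalAveragePooling2D()(conv_stack(radar_in, [16, 32, 64]))

    # Fusion in the latent space: concatenate the two embeddings and let a
    # dense layer mix them before the (toy) single-box detection head.
    fused = layers.Dense(128, activation="relu")(
        layers.Concatenate()([rgb_vec, radar_vec]))
    output = layers.Dense(5, name="box")(fused)  # objectness + 4 box params

    model = Model(inputs=[rgb_in, radar_in], outputs=output)
    model.summary()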