Event-based camera and ultra-wideband-based multi-UAV localization

Aug 2015 - Jan 2017 Linhao Jin

Current sensor technology focuses on improving vision-based sensors such as stereo-vision and time-of-flight cameras, and depth sensors such as the Microsoft Kinect and Intel RealSense. Recent event-based dynamic vision sensors (DVSs) provide a novel and efficient alternative: instead of producing full grayscale frames, they encode light and its temporal variations by asynchronously detecting and transmitting brightness changes at individual pixels. This parallel, data-driven output achieves a much higher streaming rate at a lower computational cost. Ultra-wideband (UWB) is also adopted in this project; it uses the angle of arrival, signal strength, and time-delay information of each node to determine position. UWB offers centimeter-level ranging accuracy together with a low-power, economical communication link.
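As a minimal sketch of the two sensing modalities described above, the snippet below shows a plausible per-pixel event record for the DVS output and a range-only least-squares position fix from UWB anchors. The `Event` container, the `uwb_trilaterate` helper, and the anchor/range values are hypothetical names and numbers introduced here for illustration; the actual system also uses angle and signal-strength information.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Event:
    """A single DVS event: pixel location, microsecond timestamp, polarity."""
    x: int          # pixel column
    y: int          # pixel row
    t: float        # timestamp in microseconds
    polarity: int   # +1 brightness increase, -1 brightness decrease

def uwb_trilaterate(anchors, ranges):
    """Estimate a 3-D position from ranges to UWB anchors at known positions.

    Linearizes ||p - a_i||^2 = r_i^2 against the first anchor and solves the
    resulting over-determined linear system with least squares.
    """
    anchors = np.asarray(anchors, dtype=float)   # shape (N, 3)
    ranges = np.asarray(ranges, dtype=float)     # shape (N,)
    a0, r0 = anchors[0], ranges[0]
    # Subtracting the first equation removes the quadratic term in p.
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Example with four hypothetical anchors and noise-free ranges.
anchors = [(0.0, 0.0, 2.5), (4.0, 0.0, 2.0), (4.0, 3.0, 2.5), (0.0, 3.0, 1.5)]
true_p = np.array([1.0, 2.0, 0.5])
ranges = [np.linalg.norm(true_p - np.array(a)) for a in anchors]
print(uwb_trilaterate(anchors, ranges))   # ~ [1.0, 2.0, 0.5]
```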


To achieve fast localization at low computational cost, in this project we propose a novel way of fusing an event-based camera, UWB, and an IMU on a mobile robot for precise localization. All sensor noise is assumed to be white Gaussian, and the data are fused with an unscented Kalman filter (UKF). The observations are taken from the camera and UWB readings, while the motion model, specifically the acceleration and orientation quaternion, is driven by the IMU readings.
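A rough sketch of such a loosely coupled UKF is shown below, using the third-party filterpy library purely for illustration; the six-state layout (position and velocity), the anchor coordinates, and the noise values are assumptions made here, not the project's actual implementation, and the IMU acceleration is assumed to be already rotated into the world frame and gravity-compensated using the IMU quaternion.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

# Hypothetical UWB anchor positions in metres.
ANCHORS = np.array([[0.0, 0.0, 2.5],
                    [4.0, 0.0, 2.0],
                    [4.0, 3.0, 2.5],
                    [0.0, 3.0, 1.5]])

def fx(x, dt, accel):
    """Motion model: integrate the IMU acceleration (world frame, gravity removed)."""
    p, v = x[:3], x[3:]
    return np.concatenate([p + v * dt + 0.5 * accel * dt**2,
                           v + accel * dt])

def hx(x):
    """Measurement model: camera position fix plus four UWB ranges."""
    p = x[:3]
    ranges = np.linalg.norm(ANCHORS - p, axis=1)
    return np.concatenate([p, ranges])

points = MerweScaledSigmaPoints(n=6, alpha=0.1, beta=2.0, kappa=-3.0)
ukf = UnscentedKalmanFilter(dim_x=6, dim_z=7, dt=0.01, fx=fx, hx=hx, points=points)
ukf.x = np.zeros(6)
ukf.P *= 0.1
ukf.Q = np.eye(6) * 1e-3                       # process (IMU) noise, assumed white Gaussian
ukf.R = np.diag([0.02] * 3 + [0.05] * 4)       # illustrative camera vs. UWB measurement noise

# One fusion step: predict with the latest IMU sample, update with camera + UWB.
imu_accel = np.array([0.1, 0.0, 0.0])
cam_pos = np.array([0.05, 0.0, 1.0])                       # LED-based camera position fix
z = np.concatenate([cam_pos, np.linalg.norm(ANCHORS - cam_pos, axis=1)])
ukf.predict(accel=imu_accel)
ukf.update(z)
print(ukf.x[:3])   # fused position estimate
```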


Because of the event-based camera's asynchronous output, three LEDs blinking at 500 Hz, 250 Hz, and 165 Hz, driven by an Arduino Uno board, are used as beacons for vision-based localization. Four UWB anchors are deployed, and a Vicon system serves as ground truth. Sensor fusion and localization are performed with the unscented Kalman filter, and the result is shown in the figure above.
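One simple way to tell the three LED beacons apart, sketched below under the assumption that events from each bright spot have already been clustered, is to estimate the blink frequency from inter-event intervals and match it to the nearest known LED frequency. The `classify_led` helper and the tolerance value are hypothetical and only illustrate the idea.

```python
import numpy as np

LED_FREQS_HZ = np.array([500.0, 250.0, 165.0])   # known blink frequencies of the three LEDs

def classify_led(event_timestamps_us, tolerance_hz=30.0):
    """Match a pixel cluster's event stream to one of the known LED frequencies.

    `event_timestamps_us` holds microsecond timestamps of same-polarity events
    from one bright spot. The blink frequency is estimated from the median
    inter-event interval and matched to the closest known LED; returns None
    if no LED is within `tolerance_hz`.
    """
    ts = np.sort(np.asarray(event_timestamps_us, dtype=float))
    if ts.size < 3:
        return None
    period_us = np.median(np.diff(ts))
    freq_hz = 1e6 / period_us
    idx = np.argmin(np.abs(LED_FREQS_HZ - freq_hz))
    if abs(LED_FREQS_HZ[idx] - freq_hz) > tolerance_hz:
        return None
    return LED_FREQS_HZ[idx]

# Example: a spot blinking at roughly 250 Hz (about 4000 us period with jitter).
rng = np.random.default_rng(0)
stamps = np.cumsum(rng.normal(4000.0, 50.0, size=50))
print(classify_led(stamps))   # -> 250.0
```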

Code available at: https://github.com/wang-chen/svcam