Deep Neural Network Aiding Visual Odometry for Land Vehicles Navigation

Date
2020-12-09
Abstract
Self-driving cars rely on vision sensors (monocular or stereo cameras) as primary sensors, providing rich visual information that can be used for obstacle avoidance and scene understanding. This thesis introduces an improved visual odometry algorithm for vehicle navigation that incorporates a deep neural network, YOLOv3, to mask the moving objects within each frame and exclude them, thereby aiding RANSAC (RANdom SAmple Consensus) and raising the inlier ratio. Pedestrians and moving vehicles introduce outliers and degrade RANSAC performance; in some cases RANSAC can fail outright, for example when a dataset contains a large proportion of contaminated points, as in dynamic environments. By integrating the machine learning module, RANSAC relies on static rather than dynamic features, which also lowers its computational cost. Different datasets were used to evaluate the proposed algorithm's efficiency. The results are promising: they show a reduction in elapsed time for both the initial matching and RANSAC, along with an increase in the inlier ratio. With the suggested approach, the final navigation solution for two different datasets shows a significant improvement over the typical visual odometry technique.
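The masking step described in the abstract can be sketched as follows: feature matches whose keypoints fall inside the bounding boxes of detected moving objects are discarded before RANSAC, so pose estimation relies only on static features. This is an illustrative sketch, not the thesis implementation; the set of dynamic classes, the box format, and the function names are assumptions.

```python
# Illustrative sketch of dynamic-object masking before RANSAC.
# Detections are assumed to come from an object detector such as YOLOv3,
# given here as (class_name, (x1, y1, x2, y2)) tuples; the class list and
# match representation are assumptions, not the thesis's actual code.

DYNAMIC_CLASSES = {"person", "car", "truck", "bus", "bicycle"}  # assumed set


def inside(point, box):
    """Return True if point (x, y) lies within box (x1, y1, x2, y2)."""
    x, y = point
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2


def filter_static_matches(matches, detections):
    """Keep only matches whose current-frame keypoint avoids dynamic boxes.

    matches    : list of ((x_prev, y_prev), (x_curr, y_curr)) pairs
    detections : list of (class_name, (x1, y1, x2, y2)) from the detector
    """
    dynamic_boxes = [box for cls, box in detections if cls in DYNAMIC_CLASSES]
    return [m for m in matches
            if not any(inside(m[1], box) for box in dynamic_boxes)]
```

The surviving matches would then be passed to RANSAC for essential-matrix or pose estimation; because contaminated points on moving objects are pre-filtered, fewer RANSAC iterations are needed to reach a consensus set.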
Keywords
Deep Neural Networks, Feature Matching, RANSAC, Computer Vision, Monocular Camera, Artificial Intelligence, Enhance Final Navigation Solution, Trajectory, Inertial, IMU, Camera Calibration, YOLOv3, CNNs, ANNs, Sensor Fusion, Land Vehicles
Citation
Salib, A. M. A. (2020). Deep Neural Network Aiding Visual Odometry for Land Vehicles Navigation (Master's thesis, University of Calgary, Calgary, Canada). Retrieved from https://prism.ucalgary.ca.