Multi-Sensor Integration for Indoor 3D Reconstruction

Date
2014-05-02
Abstract
Outdoor maps and navigation information delivered by modern services and technologies like Google Maps and Garmin navigators have revolutionized the lifestyle of many people. Motivated by the demand from consumers, advertisers, emergency rescuers/responders, and others for similar navigation systems indoors, many indoor environments such as shopping malls, museums, casinos, airports, transit stations, offices, and schools need to be mapped. Typically, the environment is first reconstructed by capturing many point clouds from various stations and defining their spatial relationships. Currently, there is a lack of an accurate, rigorous, and speedy method for relating point clouds in indoor, urban, satellite-denied environments. This thesis presents a novel and automatic way of fusing calibrated point clouds obtained from a terrestrial laser scanner and the Microsoft Kinect by integrating them with a low-cost inertial measurement unit. The developed system, titled the Scannect, is the first joint static-kinematic indoor 3D mapper.

Man-made instruments are susceptible to systematic distortions. These uncompensated errors can cause inconsistencies between the map and reality; for example, a scale-factor error can lead firefighters to the wrong rooms during rescue missions. For terrestrial laser scanners, marker-based user self-calibration has proven effective but has yet to gain popularity because affixing hundreds of targets inside a large room is cumbersome. Previous attempts to expedite this process removed the dependency on artificial signalized targets; however, the commonalities and differences between these markerless approaches and the marker-based method were never established. This research demonstrated with simulations and real data that using planar features can yield calibration results similar to using markers, and that much of the knowledge about marker-based calibration is transferable to the plane-based calibration method.

For the Microsoft Kinect, only a limited amount of research has been dedicated to calibrating the system, and most existing methods cannot handle the different onboard optical sensors simultaneously. Therefore, a novel and accurate total system calibration algorithm based on the bundle adjustment was developed that can account for all inter-sensor correlations. It is the first and only algorithm that functionally models the misalignment between the infrared camera and projector in addition to providing full variance-covariance information for all calibration parameters.

Scan matching and Kalman filtering are often used together by robots to fuse data from different sensors and perform Simultaneous Localisation and Mapping. However, most research that utilized the Kinect adopted the loosely-coupled paradigm and ignored the time-synchronization error between the depth and colour information. A new measurement model was proposed that facilitates a tightly-coupled Kalman filter with an arbitrary number of Kinects. The depth and colour information are treated independently while being aided by inertial measurements, which yields advantages over existing approaches. For the depth data, a new tightly-coupled iterative closest point algorithm was developed that minimizes the reprojection errors of a triangulation-based 3D camera within an implicit iterative extended Kalman filter framework. When texture variation becomes available, the RGB images automatically update the state vector by using a novel tightly-coupled 5-point visual odometry algorithm without state augmentation.
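
To make the plane-based self-calibration idea concrete, the following is a minimal sketch, not the thesis' full model (which carries many more additional parameters): points scanned on one known plane constrain a hypothetical additive rangefinder offset a0 through their point-to-plane misclosures, which a simple least-squares adjustment recovers.

import numpy as np

# Minimal sketch: recover a hypothetical additive rangefinder offset a0
# from points scanned on one known plane n . X + d = 0 (|n| = 1).
rng = np.random.default_rng(1)
n = np.array([0.0, 0.0, 1.0])            # normal of a ceiling 3 m above
d = -3.0
a0_true = 0.015                          # simulated 15 mm range bias

az = rng.uniform(-1.0, 1.0, 500)         # ray azimuths (rad)
el = rng.uniform(0.8, 1.4, 500)          # ray elevations (rad)
u = np.column_stack([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)])         # unit ray directions
rho_true = -d / (u @ n)                  # true range to the plane
rho = rho_true + a0_true + rng.normal(0.0, 0.002, 500)  # 2 mm noise

# The corrected point (rho - a0) u must satisfy the plane equation:
#   (n . u) rho + d = a0 (n . u)   -- linear in the unknown a0
A = u @ n                                # design vector
w = A * rho + d                          # misclosures
a0_hat = (A @ w) / (A @ A)               # least-squares solution
print(f"estimated range offset: {a0_hat * 1e3:.1f} mm (truth 15.0 mm)")

As with the marker-based method, the recoverability of each parameter depends on the geometry of the rays hitting the planes, which is why much of the marker-based calibration knowledge carries over.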
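The Kinect measures depth by triangulating the disparity between a projected infrared speckle pattern and a stored reference image. The sketch below uses assumed nominal values for the focal length and projector-camera baseline (not calibrated quantities from the thesis) to show why an unmodelled misalignment between the infrared camera and projector translates directly into a systematic depth error, which is what a total system calibration compensates.

# Illustrative triangulation model behind the Kinect's depth sensing;
# F (pixels) and B (metres) are assumed typical values.
F = 580.0
B = 0.075

def depth_from_disparity(disparity_px: float) -> float:
    """Stereo depth Z = F * B / disparity."""
    return F * B / disparity_px

# An unmodelled 0.5-pixel disparity bias (e.g. from camera-projector
# misalignment) produces a systematic depth error:
z_good = depth_from_disparity(30.0)
z_bad = depth_from_disparity(30.0 + 0.5)
print(f"{abs(z_good - z_bad) * 1e3:.0f} mm error at {z_good:.2f} m depth")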
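The tightly-coupled versus loosely-coupled distinction is that raw geometric misclosures, rather than a pose solved outside the filter, enter the Kalman update. The following simplified, single-iteration sketch uses point-to-plane misclosures and a 6-state pose error; it is not the thesis' reprojection-error formulation, only an illustration of the mechanics.

import numpy as np

def skew(v):
    """Cross-product matrix: skew(v) @ x == np.cross(v, x)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def tight_update(R, t, P, pts, normals, offs, sigma=0.005):
    """One tightly-coupled EKF update of the pose error-state.

    R, t     : nominal sensor-to-map rotation and translation
    P        : 6x6 covariance of [dt, dtheta]
    pts      : (N,3) scan points in the sensor frame
    normals  : (N,3) unit normals of the matched map planes
    offs     : (N,)  plane offsets, n . X + off = 0 in the map frame
    """
    Rp = pts @ R.T                                     # points in map frame
    r = np.einsum('ij,ij->i', normals, Rp + t) + offs  # raw misclosures
    # Jacobian rows w.r.t. [dt, dtheta] (left perturbation of R)
    H = np.hstack([normals,
                   np.stack([-n @ skew(p) for n, p in zip(normals, Rp)])])
    S = H @ P @ H.T + sigma**2 * np.eye(len(r))
    K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
    dx = K @ (-r)                                      # drive misclosures to 0
    P = (np.eye(6) - K @ H) @ P
    return (np.eye(3) + skew(dx[3:])) @ R, t + dx[:3], P

Because every matched point contributes its own Jacobian row and noise to the state, the update respects the measurement geometry directly, which is the essential benefit over fusing a pose pre-computed by a separate scan-matching step.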
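For the colour stream, a 5-point relative-pose step can be sketched with OpenCV's implementation of Nistér's algorithm. The intrinsic matrix and the ORB/brute-force matching pipeline below are illustrative assumptions; unlike this standalone version, the thesis feeds the result into the tightly-coupled Kalman filter without state augmentation.

import cv2
import numpy as np

# Assumed RGB-camera intrinsics for the sketch (pixels).
K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])

def vo_step(gray_prev, gray_curr):
    """Relative pose (R, unit-norm t) between two greyscale RGB frames."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(gray_prev, None)
    k2, d2 = orb.detectAndCompute(gray_curr, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    E, inliers = cv2.findEssentialMat(p1, p2, K, cv2.RANSAC, 0.999, 1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)
    return R, t   # translation is up to scale; depth/IMU supply the scale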
Keywords
Computer Science, Engineering, Robotics
Citation
Chow, J. (2014). Multi-Sensor Integration for Indoor 3D Reconstruction (Doctoral thesis, University of Calgary, Calgary, Canada). Retrieved from https://prism.ucalgary.ca. doi:10.11575/PRISM/27038