Multi-Sensor Integration for Indoor 3D Reconstruction

atmire.migration.oldid: 2049
dc.contributor.advisor: Lichti, Derek
dc.contributor.advisor: Teskey, William
dc.contributor.author: Chow, Jacky
dc.date.accessioned: 2014-05-02T17:38:46Z
dc.date.available: 2014-06-16T07:00:39Z
dc.date.issued: 2014-05-02
dc.date.submitted: 2014
dc.description.abstract: Outdoor maps and navigation information delivered by modern services and technologies such as Google Maps and Garmin navigators have revolutionized the lifestyles of many people. Motivated by consumer, advertiser, and emergency-responder demand for comparable indoor navigation systems, many indoor environments such as shopping malls, museums, casinos, airports, transit stations, offices, and schools need to be mapped. Typically, the environment is reconstructed by capturing many point clouds from various stations and defining their spatial relationships. Currently, there is no accurate, rigorous, and fast method for relating point clouds in indoor, urban, satellite-denied environments. This thesis presents a novel, automatic way of fusing calibrated point clouds obtained from a terrestrial laser scanner and the Microsoft Kinect by integrating them with a low-cost inertial measurement unit. The developed system, named the Scannect, is the first joint static-kinematic indoor 3D mapper.

Man-made instruments are susceptible to systematic distortions. These uncompensated errors can cause inconsistencies between the map and reality; for example, a scale-factor error can lead firefighters to the wrong rooms during rescue missions. For terrestrial laser scanners, marker-based user self-calibration has proven effective, but it has yet to gain popularity because affixing hundreds of targets inside a large room is cumbersome. Previous attempts to expedite this process removed the dependency on artificial signalized targets; however, the commonalities and differences of these markerless approaches with respect to the marker-based method were never established. This research demonstrated, with simulations and real data, that calibrating with planar features can yield results similar to calibrating with markers, and that much of the knowledge about marker-based calibration transfers to the plane-based method.

For the Microsoft Kinect, only a limited amount of research has been dedicated to calibrating the system, and most existing methods cannot handle the different onboard optical sensors simultaneously. Therefore, a novel and accurate total-system calibration algorithm based on the bundle adjustment, one that accounts for all inter-sensor correlations, was developed. It is the first algorithm that functionally models the misalignment between the infrared camera and projector while also providing full variance-covariance information for all calibration parameters.

Scan matching and Kalman filtering are often used together by robots to fuse data from different sensors and perform Simultaneous Localisation and Mapping. However, most research using the Kinect has adopted the loosely-coupled paradigm and ignored the time-synchronization error between the depth and colour streams. A new measurement model that facilitates a tightly-coupled Kalman filter with an arbitrary number of Kinects was proposed. The depth and colour information were treated independently while being aided by inertial measurements, which yielded advantages over existing approaches. For the depth data, a new tightly-coupled iterative closest point algorithm was developed that minimizes the reprojection errors of a triangulation-based 3D camera within an implicit iterative extended Kalman filter framework. When texture variation becomes available, the RGB images automatically update the state vector through a novel tightly-coupled five-point visual odometry algorithm without state augmentation.
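The plane-based self-calibration mentioned in the abstract rests on a simple observation equation: every scanned point, reconstructed from its raw polar observations and candidate calibration parameters, must lie on its fitted plane, and the point-on-plane misclosures drive a least-squares estimate of those parameters. A minimal sketch follows; the single rangefinder-offset parameter `a0`, the function names, and the one-plane setup are illustrative assumptions, not the thesis's actual error model.

```python
import numpy as np

def scanner_point(rho, theta, alpha, a0=0.0):
    """Convert raw TLS polar observations to Cartesian coordinates.

    rho    -- measured range
    theta  -- horizontal direction (rad)
    alpha  -- elevation angle (rad)
    a0     -- hypothetical rangefinder (zero) offset, standing in for
              the additive calibration parameters of a real model
    """
    r = rho + a0  # apply the candidate range correction
    return np.array([
        r * np.cos(alpha) * np.cos(theta),
        r * np.cos(alpha) * np.sin(theta),
        r * np.sin(alpha),
    ])

def plane_misclosure(points, n, d):
    """Point-on-plane condition n.x - d = 0.

    In plane-based self-calibration these misclosures play the role
    that target-coordinate residuals play in marker-based calibration:
    a least-squares adjustment tunes the calibration parameters (and
    plane parameters) to minimize them.
    """
    n = n / np.linalg.norm(n)  # ensure a unit normal
    return points @ n - d

# A point on the plane z = 2 produces zero misclosure when the
# calibration is correct...
p = scanner_point(2 * np.sqrt(2), 0.0, np.pi / 4)
print(plane_misclosure(p[None, :], np.array([0.0, 0.0, 1.0]), 2.0))

# ...while an uncompensated 0.1 m range offset leaves a systematic
# misclosure of a0 * sin(alpha) that the adjustment can sense.
p_bias = scanner_point(2 * np.sqrt(2), 0.0, np.pi / 4, a0=0.1)
print(plane_misclosure(p_bias[None, :], np.array([0.0, 0.0, 1.0]), 2.0))
```

The same structure scales to many planes and many scan stations: stacking the misclosures for all points yields the design matrix of the self-calibrating adjustment.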
dc.identifier.citation: Chow, J. (2014). Multi-Sensor Integration for Indoor 3D Reconstruction (Doctoral thesis, University of Calgary, Calgary, Canada). Retrieved from https://prism.ucalgary.ca. doi:10.11575/PRISM/27038
dc.identifier.doi: http://dx.doi.org/10.11575/PRISM/27038
dc.identifier.uri: http://hdl.handle.net/11023/1484
dc.language.iso: eng
dc.publisher.faculty: Graduate Studies
dc.publisher.institution: University of Calgary
dc.publisher.place: Calgary
dc.rights: University of Calgary graduate students retain copyright ownership and moral rights for their thesis. You may use this material in any way that is permitted by the Copyright Act or through licensing that has been assigned to the document. For uses that are not allowable under copyright legislation or licensing, you are required to seek permission.
dc.subject: Computer Science
dc.subject: Engineering
dc.subject: Robotics
dc.subject.classification: LiDAR
dc.subject.classification: RGB-D
dc.subject.classification: Bundle Adjustment with Self-Calibration
dc.subject.classification: SLAM
dc.subject.classification: ICP
dc.subject.classification: Microsoft Kinect
dc.subject.classification: 3D Cameras
dc.subject.classification: Terrestrial Laser Scanners
dc.subject.classification: MEMS IMU
dc.subject.classification: Sensor Fusion
dc.subject.classification: Point Cloud Processing
dc.subject.classification: Navigation
dc.subject.classification: Mapping
dc.subject.classification: Error Modelling
dc.subject.classification: Photogrammetry
dc.title: Multi-Sensor Integration for Indoor 3D Reconstruction
dc.type: doctoral thesis
thesis.degree.discipline: Geomatics Engineering
thesis.degree.grantor: University of Calgary
thesis.degree.name: Doctor of Philosophy (PhD)
ucalgary.item.requestcopy: true
Files
Original bundle
Name: ucalgary_2014_chow_jacky.pdf
Size: 4.98 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 2.65 KB
Format: Item-specific license agreed upon to submission