Computer Vision Aiding Smartphone Sensors for Indoor Location Applications

atmire.migration.oldid: 2888
dc.contributor.advisor: El-Sheimy, Naser
dc.contributor.author: Kazemipur, Bashir
dc.date.accessioned: 2015-01-23T22:06:27Z
dc.date.available: 2015-02-23T08:00:39Z
dc.date.issued: 2015-01-23
dc.date.submitted: 2015
dc.description.abstract: Modern mobile phones are powerful processing devices with a host of onboard technologies of interest to navigation system designers. In the absence of Global Navigation Satellite System (GNSS) information, the accelerometers and gyroscopes within a smartphone can be used to provide a relative navigation solution. However, these micro-electro-mechanical systems (MEMS) sensors suffer from various errors that cause the inertial-only solution to deteriorate rapidly, so the inertial positioning solution must be constrained whenever long-term navigation is required. GNSS positions and velocities, and WiFi positions when available, are the most important updates for the inertial solution; however, both depend on external signals and infrastructure that may not always be present. One attractive alternative source of updates is a vision sensor. This work describes the development of a vision-based module that determines the device heading misalignment and usage context from a sequence of images captured by the device camera. The module detects static periods and calculates the device heading misalignment when in motion. Context classification is assessed for five common use cases: (1) fidgeting with the phone while standing still (“fidgeting” context), (2) phone on ear on one floor (“single floor calling” context), (3) phone on ear on stairs (“stairs calling” context), (4) phone in hand on a single floor (“single floor texting” context), and (5) phone in hand on stairs (“stairs texting” context). The module was tested using video and inertial data collected in real time on a Samsung Galaxy S3 smartphone running the Android 4.0 operating system. The results show successful detection of the aforementioned use cases and accurate estimates of the device heading misalignment. Integration of the vision aiding module with a pedestrian dead reckoning (PDR) system improves the position solution.
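The abstract describes the vision aiding pipeline only at a high level, and the thesis code is not part of this record. Purely as an illustration of the two core steps it names (static-period detection and device rotation estimation from consecutive camera frames), a minimal Python/OpenCV sketch might look like the following; the analyze_frame_pair helper, the STATIC_THRESH_PX threshold, and all parameter values are assumptions made for this sketch, not the author's method.

```python
# Illustrative sketch only -- NOT the thesis implementation.
# Assumes OpenCV (cv2) and NumPy; parameter values are guesses.
import cv2
import numpy as np

STATIC_THRESH_PX = 1.0  # assumed mean feature motion (pixels/frame) below which the device is "static"

def analyze_frame_pair(prev_gray, curr_gray):
    """Classify a consecutive grayscale frame pair as static or moving, and
    estimate the in-plane rotation (degrees) between them when moving.

    Returns (is_static, rotation_deg)."""
    # Pick sparse corner features in the first frame.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)
    if pts is None or len(pts) < 10:
        return True, 0.0  # too little texture to decide; treat as static

    # Track the features into the second frame (pyramidal Lucas-Kanade).
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.flatten() == 1
    old_pts = pts[ok].reshape(-1, 2)
    new_pts = nxt[ok].reshape(-1, 2)
    if len(old_pts) < 10:
        return True, 0.0

    # Static period: almost no feature motion between frames.
    mean_motion = np.linalg.norm(new_pts - old_pts, axis=1).mean()
    if mean_motion < STATIC_THRESH_PX:
        return True, 0.0

    # Fit a similarity transform; its rotation component approximates the
    # camera's in-plane rotation, a proxy for device heading change.
    M, _inliers = cv2.estimateAffinePartial2D(old_pts, new_pts)
    if M is None:
        return False, 0.0
    rotation_deg = float(np.degrees(np.arctan2(M[1, 0], M[0, 0])))
    return False, rotation_deg
```

Accumulating rotation_deg over successive frame pairs would give a coarse heading-change track of the kind a PDR filter could ingest as an update, which is the role the abstract assigns to the vision aiding module.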
dc.identifier.citation: Kazemipur, B. (2015). Computer Vision Aiding Smartphone Sensors for Indoor Location Applications (Master's thesis, University of Calgary, Calgary, Canada). Retrieved from https://prism.ucalgary.ca. doi:10.11575/PRISM/25403
dc.identifier.doi: http://dx.doi.org/10.11575/PRISM/25403
dc.identifier.uri: http://hdl.handle.net/11023/2020
dc.language.iso: eng
dc.publisher.faculty: Graduate Studies
dc.publisher.institution: University of Calgary
dc.publisher.place: Calgary
dc.rights: University of Calgary graduate students retain copyright ownership and moral rights for their thesis. You may use this material in any way that is permitted by the Copyright Act or through licensing that has been assigned to the document. For uses that are not allowable under copyright legislation or licensing, you are required to seek permission.
dc.subject: Engineering
dc.subject.classification: Computer Vision
dc.subject.classification: Indoor Navigation
dc.subject.classification: Sensor Fusion
dc.title: Computer Vision Aiding Smartphone Sensors for Indoor Location Applications
dc.type: master thesis
thesis.degree.discipline: Geomatics Engineering
thesis.degree.grantor: University of Calgary
thesis.degree.name: Master of Science (MSc)
ucalgary.item.requestcopy: true
Files
Original bundle
Name: ucalgary_2015_kazemipur_bashir.pdf
Size: 4.33 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 2.65 KB
Format: Item-specific license agreed upon at submission