Place Recognition for Mobile Robot in Changing Environments

Type: Student thesis, Doctoral Thesis (Doctor of Philosophy)

Original language: English
Award date: 2016

Abstract

This thesis is concerned with the problem of place recognition for a mobile robot using an omnidirectional camera as its sole sensor modality. The problems we are faced with range from orientation estimation to loop closure detection, in the absence of any prior knowledge of position.
To resolve the challenging issues encountered by any appearance-based place recognition system - specifically, perceptual aliasing and perceptual variability - we first develop a quadtree-based image comparison method. In contrast to most existing methods, it avoids the computationally expensive step of feature or keypoint detection and description; instead, it exploits the spatial structure of the image to provide robustness against dynamic changes in the scene. Our algorithm is experimentally evaluated on one public dataset and on two datasets we collected ourselves in different environments, demonstrating its effectiveness in handling perceptual aliasing and environmental variability.
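The abstract does not give implementation details of the quadtree comparison. As an illustration only, a minimal sketch of the general idea - recursively subdividing an image into quadrants and comparing a coarse-to-fine vector of per-cell statistics instead of detecting keypoints - might look as follows. All function names, the choice of mean intensity as the cell statistic, and the L1 distance are assumptions, not the thesis's actual method:

```python
import numpy as np

def quadtree_signature(img, depth):
    """Recursively split a grayscale image into quadrants and record
    each cell's mean intensity, yielding a coarse-to-fine signature.
    (Illustrative sketch; the thesis's actual statistic may differ.)"""
    sig = []
    def split(block, d):
        sig.append(block.mean())
        if d == 0:
            return
        h2, w2 = block.shape[0] // 2, block.shape[1] // 2
        for sub in (block[:h2, :w2], block[:h2, w2:],
                    block[h2:, :w2], block[h2:, w2:]):
            split(sub, d - 1)
    split(np.asarray(img, dtype=float), depth)
    return np.array(sig)

def quadtree_distance(img_a, img_b, depth=3):
    """Compare two images via the mean L1 distance of their signatures."""
    sa = quadtree_signature(img_a, depth)
    sb = quadtree_signature(img_b, depth)
    return np.abs(sa - sb).mean()
```

Because coarse cells dominate the signature, small localized scene changes perturb only a few fine-level entries, which is one way such a representation can tolerate dynamic objects.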
For many tasks in mobile robotics, it is crucial to determine the robot's orientation accurately while relying on a single vision sensor. For this purpose, we propose an evaluation methodology that focuses on the ability of different image-based algorithms to establish the robot's heading from a pair of images, together with a critical analysis of their performance.
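One common baseline for heading estimation with an omnidirectional camera - offered here only as a hedged illustration, since the abstract does not name the algorithms evaluated - exploits the fact that a rotation of the robot appears as a circular column shift of the unwrapped panoramic image. The shift can be recovered with an FFT-based circular cross-correlation; the function name and the column-mean profile are assumptions:

```python
import numpy as np

def estimate_heading_shift(pano_a, pano_b):
    """Estimate the relative heading (degrees) between two unwrapped
    panoramic images as the circular column shift that best aligns
    their column-intensity profiles. Illustrative sketch only."""
    pa = np.asarray(pano_a, dtype=float).mean(axis=0)  # column profile
    pb = np.asarray(pano_b, dtype=float).mean(axis=0)
    pa = pa - pa.mean()
    pb = pb - pb.mean()
    # Peak of the circular cross-correlation gives the shift s such
    # that pano_b is approximately pano_a rolled right by s columns.
    corr = np.fft.ifft(np.fft.fft(pb) * np.conj(np.fft.fft(pa))).real
    shift = int(np.argmax(corr))
    return shift * 360.0 / pa.size  # columns -> degrees
```

An evaluation methodology of the kind described would run such estimators on image pairs with known relative heading and report the angular error distribution.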
In addition, a quadtree-based loop closure detection method is proposed, with the aim of correctly recognizing a large number of revisited locations (high recall) while admitting few false positives (high precision). Loop closure detection is performed by pairwise image comparison. The performance of the proposed method is evaluated on our collected dataset, which contains highly aliased images and drastic perceptual changes. The experimental results show that our method achieves high recall at 100% precision and outperforms other related algorithms in terms of closeness to ground truth.
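The pairwise-comparison formulation and the precision/recall evaluation described above can be sketched generically, without reference to the thesis's specific similarity measure. In this hedged sketch, `dist_matrix` is any pairwise image-distance matrix, and the `min_gap` parameter (an assumption, not from the abstract) excludes temporally adjacent frames that are trivially similar:

```python
import numpy as np

def detect_loop_closures(dist_matrix, threshold, min_gap=10):
    """Flag image pairs (i, j) as loop closures when their distance
    falls below threshold; pairs closer than min_gap frames in time
    are skipped. Illustrative sketch only."""
    n = dist_matrix.shape[0]
    closures = []
    for i in range(n):
        for j in range(i + min_gap, n):
            if dist_matrix[i, j] < threshold:
                closures.append((i, j))
    return closures

def precision_recall(detected, ground_truth):
    """Precision and recall of detected pairs against ground truth."""
    detected, ground_truth = set(detected), set(ground_truth)
    tp = len(detected & ground_truth)
    precision = tp / len(detected) if detected else 1.0
    recall = tp / len(ground_truth) if ground_truth else 1.0
    return precision, recall
```

Sweeping `threshold` traces out a precision-recall curve; "recall at 100% precision" is then the highest recall reached by any threshold that produces no false positives.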