Neuro-evolution and deep-learning for autonomous vision based road-following

Type: Student thesis, Doctoral Thesis (Doctor of Philosophy)

Original language: English
Thesis sponsors:
  • HPC Wales
Award date: 2018

Abstract

The ability to robustly detect and follow roads, irrespective of road type and prevailing environmental conditions, is a crucial component of the control system of autonomous vehicles. Poorly delineated boundaries between road and non-road areas, extreme lighting conditions such as shadows and reflections, and sudden changes in road composition are some of the conditions that make road-following a non-trivial problem from a computer vision perspective. A review of the state of the art suggests that, despite being the focus of a vast body of work, there is still a need to explore alternative approaches to a control framework that can be applied across this variety of operational scenarios.

In this thesis, two solutions based on different implementations of artificial neural networks are proposed for robust and generalised autonomous driving. The first solution, presented in Chapter 3, is an integrated sensorimotor controller that directly controls the designated mobile platform and is trained using the principles of evolutionary robotics (neuro-evolution) on a set of virtual roads. The controller adapts to new road environments by dynamically adjusting its own perception of the environment to extract the desired regularities. Analysis of this network's behaviour suggests that, despite a very low-resolution visual input, it extracts discernible cues from the environment through complex patterns of oscillations that dynamically alter the composition of the colour channels forming its final input vector. In contrast to this active vision system, the solution in Chapter 4 is based on convolutional networks that predict the position and width of the road in the input image plane. These passive vision networks are trained to learn features global to road environments on a dataset that encapsulates varied operational conditions.
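To make the neuro-evolution idea concrete, the following is a minimal illustrative sketch, not the thesis implementation: a simple genetic algorithm evolves the weights of a tiny single-layer controller that maps a low-resolution one-dimensional "road image" to a steering command, with fitness rewarding staying near the road centre in a toy episode. All names, dimensions, and the toy simulation are hypothetical.

```python
# Hypothetical neuro-evolution sketch: a GA evolves the weights of a tiny
# feedforward controller that steers back toward the road centre.
import random
import math

random.seed(0)

N_INPUT = 8          # coarse horizontal strip of "pixels" (assumed size)
POP_SIZE = 30
GENERATIONS = 40

def road_image(offset):
    """Toy sensor: 1.0 where the road is, 0.0 elsewhere; offset shifts the road."""
    centre = N_INPUT // 2 + offset
    return [1.0 if abs(i - centre) <= 1 else 0.0 for i in range(N_INPUT)]

def steer(weights, image):
    """Single-layer controller: weighted pixel sum squashed into [-1, 1]."""
    return math.tanh(sum(w * p for w, p in zip(weights, image)))

def fitness(weights):
    """Run a short toy episode: the controller must reduce its lateral offset."""
    offset, score = 2.0, 0.0
    for _ in range(20):
        offset -= steer(weights, road_image(round(offset)))
        offset = max(-3.0, min(3.0, offset))  # clamp to the sensor's field of view
        score -= abs(offset)                  # reward staying near the road centre
    return score

def mutate(weights, sigma=0.3):
    """Gaussian weight perturbation, the only variation operator here."""
    return [w + random.gauss(0.0, sigma) for w in weights]

# Generational GA: rank by fitness, keep the elite, refill with mutated parents.
population = [[random.uniform(-1, 1) for _ in range(N_INPUT)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    elite = population[: POP_SIZE // 5]
    population = elite + [mutate(random.choice(elite))
                          for _ in range(POP_SIZE - len(elite))]

best = max(population, key=fitness)
print("best fitness:", round(fitness(best), 2))
```

The thesis controller evolves a far richer sensorimotor mapping with an active-vision input, but the evolutionary loop (evaluate, select, mutate) follows this same pattern.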
Both solutions are extensively evaluated with a number of different colour models in virtual and real-world tests involving a Pioneer 3-AT mobile robot. Results from driving trials on a set of common road segments suggest that the dynamic active vision controller performs better than a control system that makes navigation decisions based on road-position predictions from the deep convolutional network models. Both controllers, built on neural models with differing architectures and training schemes, were shown to generalise to noisy real-world road environments quite different from those used during training. Besides providing a comparative evaluation of the two approaches, we also discuss future directions of research through which the principles of neuro-evolution (evolutionary robotics) and deep learning can be integrated into a single control structure.