Trajectory Guidance via Neural Networks
Overview
The effectiveness of mobile robots in realistic deployments depends on their ability to negotiate diverse environments in which obstacles move, trajectories are partially occluded, and lighting conditions vary. Many existing control methods for robot trajectory guidance are knowledge-based approaches that assume a structured environment. While effective in simple scenarios, these methods degrade quickly once that structure is violated, and they generalize poorly to real-world use because of the inherent uncertainty in the states encountered during operation. Neural networks offer an alternative: they have been used to build autonomous navigation systems ranging from early end-to-end steering controllers such as ALVINN to approaches that detect lane markings and segment the ground plane. Such networks can be trained to produce control outputs from noisy images in which the relevant information is spread across disparate regions and the visible trajectory is discontinuous.
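To make the end-to-end idea concrete, the sketch below shows a minimal convolutional network that maps a camera frame directly to a continuous control output, in the spirit of ALVINN-style systems. The PyTorch framework, the class name SteeringNet, and all layer sizes are illustrative assumptions; this is not one of the architectures contributed by this work.

```python
# A minimal sketch (assumed, illustrative architecture) of an end-to-end model
# that maps a noisy camera frame directly to a steering command.
import torch
import torch.nn as nn


class SteeringNet(nn.Module):
    def __init__(self, num_outputs: int = 1):
        super().__init__()
        # Convolutional feature extractor: pooling aggregates evidence over
        # spatially disparate regions, which helps with noisy input images.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        # Regression head: produces a continuous control output (e.g., a steering angle).
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, num_outputs),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))


if __name__ == "__main__":
    # Usage: a batch of 64x64 RGB frames -> one steering value per frame.
    model = SteeringNet()
    frames = torch.randn(8, 3, 64, 64)  # stand-in for noisy camera images
    steering = model(frames)            # shape: (8, 1)
    print(steering.shape)
```

Trained with a regression loss against recorded control signals, a network of this form learns the image-to-control mapping directly, with no hand-crafted model of the environment.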
Contributions
- We present a system that adapts a line-following robot to noisy, dynamic, and non-binary environments using two novel neural network architectures.