The characteristics of natural environments, together with the limitations of representations, sensors, and actuators, make navigational mistakes inevitable. This paper examines how autonomous mobile robots can detect and diagnose the mistakes they make while navigating through large-scale space using vision. Mistakes are perceptual, cognitive, or motor events that divert the robot from its intended route. Detection and diagnosis consist of realizing that a mistake has occurred and determining what it was and when it happened. The approach described here detects mistakes by finding mismatches between observations and expectations. It diagnoses mistakes by examining knowledge from a variety of sources, including a history of observations and actions. Both operations rely on symbolic visual information, which allows observations, augmented by a priori knowledge, to be compared against expectations. The paper describes MUCKLE, the simulation used to test the approach, and presents experimental results that demonstrate its effectiveness.
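The core detection step, comparing symbolic expectations against symbolic observations and flagging mismatches, can be illustrated with a minimal sketch. The `Observation` type, the landmark labels, and the bearing tolerance below are illustrative assumptions, not details taken from the paper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Observation:
    landmark: str   # symbolic label, e.g. "doorway", "corner"
    bearing: float  # bearing in degrees relative to the robot's heading

def detect_mismatches(expected, observed, bearing_tol=15.0):
    """Return expected observations not matched by any actual observation.

    An unmatched expectation (a landmark that is missing or badly
    displaced) signals a possible navigation mistake and would trigger
    the diagnosis stage.
    """
    unmatched = []
    for exp in expected:
        matched = any(
            obs.landmark == exp.landmark
            and abs(obs.bearing - exp.bearing) <= bearing_tol
            for obs in observed
        )
        if not matched:
            unmatched.append(exp)
    return unmatched

expected = [Observation("doorway", 0.0), Observation("corner", 45.0)]
observed = [Observation("doorway", 5.0)]  # the corner was never seen
print(detect_mismatches(expected, observed))  # reports the missing "corner"
```

In practice the diagnosis stage would then search the history of observations and actions to explain each unmatched expectation; this sketch covers only the mismatch test itself.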