Vision-based interest point extraction evaluation in multiple environments

Author
McKeehan, Zachary D.
Date
2008-09
Advisors
Kolsch, Mathias
Squire, Kevin
Abstract
Computer-based vision is becoming a primary sensor mechanism in many facets of real-world 2-D and 3-D applications, including autonomous robotics, augmented reality, object recognition, motion tracking, and biometrics. Vision's ability to use stable, non-volatile features as permanent landmarks for motion tracking provides a strong basis for applications such as initial self-localization, later re-localization, and 3-D scene reconstruction and mapping. Furthermore, the increased reliance of the United States armed forces on the standoff warfighting capabilities of unmanned and autonomous vehicles (UXV) in, on, and above the sea necessitates better overall navigation capabilities for these platforms. Towards this end, we draw upon existing technology to measure and compare the performance of current visual interest point extractors. We utilize an inventory of extractors to define and track interest points through physical transformations captured in images of various scene classifications. We then make a preliminary determination of the extraction descriptor best suited to each visual scene, judged by multi-frame interest point persistence under maximum viewpoint change. Our research contributes an important cornerstone towards the validation of precision, vision-based navigation, thereby increasing UXV performance and strengthening the security of the United States and her allies worldwide.
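
As an illustration of the kind of measurement described in the abstract, the minimal sketch below estimates multi-frame interest point persistence for a single extractor between two frames of the same scene under a viewpoint change. It is not the thesis's own evaluation pipeline: the choice of OpenCV's SIFT detector, the brute-force matcher with Lowe's ratio test, the 0.75 ratio threshold, and the file names frame_a.png and frame_b.png are all illustrative assumptions; the actual study compares an inventory of extractors across several scene classifications.

```python
import cv2

# Hypothetical pair of frames of the same scene taken from different viewpoints.
img_a = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

# One extractor from a typical inventory (SIFT here; others plug in the same way).
detector = cv2.SIFT_create()
kp_a, desc_a = detector.detectAndCompute(img_a, None)
kp_b, desc_b = detector.detectAndCompute(img_b, None)

# Match descriptors between the two frames and keep only confident matches
# using Lowe's ratio test (0.75 is a commonly used, assumed threshold).
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(desc_a, desc_b, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Persistence: fraction of frame-A interest points that survive the viewpoint change.
persistence = len(good) / max(len(kp_a), 1)
print(f"{len(kp_a)} keypoints in frame A, {len(good)} persisted ({persistence:.1%})")
```

Repeating such a measurement per extractor and per scene class, and averaging over many frame pairs, would yield the kind of persistence-versus-viewpoint comparison the abstract describes.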