The aerial robot computes an optimized, full-coverage path given a prior model of the structure. However, as prior models are expected to be inaccurate, outdated, or to miss important details (such as human-readable data), an augmented reality interface enables the human operator to command new viewpoints through head motions. To help the operator identify the subsets of the structure that need further inspection, we stream both the stereo views and the real-time dense reconstruction of the environment computed on board the robot.
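The coverage step can be pictured as a set-cover problem: pick viewpoints until every surface face of the prior model is seen. The sketch below is a minimal, hypothetical greedy version with toy data; the actual planner on the robot and the visibility sets derived from the prior mesh are assumptions here, not the authors' implementation.

```python
# Hypothetical sketch of greedy coverage planning over a prior model.
# The faces and the set of faces each candidate viewpoint sees would come
# from the prior mesh; here they are toy placeholders.

def greedy_coverage(viewpoints, faces):
    """Pick viewpoints until every face is covered (greedy set cover)."""
    uncovered = set(faces)
    path = []
    while uncovered:
        # Choose the viewpoint that sees the most still-uncovered faces.
        best = max(viewpoints, key=lambda v: len(viewpoints[v] & uncovered))
        gained = viewpoints[best] & uncovered
        if not gained:
            break  # remaining faces are not visible from any candidate
        path.append(best)
        uncovered -= gained
    return path

viewpoints = {
    "v1": {1, 2, 3},
    "v2": {3, 4},
    "v3": {4, 5, 6},
}
print(greedy_coverage(viewpoints, faces=range(1, 7)))  # ['v1', 'v3']
```

A real planner would additionally order the selected viewpoints into a short flight path and respect the vehicle's dynamics, which this toy version omits.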
The augmented reality interface is based on an Android phone and a cardboard-like virtual reality headset. Head motion commands new viewpoints in xyz-space, while the robot ensures that the initially planned viewpoints are also visited.
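One simple way to turn head motion into an xyz viewpoint command is to map the gaze direction (yaw and pitch from the headset IMU) to a fixed-size displacement. This is a hedged sketch of that idea; the function name, the `STEP` magnitude, and the angle convention are assumptions for illustration, not the interface's actual code.

```python
import math

# Assumed fixed command magnitude in metres (illustrative value).
STEP = 0.5

def head_to_offset(yaw, pitch, step=STEP):
    """Map a head gaze direction (radians) to an xyz displacement command.

    yaw rotates about the vertical axis, pitch tilts up/down; the gaze
    direction is scaled by a fixed step to produce the commanded offset.
    """
    dx = step * math.cos(pitch) * math.cos(yaw)
    dy = step * math.cos(pitch) * math.sin(yaw)
    dz = step * math.sin(pitch)
    return (dx, dy, dz)

# Looking straight ahead commands a pure forward step.
print(head_to_offset(0.0, 0.0))  # (0.5, 0.0, 0.0)
```

In practice such commands would be rate-limited and checked against the reconstruction for collisions before the robot executes them.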
The aerial robot operates autonomously in a GPS-denied environment without the support of a motion capture system. A few printed markers were added for safety, as the test took place in a dark environment (and we had just built this robot!).
More to come soon!
Tool computing the initial inspection path using a prior model of the wattmeter structure: