Surface rover operations at the polar regions of airless bodies, especially the Moon, are of particular interest to future NASA science missions such as Resource Prospector. However, polar optical conditions challenge conventional imaging techniques, with repercussions for driving, safeguarding, and science. Long cast shadows from oblique illumination create large no-go areas and regions of uncertain hazards. The lack of atmospheric scattering gives scenes a chiaroscuro quality of extreme dynamic range. Regolith heiligenschein and solar glare reduce visual contrast at specific azimuthal angles. No surface rover has yet attempted to operate under these conditions, and much remains unknown about other optical phenomena, or even detrimental combinations of individually benign effects, at the rover scale.
Resource Prospector is currently undertaking an effort to characterize imaging performance in polar conditions for stereo vision and human operation. Because appearance is a composition of scene reflectance, geometry, and illumination, the key idea is to modulate each of these components in a principled, calibrated way while measuring the results against ground truth. Our novel approach to planetary sensor characterization is twofold: physical experimentation in an optically relevant analog environment, and physics-based rendering of procedurally generated scenes of the Lunar surface. Using both testing environments enables diversity of simulation, test efficiency, and cross-validation of results.
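The modulation strategy above amounts to sweeping each appearance component while holding the others fixed. A minimal sketch of such a factorial test matrix is shown below; the simulant names come from the experiment description, but the terrain classes and sun elevations are illustrative placeholders, not mission parameters.

```python
import itertools

# Factor levels for the three appearance components.
# SIMULANTS are from the lab description; the terrain classes and
# sun elevations below are hypothetical example values.
SIMULANTS = ["JSC-1A", "NU-LHT"]            # scene reflectance
TERRAINS = ["smooth", "cratered", "rocky"]  # geometry
SUN_ELEVATIONS_DEG = [2, 5, 10]             # illumination

def test_matrix():
    """Enumerate every combination of the appearance factors so that
    each component is varied in a controlled, calibrated sweep."""
    return list(itertools.product(SIMULANTS, TERRAINS, SUN_ELEVATIONS_DEG))
```

Enumerating the full cross product keeps the sweep principled: every imaging result can be attributed to a known setting of reflectance, geometry, and illumination.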
For physical testing, we are developing a Lunar Appearance Lab at NASA Ames with the support of SSERVI. Our aim is to reproduce appearance phenomena encountered at the Lunar poles in a controlled laboratory setting. A constellation of near-field, low-angle lights with relevant color temperature, flux, and apparent solar diameter produces stark shadows and high dynamic range. Both JSC-1A and NU-LHT regolith simulants are used for experimentation. Terrain geometries consisting of rock and crater features, drawn from size-frequency distributions in a Monte Carlo manner, are realized by hand construction. Additionally, terrain test cases and stereo image pairs are ground truthed with state-of-the-art survey LIDAR scanning, whose accuracy substantially exceeds that of the stereo systems under test.
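The Monte Carlo terrain-generation step can be sketched as inverse-CDF sampling from a truncated power-law size-frequency distribution, a common model for rock populations. The exponent and diameter bounds below are illustrative assumptions, not values from the lab.

```python
import random

def sample_rock_diameters(n_rocks, d_min=0.05, d_max=0.5, q=2.5, seed=0):
    """Draw rock diameters (m) from a truncated power-law cumulative
    size-frequency distribution N(>D) ~ D^-q via inverse-CDF sampling.
    The exponent q and diameter bounds are example values only."""
    rng = random.Random(seed)
    a = d_min ** -q
    b = d_max ** -q
    diameters = []
    for _ in range(n_rocks):
        u = rng.random()
        # Invert the truncated CDF F(d) = (a - d^-q) / (a - b).
        d = (a - u * (a - b)) ** (-1.0 / q)
        diameters.append(d)
    return diameters
```

Each sampled diameter, paired with a uniformly random position, defines one rock feature to be built by hand into the test bed; craters could be placed the same way from their own distribution.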
Stereo vision is characterized by comparing 3D point clouds from block matching against registered ground truth, whether physical or virtual, using density, accuracy, and hazard-detection metrics. Preliminary results show that the approach is effective not only in identifying imaging failure cases but also in providing a big-picture statistical view of performance for risk management. With continued work in this area, it may be possible to predict optimal image-acquisition parameters before the mission even flies. The extensive catalog of imagery and 3D point clouds acquired thus far is intended for future release as the first planetary robotics stereo dataset.
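The point-cloud comparison step could look like the following sketch: a brute-force nearest-neighbor match of each stereo-derived point to the registered ground truth, yielding an RMS accuracy figure and an inlier fraction as a density-style metric. The 5 cm inlier threshold is a hypothetical value, not one reported in this work.

```python
import math

def cloud_metrics(stereo_pts, truth_pts, max_err=0.05):
    """Compare a stereo-derived 3D point cloud against a registered
    ground-truth cloud. Returns (rms_error, inlier_fraction), where an
    inlier lies within max_err meters (illustrative threshold) of its
    nearest ground-truth point. Brute-force search; a k-d tree would
    be used for realistically sized clouds."""
    errs = []
    for p in stereo_pts:
        # Distance from this stereo point to the closest truth point.
        errs.append(min(math.dist(p, t) for t in truth_pts))
    rms = math.sqrt(sum(e * e for e in errs) / len(errs))
    inlier_frac = sum(e <= max_err for e in errs) / len(errs)
    return rms, inlier_frac
```

Aggregating these per-scene numbers across the test matrix is what produces the statistical view of performance used for risk management.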