Shape-from-shading (SfS), also known as photoclinometry, is a set of techniques for estimating surface relief from the variation in light intensity recorded in images; it has a long history in the literature. Here we apply SfS to recover high-quality terrain from satellite images, particularly of the Moon and especially near the South Pole, given recent interest in missions there.
We use multiple input images, realistic camera models, non-Lambertian reflectance, variable albedo, uncertain camera positions and orientations, low angles of illumination, shadows, and occlusions. We show that shape-from-shading is able to recover significantly more detail than obtained from stereo, while eliminating stereo numerical artifacts and improving the terrain accuracy. Our implementation is released as open-source software as part of the NASA Ames Stereo Pipeline.
We assume one or more views of the terrain, with known illumination sources, camera positions, and orientations. We minimize a cost function that models the error between the measured image intensity and the intensity simulated from the current terrain estimate. An additional smoothness term makes the problem well-posed.
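The cost function described above can be sketched as follows. This is a minimal illustration, not the actual Ames Stereo Pipeline implementation: the function and parameter names (`sfs_cost`, `reflectance_fn`, `mu`) are hypothetical, and the smoothness term is written here as a squared discrete Laplacian of the height field, which is one common choice.

```python
import numpy as np

def sfs_cost(height, images, exposures, albedo, sun_dirs, reflectance_fn, mu):
    """Illustrative SfS cost: per-image intensity misfit plus a smoothness term.

    height        -- current terrain guess, H x W array of heights
    images        -- list of measured images, H x W each
    exposures     -- per-image exposure scalars
    albedo        -- per-pixel albedo, H x W array
    sun_dirs      -- per-image sun direction vectors
    reflectance_fn-- maps (terrain, sun direction) to predicted reflectance
    mu            -- smoothness weight
    """
    misfit = 0.0
    for img, exposure, sun in zip(images, exposures, sun_dirs):
        # Simulated intensity: exposure * albedo * reflectance of the terrain
        simulated = exposure * albedo * reflectance_fn(height, sun)
        misfit += np.sum((img - simulated) ** 2)
    # Smoothness term: squared discrete Laplacian over interior grid points
    lap = (-4.0 * height[1:-1, 1:-1]
           + height[:-2, 1:-1] + height[2:, 1:-1]
           + height[1:-1, :-2] + height[1:-1, 2:])
    return misfit + mu * np.sum(lap ** 2)
```

An optimizer then adjusts the terrain (and the other variables) to drive this cost down; a flat terrain that already reproduces the images exactly would have zero cost.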
The terrain reflectance is computed using either the Lambertian or the Lunar-Lambertian model. The optimization variables are the terrain, the albedo at each point, the exposure of each image, and the camera positions and orientations. We discretize the problem using the finite-difference method on a rectangular grid, with all values kept fixed at the boundary. Stereo on two of the images, or LIDAR, provides the initial terrain guess. The initial albedo is set to 1 everywhere (its scale is thus absorbed into the image exposures).
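For the simpler of the two reflectance models, the Lambertian case, the predicted reflectance is the cosine of the incidence angle between the surface normal and the sun direction. A minimal sketch, assuming a gridded height field with normals computed by finite differences (the function name and the `gsd` grid-spacing parameter are illustrative):

```python
import numpy as np

def lambertian_reflectance(height, sun_dir, gsd=1.0):
    """Lambertian reflectance of a gridded terrain (illustrative sketch).

    height  -- H x W height field
    sun_dir -- unit vector pointing toward the sun, as (x, y, z)
    gsd     -- grid spacing (ground sample distance) in the same units as height
    """
    # Terrain slopes via central differences; np.gradient returns
    # derivatives along axis 0 (rows, y) then axis 1 (columns, x)
    dz_dy, dz_dx = np.gradient(height, gsd)
    # Upward surface normal n = (-dz/dx, -dz/dy, 1), normalized below
    nx, ny, nz = -dz_dx, -dz_dy, np.ones_like(height)
    norm = np.sqrt(nx**2 + ny**2 + nz**2)
    # Reflectance = cos(incidence angle), clamped at zero for self-shadowed slopes
    cos_i = (nx * sun_dir[0] + ny * sun_dir[1] + nz * sun_dir[2]) / norm
    return np.maximum(cos_i, 0.0)
```

On a flat terrain this gives 1 for an overhead sun and 0 for a sun at the horizon; the Lunar-Lambertian model additionally depends on the emission and phase angles, which this sketch omits.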
We use a multi-resolution approach to alleviate the sensitivity of the problem to the initial terrain guess and to errors in camera positions and orientations. We found the smoothness weight to depend on the images and on the grid size used in the discretized problem.
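The multi-resolution idea can be sketched as a coarse-to-fine loop: solve on a heavily downsampled grid first, then upsample the solution and refine it at each finer resolution. The code below is a schematic under assumed simplifications (simple 2x subsampling, nearest-neighbor upsampling, and a placeholder `refine` callback standing in for one SfS optimization pass); it is not the actual implementation.

```python
import numpy as np

def coarse_to_fine(terrain0, images, refine, levels=3):
    """Coarse-to-fine SfS sketch.

    terrain0 -- initial terrain guess at full resolution
    images   -- input images (passed through to the refinement step)
    refine   -- placeholder: one SfS optimization pass at a fixed grid size
    levels   -- number of pyramid levels
    """
    # Build a pyramid by repeated 2x subsampling of the initial guess
    pyramid = [terrain0]
    for _ in range(levels - 1):
        pyramid.append(pyramid[-1][::2, ::2])
    # Start from the coarsest level and work toward full resolution
    terrain = pyramid[-1]
    for level in reversed(range(levels)):
        target = pyramid[level]
        if terrain.shape != target.shape:
            # Upsample the coarse solution to the next finer grid
            terrain = np.kron(terrain, np.ones((2, 2)))
            terrain = terrain[:target.shape[0], :target.shape[1]]
        terrain = refine(terrain, images, level)
    return terrain
```

The coarse levels fix large-scale errors cheaply (including those from imperfect cameras), so the fine levels only need to recover local detail.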
We model shadows on the Moon as fully black, using a per-image shadow threshold. Ray tracing is used to model occlusions. Google's Ceres solver is employed to minimize the cost function.
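The per-image shadow threshold can be illustrated as a mask: pixels at or below the threshold are treated as shadowed and carry no shading signal, so they are excluded from the intensity misfit. This is a hypothetical sketch (the names `shadow_mask` and `masked_misfit` are not from the actual code), with the threshold assumed to be chosen per image.

```python
import numpy as np

def shadow_mask(image, shadow_threshold):
    """True where a pixel is lit; pixels at or below the per-image
    threshold are treated as black shadow."""
    return image > shadow_threshold

def masked_misfit(image, simulated, shadow_threshold):
    """Intensity misfit restricted to lit pixels only."""
    lit = shadow_mask(image, shadow_threshold)
    # Shadowed pixels are skipped: their intensity says nothing about the slope
    return np.sum((image[lit] - simulated[lit]) ** 2)
```

In shadowed regions the terrain is then constrained only by the smoothness term and by any other images in which those pixels are lit.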
We have successfully applied our shape-from-shading algorithm to single and multiple lunar images, both near the equator and at the poles. In one example, we used multiple images at 85 degrees South latitude on the Moon, captured by the Lunar Reconnaissance Orbiter Narrow Angle Camera (LRO NAC) at 1 meter/pixel ground resolution. As a result of using SfS, the mean error between the optimized terrain and the ground truth decreased from 2.69 m to 1.67 m, while the standard deviation decreased from 2.68 m to 1.82 m.
We plan to improve our modeling, run the algorithm on more imagery, and improve its speed.