Apple has presented a foundation model for zero-shot metric monocular depth estimation.
The model, Depth Pro, synthesises high-resolution depth maps with unparalleled sharpness and high-frequency detail. The predictions are metric, with absolute scale, and do not rely on metadata such as camera intrinsics.
Apple claims that the model is fast, producing a 2.25-megapixel (1536 × 1536) depth map in 0.3 seconds on a standard GPU. These characteristics are enabled by several technical contributions: an efficient multi-scale vision transformer for dense prediction; a training protocol that combines real and synthetic datasets to achieve high metric accuracy alongside fine boundary tracing; dedicated evaluation metrics for boundary accuracy in estimated depth maps; and state-of-the-art focal length estimation from a single image.
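For concreteness, here is a minimal sketch of what single-image inference looks like with Apple's open-source release (apple/ml-depth-pro on GitHub). The function names follow that repository's published example rather than anything described in this article, so treat them as assumptions that may change across versions.

```python
# Sketch of single-image metric depth inference, following the example
# published in the apple/ml-depth-pro repository (API may differ by version).
import depth_pro

# Load the pretrained model and its matching preprocessing transform.
model, transform = depth_pro.create_model_and_transforms()
model.eval()

# Load an image; f_px is the focal length in pixels if EXIF metadata
# provides it, otherwise None and the model estimates it on its own.
image, _, f_px = depth_pro.load_rgb("example.jpg")
image = transform(image)

# Run inference: the depth map is metric, with absolute scale.
prediction = model.infer(image, f_px=f_px)
depth_m = prediction["depth"]            # metric depth map in metres
focal_px = prediction["focallength_px"]  # estimated focal length in pixels
```

Note that when `f_px` is `None`, the model's own focal length estimate supplies the scale, which is what lets it produce metric depth without camera intrinsics.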
Extensive experiments analyse specific design choices and demonstrate that Depth Pro outperforms prior work along multiple dimensions.
With a model this fast, Apple has opened the door to generating 3D imagery from a single-lens camera in real time. This, the team notes, could have major implications for robotics and other real-time mapping applications, such as those used in autonomous vehicles.