# Monocular Depth Estimation for Semi-Transparent Volume Renderings

Published on arXiv, 2022

Recommended citation: Dominik Engel, Sebastian Hartwig, and Timo Ropinski: "Monocular Depth Estimation for Semi-Transparent Volume Renderings", arXiv, 2022.

### Abstract

Neural networks have shown great success in extracting geometric information from color images. In particular, monocular depth estimation networks are increasingly reliable in real-world scenes. In this work we investigate the applicability of such monocular depth estimation networks to semi-transparent volume rendered images. As depth is notoriously difficult to define in a volumetric scene without clearly defined surfaces, we consider the different depth computations that have emerged in practice, and compare state-of-the-art monocular depth estimation approaches on these interpretations in an evaluation covering different degrees of opacity in the renderings. Additionally, we investigate how these networks can be extended to further obtain color and opacity information, in order to create a layered representation of the scene based on a single color image. This layered representation consists of spatially separated semi-transparent intervals that composite to the original input rendering. In our experiments we show that adaptations of existing approaches to monocular depth estimation perform well on semi-transparent volume renderings, which has several applications in the area of scientific visualization.
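The layered representation described above reconstructs the input by alpha compositing the predicted intervals in depth order. As an illustration of this compositing step (a standard front-to-back over operator, not the paper's implementation; array shapes and function name are assumptions), each layer's color contribution is weighted by its opacity and the accumulated transmittance of the layers in front of it:

```python
import numpy as np

def composite_front_to_back(colors, alphas):
    """Composite ordered semi-transparent layers front-to-back.

    colors: (L, H, W, 3) straight (non-premultiplied) layer colors,
            ordered from nearest to farthest layer.
    alphas: (L, H, W, 1) per-layer opacities in [0, 1].
    Returns the (H, W, 3) composited image.
    """
    out = np.zeros(colors.shape[1:], dtype=np.float64)
    # Transmittance: how much light from deeper layers still reaches the eye.
    transmittance = np.ones(alphas.shape[1:], dtype=np.float64)
    for c, a in zip(colors, alphas):
        out += transmittance * a * c   # layer contribution, attenuated
        transmittance *= (1.0 - a)     # remaining see-through fraction
    return out

# Two-layer toy example: a half-transparent red layer over an opaque blue one.
colors = np.array([[[[1.0, 0.0, 0.0]]],
                   [[[0.0, 0.0, 1.0]]]])   # shape (2, 1, 1, 3)
alphas = np.array([[[[0.5]]],
                   [[[1.0]]]])             # shape (2, 1, 1, 1)
image = composite_front_to_back(colors, alphas)
```

Here the front layer contributes half its red, and the remaining 50% transmittance is filled by the opaque blue layer behind it, yielding an equal red/blue mix.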

Under review.

### Code

Released upon acceptance.

### Citation

```bibtex
@article{engel2022stdepth,
  title   = {Monocular Depth Estimation for Semi-Transparent Volume Renderings},
  author  = {Engel, Dominik and Hartwig, Sebastian and Ropinski, Timo},
  journal = {arXiv},
  year    = {2022},
  doi     = {10.48550/arXiv.2206.13282}
}
```