Title: Single-image Tomography: 3D Volumes from 2D Cranial X-Rays
Authors: Henzler, Philipp; Rasche, Volker; Ropinski, Timo; Ritschel, Tobias
Editors: Gutierrez, Diego; Sheffer, Alla
Date: 2018-04-14
Year: 2018
ISSN: 1467-8659
DOI: 10.1111/cgf.13369
DOI link: http://dx.doi.org/10.1111/cgf.13369
Handle: https://diglib.eg.org:443/handle/10.1111/cgf13369
Pages: 377-388

Abstract: As many different 3D volumes could produce the same 2D x-ray image, inverting this process is challenging. We show that recent deep learning-based convolutional neural networks can solve this task. As the main challenge in learning is the sheer amount of data created when extending the 2D image into a 3D volume, we suggest first learning a coarse, fixed-resolution volume, which is then fused in a second step with the input x-ray into a high-resolution volume. To train and validate our approach we introduce a new dataset that comprises close to half a million computer-simulated 2D x-ray images of 3D volumes scanned from 175 mammalian species. Future applications of our approach include stereoscopic rendering of legacy x-ray images and re-rendering of x-rays with changes of illumination, view pose, or geometry. Our evaluation includes comparison to previous tomography work, previous learning methods applied to our data, a user study, and application to a set of real x-rays.

Keywords: Deep learning; Volume rendering; Inverse rendering; Convolutional neural networks; Tomography