Title: Neural Deformable Cone Beam CT
Authors: Birklein, Lukas; Schömer, Elmar; Brylka, Robert; Schwanecke, Ulrich; Schulze, Ralf
Editors: Hansen, Christian; Procter, James; Raidou, Renata G.; Jönsson, Daniel; Höllt, Thomas
Date issued: 2023-09-19
Date available: 2023-09-19
Year: 2023
ISBN: 978-3-03868-216-5
ISSN: 2070-5786
DOI: https://doi.org/10.2312/vcbm.20231211
URI: https://diglib.eg.org:443/handle/10.2312/vcbm20231211
Pages: 41-50 (10 pages)
License: Attribution 4.0 International License
CCS Concepts: Computing methodologies -> Reconstruction; Volumetric models; Motion processing; Neural networks

Abstract: In oral and maxillofacial cone beam computed tomography (CBCT), patient motion is frequently observed and, if not accounted for, can severely affect the usability of the acquired images. We propose a highly flexible, data-driven motion correction and reconstruction method which combines neural inverse rendering in a CBCT setting with a neural deformation field. We jointly optimize a lightweight coordinate-based representation of the 3D volume together with a deformation network. This allows our method to generate high-quality results while accurately representing occurring patient movements, such as head movements, separate jaw movements, or swallowing. We evaluate our method in synthetic and clinical scenarios and are able to produce artefact-free reconstructions even in the presence of severe motion. While our approach is primarily developed for maxillofacial applications, we do not restrict the deformation field to certain kinds of motion. We demonstrate its flexibility by applying it to other scenarios, such as 4D lung scans or industrial tomography settings, achieving state-of-the-art results within minutes with only minimal adjustments.
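The joint optimization sketched in the abstract (a coordinate-based volume representation trained together with a deformation network against projection data) can be illustrated as follows. This is a minimal sketch, not the authors' implementation: the network sizes, the `CoordMLP` name, the ray-sampling scheme, and the use of synthetic random projections are all illustrative assumptions.

```python
# Hedged sketch: jointly optimize a coordinate-based density field f(x) and a
# deformation field d(x, t), supervised by measured line integrals (projections).
# All sizes and names are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class CoordMLP(nn.Module):
    """Small coordinate MLP mapping input coordinates to an output vector."""
    def __init__(self, in_dim, out_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

torch.manual_seed(0)
density = CoordMLP(3, 1)   # static reference volume: position -> attenuation
deform = CoordMLP(4, 3)    # motion model: (position, time) -> displacement

opt = torch.optim.Adam(
    list(density.parameters()) + list(deform.parameters()), lr=1e-3
)

# Toy "measured" projections: N rays, each sampled at S points (synthetic data).
N, S = 8, 16
pts = torch.rand(N, S, 3)                      # sample points along each ray
t = torch.rand(N, 1, 1).expand(N, S, 1)        # acquisition time per ray
target = torch.rand(N)                         # synthetic projection values

for step in range(5):
    opt.zero_grad()
    # Warp each sample into the static reference volume before evaluation.
    warped = pts + deform(torch.cat([pts, t], dim=-1))
    # Crude quadrature of attenuation along each ray.
    proj = density(warped).squeeze(-1).mean(dim=-1)
    loss = ((proj - target) ** 2).mean()
    loss.backward()
    opt.step()
```

Optimizing both networks on one loss lets motion (the deformation field) and anatomy (the density field) explain the projections jointly, rather than correcting motion in a separate preprocessing step.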