Advances in Data-Driven Analysis and Synthesis of 3D Indoor Scenes

Authors: Patil, Akshay Gadi; Patil, Supriya Gadi; Li, Manyi; Fisher, Matthew; Savva, Manolis; Zhang, Hao
Editors: Alliez, Pierre; Wimmer, Michael
Date: 2024-03-23 (issued 2024)
ISSN: 1467-8659
DOI: 10.1111/cgf.14927 (https://doi.org/10.1111/cgf.14927)
Handle: https://diglib.eg.org/handle/10.1111/cgf14927
Pages: 32

Abstract: This report surveys advances in deep learning-based modelling techniques that address four 3D indoor scene analysis tasks, as well as the synthesis of 3D indoor scenes. We describe the different kinds of representations used for indoor scenes and the indoor scene datasets available for research in these areas, and we discuss notable works that employ machine learning models for scene modelling tasks based on these representations. Specifically, we focus on the analysis and synthesis of 3D indoor scenes. For analysis, we cover four basic scene understanding tasks: 3D object detection, 3D scene segmentation, 3D scene reconstruction and 3D scene similarity. For synthesis, we mainly discuss neural scene synthesis works, while also highlighting model-driven methods that allow for human-centric, progressive scene synthesis. We identify the challenges involved in modelling scenes for these tasks and the kind of machinery that needs to be developed to adapt to the data representation and the task setting in general. For each task, we provide a comprehensive summary of state-of-the-art works across different axes, such as the choice of data representation, backbone, evaluation metric, input and output, providing an organized review of the literature. Towards the end, we discuss research directions that have the potential to directly shape how users interact and engage with these virtual scene models, making them an integral part of the metaverse.

Keywords: methods and applications; methods and applications – computer games; modelling; geometric modelling; virtual environments