Resolution-switchable 3D Semantic Scene Completion

Authors: Luo, Shoutong; Sun, Zhengxing; Sun, Yunhan; Wang, Yi
Editors: Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
Date: 2022-10-04
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.14662
Handle: https://diglib.eg.org:443/handle/10.1111/cgf14662
Pages: 121-130 (10 pages)
CCS Concepts: Computing methodologies → Volumetric models

Abstract: Semantic scene completion (SSC) aims to recover the complete geometric structure, together with semantic segmentation labels, from partial observations. Previous works can only perform this task at a fixed resolution. To overcome this limitation, we propose a method that generates results at different resolutions without redesigning or retraining the network. The basic idea is to decouple resolution from network structure: we convert the feature volume produced by an SSC encoder into a resolution-adaptive feature and decode it per point. We further design a resolution-adapted point sampling strategy for testing and a category-based point sampling strategy for training. The encoder of our method can be replaced by existing SSC encoders. We achieve better results at other resolutions while maintaining the same accuracy as results at the original resolution. Code and data are available at https://github.com/lstcutong/ReS-SSC.
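The abstract's core idea, decoupling output resolution from network structure by querying an encoder's feature volume at continuous points, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the trilinear sampler, the linear per-point decoder, and all names (`trilinear_sample`, `decode_at_resolution`) are assumptions standing in for the resolution-adaptive feature and point decoder described above; the grid dimensions are assumed to be at least 2 along each axis.

```python
import numpy as np

def trilinear_sample(volume: np.ndarray, points: np.ndarray) -> np.ndarray:
    """Sample a feature volume of shape (D, H, W, C) at continuous
    query points in [0, 1]^3 of shape (N, 3), via trilinear interpolation.
    Assumes D, H, W >= 2. A stand-in for a resolution-adaptive feature."""
    D, H, W, C = volume.shape
    coords = points * (np.array([D, H, W]) - 1)       # normalized -> voxel space
    lo = np.floor(coords).astype(int)
    lo = np.clip(lo, 0, np.array([D, H, W]) - 2)      # keep lo + 1 in bounds
    frac = coords - lo
    out = np.zeros((points.shape[0], C))
    for dz in (0, 1):                                 # blend the 8 corner voxels
        for dy in (0, 1):
            for dx in (0, 1):
                w = (np.where(dz, frac[:, 0], 1 - frac[:, 0])
                     * np.where(dy, frac[:, 1], 1 - frac[:, 1])
                     * np.where(dx, frac[:, 2], 1 - frac[:, 2]))
                out += w[:, None] * volume[lo[:, 0] + dz,
                                           lo[:, 1] + dy,
                                           lo[:, 2] + dx]
    return out

def decode_at_resolution(volume: np.ndarray, resolution: int,
                         weight: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """Query a regular grid at an arbitrary output resolution and decode
    per-point semantic labels with a linear layer (a hypothetical
    stand-in for a learned point decoder)."""
    axes = [np.linspace(0.0, 1.0, resolution)] * 3
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), -1).reshape(-1, 3)
    feats = trilinear_sample(volume, grid)            # one feature per point
    logits = feats @ weight + bias                    # per-point class scores
    return logits.argmax(-1).reshape(resolution, resolution, resolution)
```

Because the decoder only ever sees per-point features, the same encoder output can be decoded at any output resolution by changing the query grid, which is the decoupling the abstract describes.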