MINERVAS: Massive INterior EnviRonments VirtuAl Synthesis

Abstract
With the rapid development of data-driven techniques, data has played an essential role in various computer vision tasks. Many realistic and synthetic datasets have been proposed to address different problems. However, several challenges remain unresolved: (1) creating a dataset is usually a tedious process requiring manual annotation, (2) most datasets are designed for only a single specific task, (3) modifying or randomizing a 3D scene is difficult, and (4) releasing commercial 3D data may raise copyright issues. This paper presents MINERVAS, a Massive INterior EnviRonments VirtuAl Synthesis system, which facilitates 3D scene modification and 2D image synthesis for various vision tasks. In particular, we design a programmable pipeline with a Domain-Specific Language, allowing users to select scenes from a commercial indoor scene database, synthesize scenes for different tasks with customized rules, and render various types of imagery data, such as color images, geometric structures, and semantic labels. Our system eases the difficulty of customizing massive scenes for different tasks and relieves users from manipulating fine-grained scene configurations by providing user-controllable randomness through multilevel samplers. Most importantly, it empowers users to access commercial scene databases with millions of indoor scenes while protecting the copyright of core data assets, e.g., 3D CAD models. We demonstrate the validity and flexibility of our system by using our synthesized data to improve performance on different kinds of computer vision tasks. The project page is at https://coohom.github.io/MINERVAS.
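The abstract describes a programmable pipeline driven by a Domain-Specific Language, with customized rules and multilevel samplers providing user-controllable randomness. The following is a minimal, hypothetical sketch of that idea in Python; all class and method names (ScenePipeline, EntitySampler, add_rule, run) are illustrative assumptions and do not reflect the actual MINERVAS DSL or API.

# Hypothetical sketch of a DSL-style scene-synthesis pipeline; not the MINERVAS API.
import random


class EntitySampler:
    """Entity-level sampler: perturbs individual objects with controllable randomness."""

    def __init__(self, jitter=0.1, seed=None):
        self.jitter = jitter
        self.rng = random.Random(seed)

    def apply(self, entity):
        # Randomly offset the object position within a small range.
        dx = self.rng.uniform(-self.jitter, self.jitter)
        dy = self.rng.uniform(-self.jitter, self.jitter)
        entity["position"] = (entity["position"][0] + dx,
                              entity["position"][1] + dy)
        return entity


class ScenePipeline:
    """Scene-level pipeline: selects scenes, applies customized rules, renders outputs."""

    def __init__(self, scenes):
        self.scenes = scenes          # placeholder for a scene-database query result
        self.rules = []

    def add_rule(self, rule):
        # A rule is any callable that transforms a scene description.
        self.rules.append(rule)
        return self

    def run(self, outputs=("color", "depth", "semantic")):
        results = []
        for scene in self.scenes:
            for rule in self.rules:
                scene = rule(scene)
            # Rendering is stubbed out; a real system would invoke a renderer here.
            results.append({kind: f"{scene['id']}_{kind}.png" for kind in outputs})
        return results


# Usage: jitter furniture placement in two toy scenes and list the output images.
scenes = [{"id": "scene_0", "entities": [{"position": (1.0, 2.0)}]},
          {"id": "scene_1", "entities": [{"position": (0.5, 3.0)}]}]
sampler = EntitySampler(jitter=0.05, seed=42)

def jitter_furniture(scene):
    scene["entities"] = [sampler.apply(e) for e in scene["entities"]]
    return scene

pipeline = ScenePipeline(scenes).add_rule(jitter_furniture)
print(pipeline.run())

The sketch separates scene-level selection and rules from entity-level sampling only to mirror the multilevel-sampler idea mentioned in the abstract; the real system's interfaces and granularity may differ.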
CCS Concepts: Computing methodologies → Graphics systems and interfaces

@article{10.1111:cgf.14657,
  journal   = {Computer Graphics Forum},
  title     = {{MINERVAS: Massive INterior EnviRonments VirtuAl Synthesis}},
  author    = {Ren, Haocheng and Zhang, Hao and Zheng, Jia and Zheng, Jiaxiang and Tang, Rui and Huo, Yuchi and Bao, Hujun and Wang, Rui},
  year      = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.14657}
}