Monocular Image Based 3D Model Retrieval
Date
2019
Authors
Li, Wenhui
Liu, Anan
Nie, Weizhi
Song, Dan
Li, Yuqian
Wang, Weijie
Xiang, Shu
Zhou, Heyu
Bui, Ngoc-Minh
Cen, Yunchi
Chen, Zenian
Chung-Nguyen, Huy-Hoang
Diep, Gia-Han
Do, Trong-Le
Doubrovski, Eugeni L.
Duong, Anh-Duc
Geraedts, Jo M. P.
Guo, Haobin
Hoang, Trung-Hieu
Li, Yichen
Liu, Xing
Liu, Zishun
Luu, Duc-Tuan
Ma, Yunsheng
Nguyen, Vinh-Tiep
Nie, Jie
Ren, Tongwei
Tran, Mai-Khiem
Tran-Nguyen, Son-Thanh
Tran, Minh-Triet
Vu-Le, The-Anh
Wang, Charlie C. L.
Wang, Shijie
Wu, Gangshan
Yang, Caifei
Yuan, Meng
Zhai, Hao
Zhang, Ao
Zhang, Fan
Zhao, Sicheng
Publisher
The Eurographics Association
Abstract
Monocular image based 3D object retrieval is a novel and challenging research topic in the field of 3D object retrieval. Given an RGB image captured in the real world, it aims to search for relevant 3D objects from a dataset. To advance this promising research, we organize this SHREC track and build the first monocular image based 3D object retrieval benchmark by collecting 2D images from ImageNet and 3D objects from popular 3D datasets such as NTU, PSB, ModelNet40 and ShapeNet. The benchmark contains 21,000 classified 2D images and 7,690 3D objects in 21 categories. The track attracted 9 groups from 4 countries and the submission of 20 runs. For a comprehensive comparison, 7 commonly used retrieval performance metrics were applied to evaluate retrieval performance. The evaluation results show that supervised cross-domain learning achieves superior retrieval performance (best NN: 97.4%) by bridging the domain gap with label information. Unsupervised cross-domain learning, which is more practical for real applications, remains a considerable challenge (best NN: 61.2%). Although we provided both view images and an OBJ file for each 3D model, all participants used the view images to represent the 3D models. An interesting direction for future work is to directly combine the 3D information with the 2D RGB information to solve the task of monocular image based 3D model retrieval.
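The abstract reports results with the NN (nearest-neighbour precision) metric, one of the 7 retrieval metrics used in the track. As a minimal sketch of how such a score can be computed for image-to-3D-model retrieval, the snippet below assumes each query photo and each 3D model is already represented by a feature vector (e.g. a 3D model represented by averaged view-image features, as the participants did); the function and variable names, feature dimensions, and random data are hypothetical illustrations, not the track's official evaluation code.

```python
import numpy as np

def nearest_neighbour_precision(query_feats, query_labels, target_feats, target_labels):
    """NN metric: fraction of queries whose top-ranked 3D model shares the query's category.

    query_feats:  (Q, D) image embeddings, one per query photo
    target_feats: (T, D) 3D-model embeddings, e.g. averaged view-image features
    """
    # Cosine similarity between every query image and every 3D model
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    t = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
    sim = q @ t.T                      # (Q, T) similarity matrix
    top1 = sim.argmax(axis=1)          # index of the best-ranked model per query
    return float(np.mean(target_labels[top1] == query_labels))

# Hypothetical usage with random embeddings over the benchmark's 21 categories
rng = np.random.default_rng(0)
img_feats = rng.normal(size=(100, 128))
obj_feats = rng.normal(size=(50, 128))
img_labels = rng.integers(0, 21, size=100)
obj_labels = rng.integers(0, 21, size=50)
print("NN =", nearest_neighbour_precision(img_feats, img_labels, obj_feats, obj_labels))
```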
Description
@inproceedings{10.2312:3dor.20191068,
booktitle = {Eurographics Workshop on 3D Object Retrieval},
editor = {Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco},
title = {{Monocular Image Based 3D Model Retrieval}},
author = {Li, Wenhui and Liu, Anan and Chen, Zenian and Chung-Nguyen, Huy-Hoang and Diep, Gia-Han and Do, Trong-Le and Doubrovski, Eugeni L. and Duong, Anh-Duc and Geraedts, Jo M. P. and Guo, Haobin and Hoang, Trung-Hieu and Li, Yichen and Nie, Weizhi and Liu, Xing and Liu, Zishun and Luu, Duc-Tuan and Ma, Yunsheng and Nguyen, Vinh-Tiep and Nie, Jie and Ren, Tongwei and Tran, Mai-Khiem and Tran-Nguyen, Son-Thanh and Tran, Minh-Triet and Song, Dan and Vu-Le, The-Anh and Wang, Charlie C. L. and Wang, Shijie and Wu, Gangshan and Yang, Caifei and Yuan, Meng and Zhai, Hao and Zhang, Ao and Zhang, Fan and Zhao, Sicheng and Li, Yuqian and Wang, Weijie and Xiang, Shu and Zhou, Heyu and Bui, Ngoc-Minh and Cen, Yunchi},
year = {2019},
publisher = {The Eurographics Association},
ISSN = {1997-0471},
ISBN = {978-3-03868-077-2},
DOI = {10.2312/3dor.20191068}
}