Title: RGB-D Object-to-CAD Retrieval
Authors: Pham, Quang-Hieu; Tran, Minh-Khoi; Li, Wenhui; Xiang, Shu; Zhou, Heyu; Nie, Weizhi; Liu, Anan; Su, Yuting; Tran, Minh-Triet; Bui, Ngoc-Minh; Do, Trong-Le; Ninh, Tu V.; Le, Tu-Khiem; Dao, Anh-Vu; Nguyen, Vinh-Tiep; Do, Minh N.; Duong, Anh-Duc; Hua, Binh-Son; Yu, Lap-Fai; Nguyen, Duc Thanh; Yeung, Sai-Kit
Editors: Telea, Alex; Theoharis, Theoharis; Veltkamp, Remco
Date: 2018-04-14
Year: 2018
ISBN: 978-3-03868-053-6
ISSN: 1997-0471
DOI: https://doi.org/10.2312/3dor.20181052
URL: https://diglib.eg.org:443/handle/10.2312/3dor20181052
Pages: 45-52

Abstract: Recent advances in consumer-grade depth sensors have enabled the collection of massive numbers of real-world 3D objects. Together with the rise of deep learning, this brings great potential for large-scale 3D object retrieval. In this challenge, we aim to study and evaluate the performance of 3D object retrieval algorithms on RGB-D data. To support the study, we expanded the previous ObjectNN dataset [HTT 17] to include RGB-D objects from both SceneNN [HPN 16] and ScanNet [DCS 17], with CAD models from ShapeNetSem [CFG 15]. Evaluation results show that while the RGB-D-to-CAD retrieval problem is indeed challenging due to incomplete RGB-D reconstructions, it can be addressed to a certain extent using deep learning techniques trained on multi-view 2D images or 3D point clouds. The best method in this track achieves 82% retrieval accuracy.
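The abstract reports retrieval accuracy for methods that embed RGB-D scans and CAD models with learned features and match them by similarity. As a minimal illustrative sketch (not the track's actual protocol; all function names, the cosine-similarity choice, and the toy data are assumptions), top-1 retrieval accuracy over such embeddings could be computed like this:

```python
import numpy as np

def retrieve_nearest(query_embeddings, cad_embeddings, cad_labels):
    """Predict a CAD category for each RGB-D query via top-1 cosine similarity.

    query_embeddings: (num_queries, dim) features of reconstructed RGB-D objects
    cad_embeddings:   (num_models, dim) features of CAD models
    cad_labels:       (num_models,) category label of each CAD model
    """
    # L2-normalize rows so a dot product equals cosine similarity
    q = query_embeddings / np.linalg.norm(query_embeddings, axis=1, keepdims=True)
    c = cad_embeddings / np.linalg.norm(cad_embeddings, axis=1, keepdims=True)
    sims = q @ c.T                 # (num_queries, num_models) similarity matrix
    nearest = sims.argmax(axis=1)  # index of the best-matching CAD model per query
    return cad_labels[nearest]

def retrieval_accuracy(predicted, ground_truth):
    """Fraction of queries whose top-1 retrieved CAD model is correctly categorized."""
    return float(np.mean(predicted == ground_truth))

# Toy usage: two CAD categories along orthogonal embedding axes
cad_emb = np.array([[1.0, 0.0], [0.0, 1.0]])
cad_lab = np.array([0, 1])
queries = np.array([[0.9, 0.1], [0.2, 0.8]])   # noisy RGB-D query features
pred = retrieve_nearest(queries, cad_emb, cad_lab)
acc = retrieval_accuracy(pred, np.array([0, 1]))
```

In practice the challenge methods derive such embeddings from multi-view 2D renderings or 3D point clouds, as the abstract notes; the matching step itself is the simple nearest-neighbor search shown above.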