Title: RGB-D to CAD Retrieval with ObjectNN Dataset
Authors: Hua, Binh-Son; Truong, Quang-Trung; Tran, Minh-Khoi; Pham, Quang-Hieu; Kanezaki, Asako; Lee, Tang; Chiang, Hung-Yueh; Hsu, Winston; Li, Bo; Lu, Yijuan; Johan, Henry; Tashiro, Shoki; Aono, Masaki; Tran, Minh-Triet; Pham, Viet-Khoi; Nguyen, Hai-Dang; Nguyen, Vinh-Tiep; Tran, Quang-Thang; Phan, Thuyen V.; Truong, Bao; Do, Minh N.; Duong, Anh-Duc; Yu, Lap-Fai; Nguyen, Duc Thanh; Yeung, Sai-Kit
Editors: Ioannis Pratikakis, Florent Dupont, Maks Ovsjanikov
Date: 2017-04-22 (issued 2017)
ISBN: 978-3-03868-030-7
ISSN: 1997-0471
DOI: https://doi.org/10.2312/3dor.20171048
URL: https://diglib.eg.org:443/handle/10.2312/3dor20171048
Pages: 25-32
Keywords: I.4.8 [Computer Vision]: Scene Analysis, Object Recognition

Abstract: The goal of this track is to study and evaluate the performance of 3D object retrieval algorithms using RGB-D data. It is inspired by the practical need to pair an object acquired with a consumer-grade depth camera to CAD models available in public datasets on the Internet. To support the study, we propose ObjectNN, a new dataset of well-segmented and annotated RGB-D objects from SceneNN [HPN*16] and CAD models from ShapeNet [CFG*15]. The evaluation results show that the RGB-D to CAD retrieval problem, while challenging to solve due to partial and noisy 3D reconstruction, can be addressed to a good extent using deep learning techniques, particularly convolutional neural networks trained on multi-view images and 3D geometry. The best method in this track scores 82% in accuracy.