dc.contributor.author: Hua, Binh-Son
dc.contributor.author: Truong, Quang-Trung
dc.contributor.author: Tran, Minh-Khoi
dc.contributor.author: Pham, Quang-Hieu
dc.contributor.author: Kanezaki, Asako
dc.contributor.author: Lee, Tang
dc.contributor.author: Chiang, HungYueh
dc.contributor.author: Hsu, Winston
dc.contributor.author: Li, Bo
dc.contributor.author: Lu, Yijuan
dc.contributor.author: Johan, Henry
dc.contributor.author: Tashiro, Shoki
dc.contributor.author: Aono, Masaki
dc.contributor.author: Tran, Minh-Triet
dc.contributor.author: Pham, Viet-Khoi
dc.contributor.author: Nguyen, Hai-Dang
dc.contributor.author: Nguyen, Vinh-Tiep
dc.contributor.author: Tran, Quang-Thang
dc.contributor.author: Phan, Thuyen V.
dc.contributor.author: Truong, Bao
dc.contributor.author: Do, Minh N.
dc.contributor.author: Duong, Anh-Duc
dc.contributor.author: Yu, Lap-Fai
dc.contributor.author: Nguyen, Duc Thanh
dc.contributor.author: Yeung, Sai-Kit
dc.contributor.editor: Ioannis Pratikakis and Florent Dupont and Maks Ovsjanikov
dc.date.accessioned: 2017-04-22T17:17:40Z
dc.date.available: 2017-04-22T17:17:40Z
dc.date.issued: 2017
dc.identifier.isbn: 978-3-03868-030-7
dc.identifier.issn: 1997-0471
dc.identifier.uri: http://dx.doi.org/10.2312/3dor.20171048
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/3dor20171048
dc.description.abstract: The goal of this track is to study and evaluate the performance of 3D object retrieval algorithms using RGB-D data. It is inspired by the practical need to pair an object acquired with a consumer-grade depth camera to CAD models available in public datasets on the Internet. To support the study, we propose ObjectNN, a new dataset with well-segmented and annotated RGB-D objects from SceneNN [HPN*16] and CAD models from ShapeNet [CFG*15]. The evaluation results show that the RGB-D to CAD retrieval problem, while challenging to solve due to partial and noisy 3D reconstruction, can be addressed to a good extent using deep learning techniques, particularly convolutional neural networks trained on multi-view images and 3D geometry. The best method in this track achieves 82% accuracy.
dc.publisher: The Eurographics Association
dc.subject: I.4.8 [Computer Vision]
dc.subject: Scene Analysis
dc.subject: Object Recognition
dc.title: RGB-D to CAD Retrieval with ObjectNN Dataset
dc.description.seriesinformation: Eurographics Workshop on 3D Object Retrieval
dc.description.sectionheaders: SHREC Session I
dc.identifier.doi: 10.2312/3dor.20171048
dc.identifier.pages: 25-32

