3D Object Retrieval with Multimodal Views

Authors: Gao, Yue; Nie, Weizhi; Liu, Anan; Su, Yuting; Dai, Qionghai; An, Le; Chen, Fuhai; Cao, Liujuan; Dong, Shuilong; De, Yu; Gao, Zan; Hao, Jiayun; Ji, Rongrong; Li, Haisheng; Liu, Mingxia; Pan, Lili; Qiu, Yu; Wei, Liwei; Wang, Zhao; Wei, Hongjiang; Zhang, Yuyao; Zhang, Jun; Zhang, Yang; Zheng, Yali
Editors: A. Ferreira, A. Giachetti, D. Giorgi
Published: 2016-05-04
Year: 2016
ISBN: 978-3-03868-004-8
ISSN: 1997-0471
DOI: https://doi.org/10.2312/3dor.20161093
Pages: 99-106
CCS: I.3.3 [Information Storage and Retrieval]; Content Analysis and Indexing; Abstracting methods

Abstract: This paper reports the results of the SHREC'16 track on 3D Object Retrieval with Multimodal Views, whose goal is to evaluate the performance of retrieval algorithms when multimodal views are employed for 3D object representation. For this task, a collection of 605 objects was generated, and both color images and depth images are provided for each object. 200 objects, comprising 100 3D printing models and 100 real 3D objects, are selected as queries, while the other 405 objects serve as targets, and average retrieval performance is measured. The track attracted seven participants and the submission of nine runs. Compared with last year's results, the 3D printing models clearly degrade retrieval quality: this year's performance is worse than last year's. Nevertheless, the results point to a promising direction for multimodal view-based 3D retrieval methods and reveal interesting insights into dealing with multimodal data.
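The abstract does not name the specific metrics behind "average retrieval performance", so as a minimal illustrative sketch (not the track's official evaluation code), the following computes mean average precision (mAP) for a query/target split like the one described, where each of the 200 queries is ranked against the 405 target objects. The class labels and rankings below are hypothetical.

```python
def average_precision(ranked_labels, query_label):
    """AP for one query: ranked_labels is the target set sorted by
    descending similarity to the query; a target is relevant when its
    class label matches the query's."""
    hits = 0
    precision_sum = 0.0
    for rank, label in enumerate(ranked_labels, start=1):
        if label == query_label:
            hits += 1
            precision_sum += hits / rank  # precision at this relevant rank
    return precision_sum / hits if hits else 0.0


def mean_average_precision(queries):
    """queries: list of (query_label, ranked_target_labels) pairs."""
    return sum(average_precision(r, q) for q, r in queries) / len(queries)


# Toy example: two hypothetical queries over a five-object target set.
queries = [
    ("cup",  ["cup", "chair", "cup", "lamp", "chair"]),
    ("lamp", ["chair", "lamp", "cup", "lamp", "chair"]),
]
print(round(mean_average_precision(queries), 4))  # → 0.6667
```

In the track's actual setting, the ranked lists would come from each participant's retrieval algorithm applied to the multimodal (color plus depth) views, and the mean would be taken over all 200 queries.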