3DOR 17


Lyon, France | April 23-24, 2017

Paper Session I
Exploiting the PANORAMA Representation for Convolutional Neural Network Classification and Retrieval
Konstantinos Sfikas, Theoharis Theoharis, and Ioannis Pratikakis
LightNet: A Lightweight 3D Convolutional Neural Network for Real-Time 3D Object Recognition
Shuaifeng Zhi, Yongxiang Liu, Xiang Li, and Yulan Guo
Unstructured Point Cloud Semantic Labeling Using Deep Segmentation Networks
Alexandre Boulch, Bertrand Le Saux, and Nicolas Audebert
SHREC Session I
RGB-D to CAD Retrieval with ObjectNN Dataset
Binh-Son Hua, Quang-Trung Truong, Minh-Khoi Tran, Quang-Hieu Pham, Asako Kanezaki, Tang Lee, HungYueh Chiang, Winston Hsu, Bo Li, Yijuan Lu, Henry Johan, Shoki Tashiro, Masaki Aono, Minh-Triet Tran, Viet-Khoi Pham, Hai-Dang Nguyen, Vinh-Tiep Nguyen, Quang-Thang Tran, Thuyen V. Phan, Bao Truong, Minh N. Do, Anh-Duc Duong, Lap-Fai Yu, Duc Thanh Nguyen, and Sai-Kit Yeung
3D Hand Gesture Recognition Using a Depth and Skeletal Dataset
Quentin De Smedt, Hazem Wannous, Jean-Philippe Vandeborre, J. Guerry, B. Le Saux, and D. Filliat
Large-Scale 3D Shape Retrieval from ShapeNet Core55
Manolis Savva, Fisher Yu, Hao Su, Asako Kanezaki, Takahiko Furuya, Ryutarou Ohbuchi, Zhichao Zhou, Rui Yu, Song Bai, Xiang Bai, Masaki Aono, Atsushi Tatsuma, S. Thermos, A. Axenopoulos, G. Th. Papadopoulos, P. Daras, Xiao Deng, Zhouhui Lian, Bo Li, Henry Johan, Yijuan Lu, and Sanjeev Mk
Posters
Shape Similarity System driven by Digital Elevation Models for Non-rigid Shape Retrieval
Daniela Craciun, Guillaume Levieux, and Matthieu Montes
Sketch-based 3D Object Retrieval with Skeleton Line Views - Initial Results and Research Problems
Xueqing Zhao, Robert Gregor, Pavlos Mavridis, and Tobias Schreck
GSHOT: a Global Descriptor from SHOT to Reduce Time and Space Requirements
Carlos M. Mateo, Pablo Gil, and Fernando Torres
A Framework Based on Compressed Manifold Modes for Robust Local Spectral Analysis
Sylvain Haas, Atilla Baskurt, Florent Dupont, and Florence Denis
SHREC Session II
Protein Shape Retrieval
Na Song, Daniela Craciun, Charles W. Christoffer, Xusi Han, Daisuke Kihara, Guillaume Levieux, Matthieu Montes, Hong Qin, Pranjal Sahu, Genki Terashi, and Haiguang Liu
Point-Cloud Shape Retrieval of Non-Rigid Toys
F. A. Limberger, R. C. Wilson, M. Aono, N. Audebert, A. Boulch, B. Bustos, A. Giachetti, A. Godil, B. Le Saux, B. Li, Y. Lu, H.-D. Nguyen, V.-T. Nguyen, V.-K. Pham, I. Sipiran, A. Tatsuma, M.-T. Tran, and S. Velasco-Forero
Deformable Shape Retrieval with Missing Parts
E. Rodolà, L. Cosmo, O. Litany, M. M. Bronstein, A. M. Bronstein, N. Audebert, A. Ben Hamza, A. Boulch, U. Castellani, M. N. Do, A-D. Duong, T. Furuya, A. Gasparetto, Y. Hong, J. Kim, B. Le Saux, R. Litman, M. Masoumi, G. Minello, H-D. Nguyen, V-T. Nguyen, R. Ohbuchi, V-K. Pham, T. V. Phan, M. Rezaei, A. Torsello, M-T. Tran, Q-T. Tran, B. Truong, L. Wan, and C. Zou
Retrieval of Surfaces with Similar Relief Patterns
S. Biasotti, E. Moscoso Thompson, M. Aono, A. Ben Hamza, B. Bustos, S. Dong, B. Du, A. Fehri, H. Li, F. A. Limberger, M. Masoumi, M. Rezaei, I. Sipiran, L. Sun, A. Tatsuma, S. Velasco Forero, R. C. Wilson, Y. Wu, J. Zhang, T. Zhao, F. Fornasa, and A. Giachetti
Paper Session II
3D Mesh Unfolding via Semidefinite Programming
Juncheng Liu, Zhouhui Lian, and Jianguo Xiao
Directed Curvature Histograms for Robotic Grasping
Rodrigo Schulz, Pablo Guerrero, and Benjamin Bustos
Semantic Correspondence Across 3D Models for Example-based Modeling
Vincent Léon, Vincent Itier, Nicolas Bonneel, Guillaume Lavoué, and Jean-Philippe Vandeborre
Towards Recognizing of 3D Models Using A Single Image
Hatem A. Rashwan, Sylvie Chambon, Geraldine Morin, Pierre Gurdjos, and Vincent Charvillat

BibTeX (3DOR 17)
@inproceedings{10.2312:3dor.20171045,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor    = {Ioannis Pratikakis and Florent Dupont and Maks Ovsjanikov},
  title     = {{Exploiting the PANORAMA Representation for Convolutional Neural Network Classification and Retrieval}},
  author    = {Sfikas, Konstantinos and Theoharis, Theoharis and Pratikakis, Ioannis},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {1997-0471},
  ISBN      = {978-3-03868-030-7},
  DOI       = {10.2312/3dor.20171045}
}
@inproceedings{10.2312:3dor.20171046,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor    = {Ioannis Pratikakis and Florent Dupont and Maks Ovsjanikov},
  title     = {{LightNet: A Lightweight 3D Convolutional Neural Network for Real-Time 3D Object Recognition}},
  author    = {Zhi, Shuaifeng and Liu, Yongxiang and Li, Xiang and Guo, Yulan},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {1997-0471},
  ISBN      = {978-3-03868-030-7},
  DOI       = {10.2312/3dor.20171046}
}
@inproceedings{10.2312:3dor.20171048,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor    = {Ioannis Pratikakis and Florent Dupont and Maks Ovsjanikov},
  title     = {{RGB-D to CAD Retrieval with ObjectNN Dataset}},
  author    = {Hua, Binh-Son and Truong, Quang-Trung and Johan, Henry and Tashiro, Shoki and Aono, Masaki and Tran, Minh-Triet and Pham, Viet-Khoi and Nguyen, Hai-Dang and Nguyen, Vinh-Tiep and Tran, Quang-Thang and Phan, Thuyen V. and Truong, Bao and Tran, Minh-Khoi and Do, Minh N. and Duong, Anh-Duc and Yu, Lap-Fai and Nguyen, Duc Thanh and Yeung, Sai-Kit and Pham, Quang-Hieu and Kanezaki, Asako and Lee, Tang and Chiang, HungYueh and Hsu, Winston and Li, Bo and Lu, Yijuan},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {1997-0471},
  ISBN      = {978-3-03868-030-7},
  DOI       = {10.2312/3dor.20171048}
}
@inproceedings{10.2312:3dor.20171047,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor    = {Ioannis Pratikakis and Florent Dupont and Maks Ovsjanikov},
  title     = {{Unstructured Point Cloud Semantic Labeling Using Deep Segmentation Networks}},
  author    = {Boulch, Alexandre and Saux, Bertrand Le and Audebert, Nicolas},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {1997-0471},
  ISBN      = {978-3-03868-030-7},
  DOI       = {10.2312/3dor.20171047}
}
@inproceedings{10.2312:3dor.20171050,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor    = {Ioannis Pratikakis and Florent Dupont and Maks Ovsjanikov},
  title     = {{Large-Scale 3D Shape Retrieval from ShapeNet Core55}},
  author    = {Savva, Manolis and Yu, Fisher and Aono, Masaki and Tatsuma, Atsushi and Thermos, S. and Axenopoulos, A. and Papadopoulos, G. Th. and Daras, P. and Deng, Xiao and Lian, Zhouhui and Li, Bo and Johan, Henry and Su, Hao and Lu, Yijuan and Mk, Sanjeev and Kanezaki, Asako and Furuya, Takahiko and Ohbuchi, Ryutarou and Zhou, Zhichao and Yu, Rui and Bai, Song and Bai, Xiang},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {1997-0471},
  ISBN      = {978-3-03868-030-7},
  DOI       = {10.2312/3dor.20171050}
}
@inproceedings{10.2312:3dor.20171049,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor    = {Ioannis Pratikakis and Florent Dupont and Maks Ovsjanikov},
  title     = {{3D Hand Gesture Recognition Using a Depth and Skeletal Dataset}},
  author    = {Smedt, Quentin De and Wannous, Hazem and Vandeborre, Jean-Philippe and Guerry, J. and Saux, B. Le and Filliat, D.},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {1997-0471},
  ISBN      = {978-3-03868-030-7},
  DOI       = {10.2312/3dor.20171049}
}
@inproceedings{10.2312:3dor.20171051,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor    = {Ioannis Pratikakis and Florent Dupont and Maks Ovsjanikov},
  title     = {{Shape Similarity System driven by Digital Elevation Models for Non-rigid Shape Retrieval}},
  author    = {Craciun, Daniela and Levieux, Guillaume and Montes, Matthieu},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {1997-0471},
  ISBN      = {978-3-03868-030-7},
  DOI       = {10.2312/3dor.20171051}
}
@inproceedings{10.2312:3dor.20171053,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor    = {Ioannis Pratikakis and Florent Dupont and Maks Ovsjanikov},
  title     = {{GSHOT: a Global Descriptor from SHOT to Reduce Time and Space Requirements}},
  author    = {Mateo, Carlos M. and Gil, Pablo and Torres, Fernando},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {1997-0471},
  ISBN      = {978-3-03868-030-7},
  DOI       = {10.2312/3dor.20171053}
}
@inproceedings{10.2312:3dor.20171052,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor    = {Ioannis Pratikakis and Florent Dupont and Maks Ovsjanikov},
  title     = {{Sketch-based 3D Object Retrieval with Skeleton Line Views - Initial Results and Research Problems}},
  author    = {Zhao, Xueqing and Gregor, Robert and Mavridis, Pavlos and Schreck, Tobias},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {1997-0471},
  ISBN      = {978-3-03868-030-7},
  DOI       = {10.2312/3dor.20171052}
}
@inproceedings{10.2312:3dor.20171055,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor    = {Ioannis Pratikakis and Florent Dupont and Maks Ovsjanikov},
  title     = {{Protein Shape Retrieval}},
  author    = {Song, Na and Craciun, Daniela and Liu, Haiguang and Christoffer, Charles W. and Han, Xusi and Kihara, Daisuke and Levieux, Guillaume and Montes, Matthieu and Qin, Hong and Sahu, Pranjal and Terashi, Genki},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {1997-0471},
  ISBN      = {978-3-03868-030-7},
  DOI       = {10.2312/3dor.20171055}
}
@inproceedings{10.2312:3dor.20171054,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor    = {Ioannis Pratikakis and Florent Dupont and Maks Ovsjanikov},
  title     = {{A Framework Based on Compressed Manifold Modes for Robust Local Spectral Analysis}},
  author    = {Haas, Sylvain and Baskurt, Atilla and Dupont, Florent and Denis, Florence},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {1997-0471},
  ISBN      = {978-3-03868-030-7},
  DOI       = {10.2312/3dor.20171054}
}
@inproceedings{10.2312:3dor.20171056,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor    = {Ioannis Pratikakis and Florent Dupont and Maks Ovsjanikov},
  title     = {{Point-Cloud Shape Retrieval of Non-Rigid Toys}},
  author    = {Limberger, F. A. and Wilson, R. C. and Lu, Y. and Nguyen, H.-D. and Nguyen, V.-T. and Pham, V.-K. and Sipiran, I. and Tatsuma, A. and Tran, M.-T. and Velasco-Forero, S. and Aono, M. and Audebert, N. and Boulch, A. and Bustos, B. and Giachetti, A. and Godil, A. and Saux, B. Le and Li, B.},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {1997-0471},
  ISBN      = {978-3-03868-030-7},
  DOI       = {10.2312/3dor.20171056}
}
@inproceedings{10.2312:3dor.20171057,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor    = {Ioannis Pratikakis and Florent Dupont and Maks Ovsjanikov},
  title     = {{Deformable Shape Retrieval with Missing Parts}},
  author    = {Rodolà, E. and Cosmo, L. and Duong, A.-D. and Furuya, T. and Gasparetto, A. and Hong, Y. and Kim, J. and Saux, B. Le and Litman, R. and Masoumi, M. and Minello, G. and Nguyen, H.-D. and Litany, O. and Nguyen, V.-T. and Ohbuchi, R. and Pham, V.-K. and Phan, T. V. and Rezaei, M. and Torsello, A. and Tran, M.-T. and Tran, Q.-T. and Truong, B. and Wan, L. and Bronstein, M. M. and Zou, C. and Bronstein, A. M. and Audebert, N. and Hamza, A. Ben and Boulch, A. and Castellani, U. and Do, M. N.},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {1997-0471},
  ISBN      = {978-3-03868-030-7},
  DOI       = {10.2312/3dor.20171057}
}
@inproceedings{10.2312:3dor.20171058,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor    = {Ioannis Pratikakis and Florent Dupont and Maks Ovsjanikov},
  title     = {{Retrieval of Surfaces with Similar Relief Patterns}},
  author    = {Biasotti, S. and Thompson, E. Moscoso and Masoumi, M. and Rezaei, M. and Sipiran, I. and Sun, L. and Tatsuma, A. and Forero, S. Velasco and Wilson, R. C. and Wu, Y. and Zhang, J. and Zhao, T. and Aono, M. and Fornasa, F. and Giachetti, A. and Hamza, A. Ben and Bustos, B. and Dong, S. and Du, B. and Fehri, A. and Li, H. and Limberger, F. A.},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {1997-0471},
  ISBN      = {978-3-03868-030-7},
  DOI       = {10.2312/3dor.20171058}
}
@inproceedings{10.2312:3dor.20171059,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor    = {Ioannis Pratikakis and Florent Dupont and Maks Ovsjanikov},
  title     = {{3D Mesh Unfolding via Semidefinite Programming}},
  author    = {Liu, Juncheng and Lian, Zhouhui and Xiao, Jianguo},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {1997-0471},
  ISBN      = {978-3-03868-030-7},
  DOI       = {10.2312/3dor.20171059}
}
@inproceedings{10.2312:3dor.20171061,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor    = {Ioannis Pratikakis and Florent Dupont and Maks Ovsjanikov},
  title     = {{Semantic Correspondence Across 3D Models for Example-based Modeling}},
  author    = {Léon, Vincent and Itier, Vincent and Bonneel, Nicolas and Lavoué, Guillaume and Vandeborre, Jean-Philippe},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {1997-0471},
  ISBN      = {978-3-03868-030-7},
  DOI       = {10.2312/3dor.20171061}
}
@inproceedings{10.2312:3dor.20171062,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor    = {Ioannis Pratikakis and Florent Dupont and Maks Ovsjanikov},
  title     = {{Towards Recognizing of 3D Models Using A Single Image}},
  author    = {Rashwan, Hatem A. and Chambon, Sylvie and Morin, Geraldine and Gurdjos, Pierre and Charvillat, Vincent},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {1997-0471},
  ISBN      = {978-3-03868-030-7},
  DOI       = {10.2312/3dor.20171062}
}
@inproceedings{10.2312:3dor.20171060,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor    = {Ioannis Pratikakis and Florent Dupont and Maks Ovsjanikov},
  title     = {{Directed Curvature Histograms for Robotic Grasping}},
  author    = {Schulz, Rodrigo and Guerrero, Pablo and Bustos, Benjamin},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {1997-0471},
  ISBN      = {978-3-03868-030-7},
  DOI       = {10.2312/3dor.20171060}
}


Recent Submissions

  • Item
    Exploiting the PANORAMA Representation for Convolutional Neural Network Classification and Retrieval
    (The Eurographics Association, 2017) Sfikas, Konstantinos; Theoharis, Theoharis; Pratikakis, Ioannis
    A novel 3D model classification and retrieval method, based on the PANORAMA representation and Convolutional Neural Networks, is presented. Initially, the 3D models are pose normalized using the SYMPAN method; the PANORAMA representation is then extracted and used to train a convolutional neural network. Training is based on augmented versions of the extracted panoramic views. The proposed method is evaluated in terms of classification and retrieval accuracy on standard large-scale datasets.
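    For intuition, a rough sketch of a cylindrical panoramic depth projection of a pose-normalized mesh is given below; the function and parameter names are illustrative and not taken from the paper.

    import numpy as np

    def cylindrical_panorama(vertices, width=360, height=180):
        """vertices: (N, 3) array of a pose-normalized model (principal axis on z)."""
        x, y, z = vertices.T
        theta = np.arctan2(y, x)                                  # angle around the axis
        r = np.sqrt(x ** 2 + y ** 2)                              # distance from the axis
        u = ((theta + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
        v = ((z - z.min()) / (np.ptp(z) + 1e-12) * (height - 1)).astype(int)
        pano = np.zeros((height, width))
        np.maximum.at(pano, (v, u), r)                            # keep farthest surface point per cell
        return pano / (pano.max() + 1e-12)                        # normalized image fed to the CNN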
  • Item
    3DOR 2017: Frontmatter
    (Eurographics Association, 2017) Pratikakis, Ioannis; Dupont, Florent; Ovsjanikov, Maks
  • Item
    LightNet: A Lightweight 3D Convolutional Neural Network for Real-Time 3D Object Recognition
    (The Eurographics Association, 2017) Zhi, Shuaifeng; Liu, Yongxiang; Li, Xiang; Guo, Yulan
    With the rapid growth of 3D data, accurate and efficient 3D object recognition is becoming a major problem. Machine learning methods, especially deep convolutional neural networks, have achieved state-of-the-art performance in this area. However, existing network models have high computational cost and are unsuitable for real-time 3D object recognition applications. In this paper, we propose LightNet, a lightweight 3D convolutional neural network for real-time 3D object recognition. It achieves accuracy comparable to the state-of-the-art methods with a single model and extremely low computational cost. Experiments have been conducted on the ModelNet and Sydney Urban Objects datasets. Our model improves on VoxNet by a relative 17.4% and 23.1% on the ModelNet10 and ModelNet40 benchmarks, respectively, with less than 67% of its training parameters. We also demonstrate that the model can be applied in real-time scenarios.
  • Item
    RGB-D to CAD Retrieval with ObjectNN Dataset
    (The Eurographics Association, 2017) Hua, Binh-Son; Truong, Quang-Trung; Tran, Minh-Khoi; Pham, Quang-Hieu; Kanezaki, Asako; Lee, Tang; Chiang, HungYueh; Hsu, Winston; Li, Bo; Lu, Yijuan; Johan, Henry; Tashiro, Shoki; Aono, Masaki; Tran, Minh-Triet; Pham, Viet-Khoi; Nguyen, Hai-Dang; Nguyen, Vinh-Tiep; Tran, Quang-Thang; Phan, Thuyen V.; Truong, Bao; Do, Minh N.; Duong, Anh-Duc; Yu, Lap-Fai; Nguyen, Duc Thanh; Yeung, Sai-Kit
    The goal of this track is to study and evaluate the performance of 3D object retrieval algorithms using RGB-D data. This is inspired by the practical need to pair an object acquired from a consumer-grade depth camera to CAD models available in public datasets on the Internet. To support the study, we propose ObjectNN, a new dataset with well-segmented and annotated RGB-D objects from SceneNN [HPN 16] and CAD models from ShapeNet [CFG 15]. The evaluation results show that the RGB-D to CAD retrieval problem, while challenging to solve due to partial and noisy 3D reconstruction, can be addressed to a good extent using deep learning techniques, particularly convolutional neural networks trained on multi-view and 3D geometry. The best method in this track scores 82% in accuracy.
  • Item
    Unstructured Point Cloud Semantic Labeling Using Deep Segmentation Networks
    (The Eurographics Association, 2017) Boulch, Alexandre; Saux, Bertrand Le; Audebert, Nicolas
    In this work, we describe a new, general, and efficient method for unstructured point cloud labeling. As the question of efficiently using deep Convolutional Neural Networks (CNNs) on 3D data is still a pending issue, we propose a framework which applies CNNs on multiple 2D image views (or snapshots) of the point cloud. The approach consists of three core ideas. (i) We pick many suitable snapshots of the point cloud. We generate two types of images: a Red-Green-Blue (RGB) view and a depth composite view containing geometric features. (ii) We then perform a pixel-wise labeling of each pair of 2D snapshots using fully convolutional networks. Different architectures are tested to achieve a profitable fusion of our heterogeneous inputs. (iii) Finally, we perform fast back-projection of the label predictions into 3D space using efficient buffering to label every 3D point. Experiments show that our method is suitable for various types of point clouds such as Lidar or photogrammetric data.
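    A minimal sketch of this three-step snapshot pipeline is given below; sample_camera_poses, render_view and the seg_net interface are hypothetical placeholders standing in for the snapshot generation, rendering and fully convolutional network components described above.

    import numpy as np

    def label_point_cloud(points, colors, seg_net, n_views=64):
        """Fuse per-pixel class scores from many 2D snapshots into per-point labels."""
        votes = np.zeros((len(points), seg_net.n_classes))
        for view in sample_camera_poses(points, n_views):          # (i) pick suitable snapshots
            rgb, depth_composite, pixel_to_point = render_view(points, colors, view)
            probs = seg_net.predict(rgb, depth_composite)          # (ii) pixel-wise FCN labeling
            for (row, col), idx in pixel_to_point.items():         # (iii) back-project to 3D
                votes[idx] += probs[row, col]
        return votes.argmax(axis=1)                                # one label per 3D point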
  • Item
    Large-Scale 3D Shape Retrieval from ShapeNet Core55
    (The Eurographics Association, 2017) Savva, Manolis; Yu, Fisher; Su, Hao; Kanezaki, Asako; Furuya, Takahiko; Ohbuchi, Ryutarou; Zhou, Zhichao; Yu, Rui; Bai, Song; Bai, Xiang; Aono, Masaki; Tatsuma, Atsushi; Thermos, S.; Axenopoulos, A.; Papadopoulos, G. Th.; Daras, P.; Deng, Xiao; Lian, Zhouhui; Li, Bo; Johan, Henry; Lu, Yijuan; Mk, Sanjeev
    With the advent of commodity 3D capturing devices and better 3D modeling tools, 3D shape content is becoming increasingly prevalent. Therefore, the need for shape retrieval algorithms to handle large-scale shape repositories is more and more important. This track provides a benchmark to evaluate large-scale 3D shape retrieval based on the ShapeNet dataset. It is a continuation of the SHREC 2016 large-scale shape retrieval challenge, with the goal of measuring progress with recent developments in deep learning methods for shape retrieval. We use ShapeNet Core55, which provides more than 50 thousand models over 55 common categories in total, for training and evaluating several algorithms. Eight participating teams have submitted a variety of retrieval methods which were evaluated on several standard information retrieval performance metrics. The approaches vary in terms of the 3D representation, using multi-view projections, point sets, volumetric grids, or traditional 3D shape descriptors. Overall performance on the shape retrieval task has improved significantly compared to the previous iteration of this competition at SHREC 2016. We release all data, results, and evaluation code for the benefit of the community and to catalyze future research into large-scale 3D shape retrieval (website: https://www.shapenet.org/shrec17).
  • Item
    3D Hand Gesture Recognition Using a Depth and Skeletal Dataset
    (The Eurographics Association, 2017) Smedt, Quentin De; Wannous, Hazem; Vandeborre, Jean-Philippe; Guerry, J.; Saux, B. Le; Filliat, D.
    Hand gesture recognition is becoming one of the most attractive fields of research in pattern recognition. The objective of this track is to evaluate the performance of recent recognition approaches using a challenging hand gesture dataset containing 14 gestures, performed by 28 participants executing the same gesture with two different numbers of fingers. Two research groups participated in this track; the accuracy of their recognition algorithms has been evaluated and compared to three other state-of-the-art approaches.
  • Item
    Shape Similarity System driven by Digital Elevation Models for Non-rigid Shape Retrieval
    (The Eurographics Association, 2017) Craciun, Daniela; Levieux, Guillaume; Montes, Matthieu
    Shape similarity computation is the main functionality of shape matching and shape retrieval systems. Existing shape similarity frameworks proceed by parameterizing shapes through global and/or local representations computed in 3D or 2D space. Up to now, global methods have demonstrated their rapidity, while local approaches offer slower but more accurate solutions. This paper presents a shape similarity system driven by a global descriptor encoded as a Digital Elevation Model (DEM) associated with the input mesh. The DEM descriptor is obtained through the joint use of a mesh flattening technique and a 2D panoramic projection. Experimental results on the public dataset TOSCA [BBK08] and a comparison with state-of-the-art methods illustrate the effectiveness of the proposed method in terms of accuracy and efficiency.
  • Item
    GSHOT: a Global Descriptor from SHOT to Reduce Time and Space Requirements
    (The Eurographics Association, 2017) Mateo, Carlos M.; Gil, Pablo; Torres, Fernando
    This paper presents a new 3D global feature descriptor for object recognition using shape representation on organized point clouds. Object recognition applications usually have demanding speed and memory requirements. The proposed descriptor requires 57 times less memory and is up to 3 times faster than the local feature descriptor on which it is based. Experimental results indicate that this new 3D global descriptor obtains better matching scores in comparison with known state-of-the-art 3D feature descriptors on two standard benchmark datasets.
  • Item
    Sketch-based 3D Object Retrieval with Skeleton Line Views - Initial Results and Research Problems
    (The Eurographics Association, 2017) Zhao, Xueqing; Gregor, Robert; Mavridis, Pavlos; Schreck, Tobias
    Hand-drawn sketches are a convenient way to define 3D object retrieval queries. Numerous methods have been proposed for sketch-based 3D object retrieval. Such methods employ a non-photo-realistic rendering step to create sketch-like views from 3D objects for comparison with the sketch queries. An implicit assumption here is often that the sketch query resembles a perspective view of the 3D shape. However, based on personal inclination or the type of object, users often tend to draw skeleton views instead of perspective ones. In those cases, a retrieval relying on perspective views is not the best choice, as features extracted from skeleton-based and perspective sketches can be expected to diverge vastly. In this paper, we report on our ongoing work to implement sketch-based 3D object retrieval for skeleton query sketches. Furthermore, we provide an initial benchmark data set consisting of skeleton sketches for a selection of generic object classes. Then, we design a sketch-based retrieval processing pipeline involving a sketch rendering step using Laplacian contraction. Additional experimental results indicate that skeleton sketches can be automatically distinguished from perspective sketches, and that the proposed method works for selected object classes. We also identify object classes for which the rendering of skeleton views is difficult, motivating further research.
  • Item
    Protein Shape Retrieval
    (The Eurographics Association, 2017) Song, Na; Craciun, Daniela; Christoffer, Charles W.; Han, Xusi; Kihara, Daisuke; Levieux, Guillaume; Montes, Matthieu; Qin, Hong; Sahu, Pranjal; Terashi, Genki; Liu, Haiguang
    The large number of protein structures deposited in the protein database provides an opportunity to examine structural relations using computational algorithms, which can be used to classify the structures based on shape similarity. In this paper, we report the results of the SHREC 2017 track on shape retrieval from a protein database. The goal of this track is to test the performance of the algorithms proposed by participants for the retrieval of bio-shapes (proteins). The test set is composed of 5,854 abstracted shapes from actual protein structures after removing model redundancy. Ten query shapes were selected from a set of representative molecules that have important biological functions. Six methods from four teams were evaluated; their performance is summarized in this report, comparing both retrieval accuracy and computational speed. The biological relevance of the shape retrieval approaches is discussed. We also discuss future perspectives of shape retrieval for biological molecular models.
  • Item
    A Framework Based on Compressed Manifold Modes for Robust Local Spectral Analysis
    (The Eurographics Association, 2017) Haas, Sylvain; Baskurt, Atilla; Dupont, Florent; Denis, Florence
    Compressed Manifold Modes (CMM) were recently introduced as a solution to one of the drawbacks of spectral analysis on triangular meshes. The eigenfunctions of the Laplace-Beltrami operator on a mesh depend on the whole shape, which makes them sensitive to local aspects. CMM are solutions of an extended problem that have a compact rather than global support and are thus suitable for a wider range of applications. In order to use CMM in real applications, an extensive test has been performed to better understand the limits of their computation (convergence and speed) according to the compactness parameter, the mesh resolution and the number of requested modes. The contribution of this paper is to propose a robust choice of parameters, the automated computation of an adequate number of modes (or eigenfunctions), stability with multiresolution and isometric meshes, and an example application with high potential for shape indexation.
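    For reference, compressed manifold modes are usually obtained as minimizers of an L1-regularized eigenproblem; a sketch of that standard formulation, with mu playing the role of the compactness parameter discussed above, is:

    \[
      \min_{\Phi \in \mathbb{R}^{n \times k}} \operatorname{tr}\bigl(\Phi^{\top} L \Phi\bigr) + \mu \,\lVert \Phi \rVert_{1}
      \quad \text{subject to} \quad \Phi^{\top} D \Phi = I_k ,
    \]

    where L is the stiffness (cotangent Laplacian) matrix of the mesh, D its lumped mass matrix, k the number of requested modes, and larger mu yields more compactly supported modes.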
  • Item
    Point-Cloud Shape Retrieval of Non-Rigid Toys
    (The Eurographics Association, 2017) Limberger, F. A.; Wilson, R. C.; Aono, M.; Audebert, N.; Boulch, A.; Bustos, B.; Giachetti, A.; Godil, A.; Saux, B. Le; Li, B.; Lu, Y.; Nguyen, H.-D.; Nguyen, V.-T.; Pham, V.-K.; Sipiran, I.; Tatsuma, A.; Tran, M.-T.; Velasco-Forero, S.
    In this paper, we present the results of the SHREC'17 Track: Point-Cloud Shape Retrieval of Non-Rigid Toys. The aim of this track is to create a fair benchmark to evaluate the performance of methods on the non-rigid point-cloud shape retrieval problem. The database used in this task contains 100 3D point-cloud models which are classified into 10 different categories. All point clouds were generated by scanning each of the models in its final pose with a 3D scanner, i.e., all models were articulated before being scanned. The retrieval performance is evaluated using seven commonly-used statistics (PR-plot, NN, FT, ST, E-measure, DCG, mAP). In total, 8 groups took part in this contest with 31 submissions. The evaluation results suggest that researchers are on the right track towards shape descriptors which can capture the main characteristics of 3D models; however, more tests still need to be made, since this is the first time non-rigid signatures are compared for point-cloud shape retrieval.
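    The seven statistics above are the usual SHREC retrieval measures; a small sketch of how the scalar ones (NN, FT, ST, mAP) are typically computed from a single query's ranked list is shown below (the function name and arguments are illustrative, not from the track code).

    def retrieval_stats(ranked_labels, query_label, class_size):
        """ranked_labels excludes the query itself; class_size counts the query."""
        rel = [label == query_label for label in ranked_labels]
        c = class_size - 1                      # relevant items besides the query
        nn = 1.0 if rel[0] else 0.0             # nearest neighbour
        ft = sum(rel[:c]) / c                   # first tier
        st = sum(rel[:2 * c]) / c               # second tier
        hits, ap = 0, 0.0
        for k, r in enumerate(rel, start=1):    # average precision over the ranking
            if r:
                hits += 1
                ap += hits / k
        return nn, ft, st, ap / c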
  • Item
    Deformable Shape Retrieval with Missing Parts
    (The Eurographics Association, 2017) Rodolà, E.; Cosmo, L.; Litany, O.; Bronstein, M. M.; Bronstein, A. M.; Audebert, N.; Hamza, A. Ben; Boulch, A.; Castellani, U.; Do, M. N.; Duong, A.-D.; Furuya, T.; Gasparetto, A.; Hong, Y.; Kim, J.; Saux, B. Le; Litman, R.; Masoumi, M.; Minello, G.; Nguyen, H.-D.; Nguyen, V.-T.; Ohbuchi, R.; Pham, V.-K.; Phan, T. V.; Rezaei, M.; Torsello, A.; Tran, M.-T.; Tran, Q.-T.; Truong, B.; Wan, L.; Zou, C.
    Partial similarity problems arise in numerous applications that involve real data acquisition by 3D sensors, inevitably leading to missing parts due to occlusions and partial views. In this setting, the shapes to be retrieved may undergo a variety of transformations simultaneously, such as non-rigid deformations (changes in pose), topological noise, and missing parts - a combination of nuisance factors that renders the retrieval process extremely challenging. With this benchmark, we aim to evaluate the state of the art in deformable shape retrieval under such transformations. The benchmark is organized in two sub-challenges exemplifying different data modalities (3D vs. 2.5D). A total of 15 retrieval algorithms were evaluated in the contest; this paper presents the details of the dataset and shows thorough comparisons among all competing methods.
  • Item
    Retrieval of Surfaces with Similar Relief Patterns
    (The Eurographics Association, 2017) Biasotti, S.; Thompson, E. Moscoso; Aono, M.; Hamza, A. Ben; Bustos, B.; Dong, S.; Du, B.; Fehri, A.; Li, H.; Limberger, F. A.; Masoumi, M.; Rezaei, M.; Sipiran, I.; Sun, L.; Tatsuma, A.; Forero, S. Velasco; Wilson, R. C.; Wu, Y.; Zhang, J.; Zhao, T.; Fornasa, F.; Giachetti, A.
    This paper presents the results of the SHREC'17 contest on retrieval of surfaces with similar relief patterns. The proposed task was created in order to verify the possibility of retrieving, from a database of small surface elements, surface patches with a relief pattern similar to a given example. This task, related to many real-world applications, requires an effective characterization of local "texture" information that does not depend on patch size and bending. The retrieval performance of the proposed methods reveals that the problem is not easy to solve and, even if some of the proposed methods demonstrate promising results, further research is surely needed to find effective relief-pattern characterization techniques for practical applications.
  • Item
    3D Mesh Unfolding via Semidefinite Programming
    (The Eurographics Association, 2017) Liu, Juncheng; Lian, Zhouhui; Xiao, Jianguo
    Mesh unfolding is a powerful pre-processing tool for many tasks such as non-rigid shape matching and retrieval. Shapes with articulated parts may exhibit large variations in pose, which brings difficulties to those tasks. With mesh unfolding, shapes in different poses can be transformed into similar canonical forms, which facilitates the subsequent applications. In this paper, we propose an automatic mesh unfolding algorithm based on semidefinite programming. The basic idea is to maximize the total variance of the vertex set of a given 3D mesh, while preserving the details by minimizing locally linear reconstruction errors. By optimizing a specifically designed objective function, vertices tend to move away from each other as far as possible, which leads to the unfolding operation. Compared to other Multi-Dimensional Scaling (MDS) based unfolding approaches, our method preserves significantly more details and requires no geodesic distance calculation. We demonstrate the advantages of our algorithm by performing 3D shape matching and retrieval on two publicly available datasets. Experimental results validate the effectiveness of our method both in visual judgment and in quantitative comparison.
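    One standard way to cast such an objective as a semidefinite program, in the spirit of maximum variance unfolding, optimizes the Gram matrix K = XX^T of the vertex positions; the sketch below only illustrates the idea and is not necessarily the exact formulation of the paper.

    \[
      \max_{K \succeq 0} \; \operatorname{tr}(K) - \lambda \operatorname{tr}(M K)
      \quad \text{subject to} \quad \textstyle\sum_{i,j} K_{ij} = 0, \qquad
      K_{ii} + K_{jj} - 2 K_{ij} \le \ell_{ij}^{2} \ \text{for every mesh edge } (i,j),
    \]

    where tr(K) is the total variance of the centered vertex set, M = (I - W)^T (I - W) encodes the locally linear reconstruction weights W, and the edge-length constraints keep local detail; an embedding is recovered from the top eigenvectors of the optimal K.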
  • Item
    Semantic Correspondence Across 3D Models for Example-based Modeling
    (The Eurographics Association, 2017) Léon, Vincent; Itier, Vincent; Bonneel, Nicolas; Lavoué, Guillaume; Vandeborre, Jean-Philippe
    Modeling 3D shapes is a specialized skill that is out of reach for most novice artists due to its complexity and tediousness. At the same time, databases of complex models ready for use are becoming widespread, and can help the modeling task in a process called example-based modeling. We introduce such an example-based mesh modeling approach which, contrary to prior work, allows for the replacement of any localized region of a mesh by a region of similar semantics (but different geometry) drawn from a mesh database. To this end, we introduce a selection tool in a space of semantic descriptors that co-selects areas of similar semantics within the database. Moreover, this tool can be used for part-based retrieval across the database. Then, we show how semantic information improves the assembly process. This allows for modeling complex meshes from a coarse geometry and a database of more detailed meshes, and makes modeling accessible to the novice user.
  • Item
    Towards Recognizing of 3D Models Using A Single Image
    (The Eurographics Association, 2017) Rashwan, Hatem A.; Chambon, Sylvie; Morin, Geraldine; Gurdjos, Pierre; Charvillat, Vincent
    As 3D data is getting more popular, techniques for retrieving a particular 3D model are necessary. We want to recognize a 3D model from a single photograph; since any user can easily get an image of a model he/she would like to find, querying by an image is simple and natural. However, a 2D intensity image depends on viewpoint, texture and lighting conditions, and thus matching it with a 3D geometric model is very challenging. This paper proposes a first step towards matching a 2D image to models, based on features repeatable in 2D images and in depth images (generated from 3D models); we show their independence from texture and lighting. Then, the detected features are matched to recognize 3D models by combining HOG (Histogram of Oriented Gradients) descriptors and repeatability scores. The proposed method reaches a recognition rate of 72% over 12 3D object categories, and outperforms classical feature detection techniques for recognizing 3D models from a single image.
  • Item
    Directed Curvature Histograms for Robotic Grasping
    (The Eurographics Association, 2017) Schulz, Rodrigo; Guerrero, Pablo; Bustos, Benjamin
    Three-dimensional descriptors are a common tool nowadays, used in a wide range of tasks. Most of the descriptors that have been proposed in the literature focus on tasks such as object recognition and identification. This paper proposes a novel three-dimensional local descriptor, structured as a set of histograms of the curvature observed on the surface of the object in different directions. This descriptor is designed with a focus on the robotic grasping problem, in particular on determining the orientation required to grasp an object. We validate our proposal following a data-driven approach, using grasping information and examples generated with the Gazebo simulator and a simulated PR2 robot. Experimental results show that the proposed descriptor is well suited for the grasping problem, exceeding the performance observed with recent descriptors.
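    A loose sketch of one way to build direction-indexed curvature histograms is given below, assuming per-point principal curvatures and directions have already been estimated (e.g. by a local quadric fit); it illustrates the general idea rather than the authors' exact construction, and all names are hypothetical.

    import numpy as np

    def directed_curvature_histograms(k1, k2, dir1, normals, view_dirs, bins=16, k_range=(-1.0, 1.0)):
        """k1, k2: (N,) principal curvatures; dir1: (N, 3) first principal direction;
        normals: (N, 3) unit normals; view_dirs: (D, 3) unit directions indexing the histograms."""
        hists = []
        for d in view_dirs:
            # project the global direction onto each point's tangent plane
            t = d - (normals @ d)[:, None] * normals
            t /= np.linalg.norm(t, axis=1, keepdims=True) + 1e-12
            cos_a = np.clip((t * dir1).sum(axis=1), -1.0, 1.0)
            # Euler's theorem: normal curvature along the projected direction
            kappa = k1 * cos_a ** 2 + k2 * (1.0 - cos_a ** 2)
            h, _ = np.histogram(kappa, bins=bins, range=k_range, density=True)
            hists.append(h)
        return np.concatenate(hists)       # one histogram per direction, stacked into a descriptor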