3DOR 19


Genova, Italy | May 5-6, 2019

Paper Session 1
POP: Full Parametric model Estimation for Occluded People
Riccardo Marin, Simone Melzi, Niloy J. Mitra, and Umberto Castellani
mpLBP: An Extension of the Local Binary Pattern to Surfaces based on an Efficient Coding of the Point Neighbours
Elia Moscoso Thompson, Silvia Biasotti, Julie Digne, and Raphaelle Chaine
Sketch-Aided Retrieval of Incomplete 3D Cultural Heritage Objects
Stefan Lengauer, Alexander Komar, Arniel Labrada, Stephan Karl, Elisabeth Trinkl, Reinhold Preiner, Benjamin Bustos, and Tobias Schreck
SHREC Session 1
Protein Shape Retrieval Contest
Florent Langenfeld, Apostolos Axenopoulos, Halim Benhabiles, Petros Daras, Andrea Giachetti, Xusi Han, Karim Hammoudi, Daisuke Kihara, Tuan M. Lai, Haiguang Liu, Mahmoud Melkemi, Stelios K. Mylonas, Genki Terashi, Yufan Wang, Feryal Windal, and Matthieu Montes
Extended 2D Scene Sketch-Based 3D Scene Retrieval
Juefei Yuan, Hameed Abdul-Rashid, Bo Li, Yijuan Lu, Tobias Schreck, Ngoc-Minh Bui, Trong-Le Do, Khac-Tuan Nguyen, Thanh-An Nguyen, Vinh-Tiep Nguyen, Minh-Triet Tran, and Tianyang Wang
Extended 2D Scene Image-Based 3D Scene Retrieval
Hameed Abdul-Rashid, Juefei Yuan, Bo Li, Yijuan Lu, Tobias Schreck, Ngoc-Minh Bui, Trong-Le Do, Mike Holenderski, Dmitri Jarnikov, Khiem T. Le, Vlado Menkovski, Khac-Tuan Nguyen, Thanh-An Nguyen, Vinh-Tiep Nguyen, Tu V. Ninh, Perez Rey, Minh-Triet Tran, and Tianyang Wang
Classification in Cryo-Electron Tomograms
Ilja Gubins, Gijs van der Schot, Remco C. Veltkamp, Friedrich Förster, Xuefeng Du, Xiangrui Zeng, Zhenxi Zhu, Lufan Chang, Min Xu, Emmanuel Moebel, Antonio Martinez-Sanchez, Charles Kervrann, Tuan M. Lai, Xusi Han, Genki Terashi, Daisuke Kihara, Benjamin A. Himes, Xiaohua Wan, Jingrong Zhang, Shan Gao, Yu Hao, Zhilong Lv, Xiaohua Wan, Zhidong Yang, Zijun Ding, Xuefeng Cui, and Fa Zhang
Paper Session 2
Depth-Based Face Recognition by Learning from 3D-LBP Images
Joao Baptista Cardia Neto, Aparecido Nilceu Marana, Claudio Ferrari, Stefano Berretti, and Alberto Del Bimbo
CMH: Coordinates Manifold Harmonics for Functional Remeshing
Riccardo Marin, Simone Melzi, Pietro Musoni, Filippo Bardon, Marco Tarini, and Umberto Castellani
Generalizing Discrete Convolutions for Unstructured Point Clouds
Alexandre Boulch
A 3D CAD Assembly Benchmark
Katia Lupinetti, Franca Giannini, Marina Monti, and Jean-Philippe Pernot
SHREC Session 2
Feature Curve Extraction on Triangle Meshes
E. Moscoso Thompson, G. Arvanitis, K. Moustakas, N. Hoang-Xuan, E. R. Nguyen, M. Tran, T. Lejemble, L. Barthe, N. Mellado, C. Romanengo, S. Biasotti, and B. Falcidieno
Online Gesture Recognition
F. M. Caputo, S. Burato, G. Pavan, T. Voillemin, H. Wannous, J. P. Vandeborre, M. Maghoumi, E. M. Taranta II, A. Razmjoo, J. J. LaViola Jr., F. Manganaro, S. Pini, G. Borghi, R. Vezzani, R. Cucchiara, H. Nguyen, M. T. Tran, and A. Giachetti
Monocular Image Based 3D Model Retrieval
Wenhui Li, Anan Liu, Weizhi Nie, Dan Song, Yuqian Li, Weijie Wang, Shu Xiang, Heyu Zhou, Ngoc-Minh Bui, Yunchi Cen, Zenian Chen, Huy-Hoang Chung-Nguyen, Gia-Han Diep, Trong-Le Do, Eugeni L. Doubrovski, Anh-Duc Duong, Jo M. P. Geraedts, Haobin Guo, Trung-Hieu Hoang, Yichen Li, Xing Liu, Zishun Liu, Duc-Tuan Luu, Yunsheng Ma, Vinh-Tiep Nguyen, Jie Nie, Tongwei Ren, Mai-Khiem Tran, Son-Thanh Tran-Nguyen, Minh-Triet Tran, The-Anh Vu-Le, Charlie C. L. Wang, Shijie Wang, Gangshan Wu, Caifei Yang, Meng Yuan, Hao Zhai, Ao Zhang, Fan Zhang, and Sicheng Zhao
Shape Correspondence with Isometric and Non-Isometric Deformations
R. M. Dyke, C. Stride, Y.-K. Lai, P. L. Rosin, M. Aubry, A. Boyarski, A. M. Bronstein, M. M. Bronstein, D. Cremers, M. Fisher, T. Groueix, D. Guo, V. G. Kim, R. Kimmel, Z. Lähner, K. Li, O. Litany, T. Remez, E. Rodolà, B. C. Russell, Y. Sahillioglu, R. Slossberg, G. K. L. Tam, M. Vestner, Z. Wu, and J. Yang
Matching Humans with Different Connectivity
S. Melzi, R. Marin, E. Rodolà, U. Castellani, J. Ren, A. Poulenard, P. Wonka, and M. Ovsjanikov

BibTeX (3DOR 19)
@inproceedings{10.2312:3dor.20191055,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor = {Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco},
  title = {{POP: Full Parametric model Estimation for Occluded People}},
  author = {Marin, Riccardo and Melzi, Simone and Mitra, Niloy J. and Castellani, Umberto},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1997-0471},
  ISBN = {978-3-03868-077-2},
  DOI = {10.2312/3dor.20191055}
}
@inproceedings{10.2312:3dor.20191056,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor = {Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco},
  title = {{mpLBP: An Extension of the Local Binary Pattern to Surfaces based on an Efficient Coding of the Point Neighbours}},
  author = {Moscoso Thompson, Elia and Biasotti, Silvia and Digne, Julie and Chaine, Raphaëlle},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1997-0471},
  ISBN = {978-3-03868-077-2},
  DOI = {10.2312/3dor.20191056}
}
@inproceedings{10.2312:3dor.20191057,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor = {Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco},
  title = {{Sketch-Aided Retrieval of Incomplete 3D Cultural Heritage Objects}},
  author = {Lengauer, Stefan and Komar, Alexander and Labrada, Arniel and Karl, Stephan and Trinkl, Elisabeth and Preiner, Reinhold and Bustos, Benjamin and Schreck, Tobias},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1997-0471},
  ISBN = {978-3-03868-077-2},
  DOI = {10.2312/3dor.20191057}
}
@inproceedings{10.2312:3dor.20191058,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor = {Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco},
  title = {{Protein Shape Retrieval Contest}},
  author = {Langenfeld, Florent and Axenopoulos, Apostolos and Melkemi, Mahmoud and Mylonas, Stelios K. and Terashi, Genki and Wang, Yufan and Windal, Feryal and Montes, Matthieu and Benhabiles, Halim and Daras, Petros and Giachetti, Andrea and Han, Xusi and Hammoudi, Karim and Kihara, Daisuke and Lai, Tuan M. and Liu, Haiguang},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1997-0471},
  ISBN = {978-3-03868-077-2},
  DOI = {10.2312/3dor.20191058}
}
@inproceedings{10.2312:3dor.20191059,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor = {Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco},
  title = {{Extended 2D Scene Sketch-Based 3D Scene Retrieval}},
  author = {Yuan, Juefei and Abdul-Rashid, Hameed and Tran, Minh-Triet and Wang, Tianyang and Li, Bo and Lu, Yijuan and Schreck, Tobias and Bui, Ngoc-Minh and Do, Trong-Le and Nguyen, Khac-Tuan and Nguyen, Thanh-An and Nguyen, Vinh-Tiep},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1997-0471},
  ISBN = {978-3-03868-077-2},
  DOI = {10.2312/3dor.20191059}
}
@inproceedings{10.2312:3dor.20191060,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor = {Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco},
  title = {{Extended 2D Scene Image-Based 3D Scene Retrieval}},
  author = {Abdul-Rashid, Hameed and Yuan, Juefei and Menkovski, Vlado and Nguyen, Khac-Tuan and Nguyen, Thanh-An and Nguyen, Vinh-Tiep and Ninh, Tu V. and Rey, Perez and Tran, Minh-Triet and Wang, Tianyang and Li, Bo and Lu, Yijuan and Schreck, Tobias and Bui, Ngoc-Minh and Do, Trong-Le and Holenderski, Mike and Jarnikov, Dmitri and Le, Khiem T.},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1997-0471},
  ISBN = {978-3-03868-077-2},
  DOI = {10.2312/3dor.20191060}
}
@inproceedings{10.2312:3dor.20191061,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor = {Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco},
  title = {{Classification in Cryo-Electron Tomograms}},
  author = {Gubins, Ilja and Schot, Gijs van der and Martinez-Sanchez, Antonio and Kervrann, Charles and Lai, Tuan M. and Han, Xusi and Terashi, Genki and Kihara, Daisuke and Himes, Benjamin A. and Wan, Xiaohua and Zhang, Jingrong and Gao, Shan and Veltkamp, Remco C. and Hao, Yu and Lv, Zhilong and Wan, Xiaohua and Yang, Zhidong and Ding, Zijun and Cui, Xuefeng and Zhang, Fa and Förster, Friedrich and Du, Xuefeng and Zeng, Xiangrui and Zhu, Zhenxi and Chang, Lufan and Xu, Min and Moebel, Emmanuel},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1997-0471},
  ISBN = {978-3-03868-077-2},
  DOI = {10.2312/3dor.20191061}
}
@inproceedings{10.2312:3dor.20191063,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor = {Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco},
  title = {{CMH: Coordinates Manifold Harmonics for Functional Remeshing}},
  author = {Marin, Riccardo and Melzi, Simone and Musoni, Pietro and Bardon, Filippo and Tarini, Marco and Castellani, Umberto},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1997-0471},
  ISBN = {978-3-03868-077-2},
  DOI = {10.2312/3dor.20191063}
}
@inproceedings{10.2312:3dor.20191062,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor = {Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco},
  title = {{Depth-Based Face Recognition by Learning from 3D-LBP Images}},
  author = {Neto, Joao Baptista Cardia and Marana, Aparecido Nilceu and Ferrari, Claudio and Berretti, Stefano and Bimbo, Alberto Del},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1997-0471},
  ISBN = {978-3-03868-077-2},
  DOI = {10.2312/3dor.20191062}
}
@inproceedings{10.2312:3dor.20191065,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor = {Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco},
  title = {{A 3D CAD Assembly Benchmark}},
  author = {Lupinetti, Katia and Giannini, Franca and Monti, Marina and Pernot, Jean-Philippe},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1997-0471},
  ISBN = {978-3-03868-077-2},
  DOI = {10.2312/3dor.20191065}
}
@inproceedings{10.2312:3dor.20191064,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor = {Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco},
  title = {{Generalizing Discrete Convolutions for Unstructured Point Clouds}},
  author = {Boulch, Alexandre},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1997-0471},
  ISBN = {978-3-03868-077-2},
  DOI = {10.2312/3dor.20191064}
}
@inproceedings{10.2312:3dor.20191067,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor = {Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco},
  title = {{Online Gesture Recognition}},
  author = {Caputo, F. M. and Burato, S. and Manganaro, F. and Pini, S. and Borghi, G. and Vezzani, R. and Cucchiara, R. and Nguyen, H. and Tran, M. T. and Giachetti, A. and Pavan, G. and Voillemin, T. and Wannous, H. and Vandeborre, J. P. and Maghoumi, M. and Taranta II, E. M. and Razmjoo, A. and LaViola Jr., J. J.},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1997-0471},
  ISBN = {978-3-03868-077-2},
  DOI = {10.2312/3dor.20191067}
}
@inproceedings{10.2312:3dor.20191066,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor = {Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco},
  title = {{Feature Curve Extraction on Triangle Meshes}},
  author = {Moscoso Thompson, Elia and Arvanitis, G. and Biasotti, S. and Falcidieno, Bianca and Moustakas, Konstantinos and Hoang-Xuan, N. and Nguyen, E. R. and Tran, M. and Lejemble, T. and Barthe, L. and Mellado, N. and Romanengo, C.},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1997-0471},
  ISBN = {978-3-03868-077-2},
  DOI = {10.2312/3dor.20191066}
}
@inproceedings{10.2312:3dor.20191068,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor = {Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco},
  title = {{Monocular Image Based 3D Model Retrieval}},
  author = {Li, Wenhui and Liu, Anan and Chen, Zenian and Chung-Nguyen, Huy-Hoang and Diep, Gia-Han and Do, Trong-Le and Doubrovski, Eugeni L. and Duong, Anh-Duc and Geraedts, Jo M. P. and Guo, Haobin and Hoang, Trung-Hieu and Li, Yichen and Nie, Weizhi and Liu, Xing and Liu, Zishun and Luu, Duc-Tuan and Ma, Yunsheng and Nguyen, Vinh-Tiep and Nie, Jie and Ren, Tongwei and Tran, Mai-Khiem and Tran-Nguyen, Son-Thanh and Tran, Minh-Triet and Song, Dan and Vu-Le, The-Anh and Wang, Charlie C. L. and Wang, Shijie and Wu, Gangshan and Yang, Caifei and Yuan, Meng and Zhai, Hao and Zhang, Ao and Zhang, Fan and Zhao, Sicheng and Li, Yuqian and Wang, Weijie and Xiang, Shu and Zhou, Heyu and Bui, Ngoc-Minh and Cen, Yunchi},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1997-0471},
  ISBN = {978-3-03868-077-2},
  DOI = {10.2312/3dor.20191068}
}
@inproceedings{10.2312:3dor.20191069,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor = {Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco},
  title = {{Shape Correspondence with Isometric and Non-Isometric Deformations}},
  author = {Dyke, R. M. and Stride, C. and Groueix, T. and Guo, D. and Kim, V. G. and Kimmel, R. and Lähner, Z. and Li, K. and Litany, O. and Remez, T. and Rodolà, E. and Russell, B. C. and Lai, Y.-K. and Sahillioglu, Y. and Slossberg, R. and Tam, G. K. L. and Vestner, M. and Wu, Z. and Yang, J. and Rosin, P. L. and Aubry, M. and Boyarski, A. and Bronstein, A. M. and Bronstein, M. M. and Cremers, D. and Fisher, M.},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1997-0471},
  ISBN = {978-3-03868-077-2},
  DOI = {10.2312/3dor.20191069}
}
@inproceedings{10.2312:3dor.20191070,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor = {Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco},
  title = {{Matching Humans with Different Connectivity}},
  author = {Melzi, S. and Marin, R. and Rodolà, E. and Castellani, U. and Ren, J. and Poulenard, A. and Wonka, P. and Ovsjanikov, M.},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1997-0471},
  ISBN = {978-3-03868-077-2},
  DOI = {10.2312/3dor.20191070}
}

Recent Submissions
  • 3DOR 2019: Frontmatter
    (Eurographics Association, 2019) Biasotti, Silvia; Lavoué, Guillaume; Veltkamp, Remco
  • POP: Full Parametric model Estimation for Occluded People
    (The Eurographics Association, 2019) Marin, Riccardo; Melzi, Simone; Mitra, Niloy J.; Castellani, Umberto; Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco
    In recent decades, we have witnessed advances in both hardware and associated algorithms resulting in unprecedented access to volumes of 2D and, more recently, 3D data capturing human movement. We are no longer satisfied with recovering human pose as an image-space 2D skeleton, but seek to obtain a full 3D human body representation. The main challenges in acquiring 3D human shape from such raw measurements are identifying which parts of the data relate to body measurements and recovering from partial observations, often arising out of severe occlusion: for example, a person occluded by a piece of furniture, or self-occluded in a profile view. In this paper, we propose POP, a novel and efficient paradigm for estimation and completion of human shape to produce a full parametric 3D model directly from single RGBD images, even under severe occlusion. At the heart of our method is a novel human body pose retrieval formulation that explicitly models and handles occlusion. The retrieved result is then refined by a robust optimization to yield a full representation of the human shape. We demonstrate our method on a range of challenging real-world scenarios and produce high-quality results not possible with competing alternatives. The method opens up exciting AR/VR application possibilities by working on 'in-the-wild' measurements of human motion.
  • mpLBP: An Extension of the Local Binary Pattern to Surfaces based on an Efficient Coding of the Point Neighbours
    (The Eurographics Association, 2019) Moscoso Thompson, Elia; Biasotti, Silvia; Digne, Julie; Chaine, Raphaëlle; Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco
    The description of surface textures in terms of repeated colorimetric and geometric local surface variations is a crucial task for several applications, such as object interpretation or style identification. Recently, methods based on extensions to surface meshes of the Local Binary Pattern (LBP) or the Scale-Invariant Feature Transform (SIFT) descriptors have been proposed for geometric and colorimetric pattern retrieval and classification. With respect to previous works, we consider a novel LBP-based descriptor built on the assignment of the point neighbours to sectors of equal area and on a non-uniform, multiple-ring sampling. Our method is able to deal with surfaces represented as point clouds. Experiments on different benchmarks confirm the competitiveness of the method within the existing literature, in terms of accuracy and computational complexity.
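
    As a rough illustration of the sector-and-ring idea described above (comparing values of a point's neighbours, grouped into angular sectors over a few radial rings, against the value at the centre point), a toy numpy sketch could look like the following. The grouping scheme, names and parameters here are illustrative choices, not the mpLBP reference implementation.

      # Illustrative sketch only: an LBP-style code over a point's neighbours,
      # grouped by angular sector and radial ring (the grouping and parameters
      # are our own toy choices, not the mpLBP descriptor itself).
      import numpy as np

      def sector_ring_lbp(center_value, neigh_xy, neigh_values,
                          n_sectors=8, ring_edges=(0.0, 0.5, 1.0)):
          """Return one binary code per ring by comparing per-sector mean
          values of the neighbours against the value at the centre point."""
          angles = np.arctan2(neigh_xy[:, 1], neigh_xy[:, 0])
          radii = np.linalg.norm(neigh_xy, axis=1)
          sector = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
          codes = []
          for r0, r1 in zip(ring_edges[:-1], ring_edges[1:]):
              code = 0
              in_ring = (radii >= r0) & (radii < r1)
              for s in range(n_sectors):
                  cell = neigh_values[in_ring & (sector == s)]
                  bit = 1 if cell.size and cell.mean() > center_value else 0
                  code |= bit << s
              codes.append(code)
          return codes

      # toy usage: 200 random neighbours in a local tangent frame
      rng = np.random.default_rng(0)
      print(sector_ring_lbp(0.5, rng.uniform(-1, 1, (200, 2)), rng.uniform(size=200)))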
  • Sketch-Aided Retrieval of Incomplete 3D Cultural Heritage Objects
    (The Eurographics Association, 2019) Lengauer, Stefan; Komar, Alexander; Labrada, Arniel; Karl, Stephan; Trinkl, Elisabeth; Preiner, Reinhold; Bustos, Benjamin; Schreck, Tobias; Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco
    Due to advances in digitization technology, documentation efforts and digital library systems, increasingly large collections of visual Cultural Heritage (CH) object data become available, offering rich opportunities for domain analysis, e.g., for comparing, tracing and studying objects created over time. In principle, existing shape- and image-based similarity search methods can aid such domain analysis tasks. However, in practice, visual object data are given in different modalities, including 2D, 3D, sketches or conventional drawings like profile sections or unwrappings. In addition, collections may be distributed across different publications and repositories, posing a challenge for implementing encompassing search and analysis systems. We introduce a methodology and system for cross-modal visual search in CH object data. Specifically, we propose a new query modality based on 3D views enhanced by user sketches (3D+sketch). This allows for adding new context to the search, which is useful, e.g., for searching based on incomplete query objects, or for testing hypotheses on the existence of certain shapes in a collection. We present an appropriately designed workflow for constructing query views from incomplete 3D objects enhanced by a user sketch based on shape completion and texture inpainting. Visual cues additionally help users compare retrieved objects with the query. We apply our method on a set of relevant 3D and view-based CH object data, demonstrating the feasibility of our approach and its potential to support analysis of domain experts in Archaeology and the field of CH in general.
  • Protein Shape Retrieval Contest
    (The Eurographics Association, 2019) Langenfeld, Florent; Axenopoulos, Apostolos; Benhabiles, Halim; Daras, Petros; Giachetti, Andrea; Han, Xusi; Hammoudi, Karim; Kihara, Daisuke; Lai, Tuan M.; Liu, Haiguang; Melkemi, Mahmoud; Mylonas, Stelios K.; Terashi, Genki; Wang, Yufan; Windal, Feryal; Montes, Matthieu; Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco
    This track aimed at retrieving the evolutionary classification of proteins based on their surface meshes only. Given that proteins are dynamic, non-rigid objects and that evolution tends to conserve patterns related to their activity and function, this track poses a challenging problem on biologically relevant molecules. We evaluated the performance of 5 different algorithms and analyzed their ability, over a dataset of 5,298 objects, to retrieve various conformations of identical proteins and various conformations of ortholog proteins (proteins from different organisms showing the same activity). All methods were able to retrieve a member of the same class as the query in at least 94% of the cases when considering the first match, but showed more divergent results when more matches were considered. Lastly, similarity metrics trained on databases dedicated to proteins improved the results.
  • Extended 2D Scene Sketch-Based 3D Scene Retrieval
    (The Eurographics Association, 2019) Yuan, Juefei; Abdul-Rashid, Hameed; Li, Bo; Lu, Yijuan; Schreck, Tobias; Bui, Ngoc-Minh; Do, Trong-Le; Nguyen, Khac-Tuan; Nguyen, Thanh-An; Nguyen, Vinh-Tiep; Tran, Minh-Triet; Wang, Tianyang; Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco
    Sketch-based 3D scene retrieval is the task of retrieving 3D scene models given a user's hand-drawn 2D scene sketch. It is a brand new but also very challenging research topic in the field of 3D object retrieval due to the semantic gap in their representations: 3D scene models or views differ from non-realistic 2D scene sketches. To boost this interesting research, we organized a 2D Scene Sketch-Based 3D Scene Retrieval track in SHREC'18, resulting in the SceneSBR2018 benchmark, which contains 10 scene classes. In order to make it more comprehensive, we have extended the number of scene categories from the initial 10 classes in the SceneSBR2018 benchmark to 30 classes, resulting in a new and more challenging benchmark, SceneSBR2019, which has 750 2D scene sketches and 3,000 3D scene models. Therefore, the objective of this track is to further evaluate the performance and scalability of different 2D scene sketch-based 3D scene model retrieval algorithms using this extended and more comprehensive new benchmark. In this track, two groups from the USA and Vietnam successfully submitted 4 runs. Based on 7 commonly used retrieval metrics, we evaluate their retrieval performance. We have also conducted a comprehensive analysis and discussion of these methods and proposed several future research directions to deal with this challenging research topic. Deep learning techniques have again proved their great potential in dealing with this challenging retrieval task, in terms of both retrieval accuracy and scalability to a larger dataset. We hope this publicly available benchmark, together with its evaluation results and source code, will further enrich and promote the 2D scene sketch-based 3D scene retrieval research area and its corresponding applications.
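
    The seven retrieval metrics mentioned above are standard in SHREC evaluations; as a reminder of how the two simplest of them, nearest-neighbour accuracy (NN) and first tier (FT), are computed from a query-to-target distance matrix, here is a small self-contained sketch (toy data and names are ours, not the track's evaluation toolkit).

      # Illustrative sketch: NN and FT computed from a distance matrix.
      import numpy as np

      def nn_and_ft(dist, query_labels, target_labels):
          """dist[i, j] = distance from query i to target j."""
          order = np.argsort(dist, axis=1)              # targets ranked per query
          ranked = target_labels[order]
          nn = (ranked[:, 0] == query_labels).mean()    # top-1 match rate
          ft = []
          for i, lab in enumerate(query_labels):
              k = int((target_labels == lab).sum())     # class size in the target set
              ft.append((ranked[i, :k] == lab).mean())
          return float(nn), float(np.mean(ft))

      # toy data: 15 queries and 30 targets over 3 classes; same class = closer
      rng = np.random.default_rng(1)
      q_lab = np.repeat(np.arange(3), 5)
      t_lab = np.repeat(np.arange(3), 10)
      dist = rng.random((15, 30)) + (q_lab[:, None] != t_lab[None, :])
      print(nn_and_ft(dist, q_lab, t_lab))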
  • Extended 2D Scene Image-Based 3D Scene Retrieval
    (The Eurographics Association, 2019) Abdul-Rashid, Hameed; Yuan, Juefei; Li, Bo; Lu, Yijuan; Schreck, Tobias; Bui, Ngoc-Minh; Do, Trong-Le; Holenderski, Mike; Jarnikov, Dmitri; Le, Khiem T.; Menkovski, Vlado; Nguyen, Khac-Tuan; Nguyen, Thanh-An; Nguyen, Vinh-Tiep; Ninh, Tu V.; Rey, Perez; Tran, Minh-Triet; Wang, Tianyang; Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco
    In the months following our SHREC 2018 - 2D Scene Image-Based 3D Scene Retrieval (SceneIBR2018) track, we have extended the number of scene categories from the initial 10 classes in the SceneIBR2018 benchmark to 30 classes, resulting in a new benchmark, SceneIBR2019, which has 30,000 scene images and 3,000 3D scene models. For that reason, we seek to further evaluate the performance of existing and new 2D scene image-based 3D scene retrieval algorithms using this extended and more comprehensive new benchmark. Three groups from the Netherlands, the United States and Vietnam participated and collectively submitted eight runs. This report documents the evaluation of each method based on seven performance metrics, offers an in-depth discussion as well as analysis of the methods employed and discusses future directions that have the potential to address this task. Again, deep learning techniques have demonstrated notable performance in terms of both accuracy and scalability when applied to this exigent retrieval task. To further enrich the current state of 3D scene understanding and retrieval, our evaluation toolkit, all participating methods' results and the comprehensive 2D/3D benchmark have all been made publicly available.
  • Classification in Cryo-Electron Tomograms
    (The Eurographics Association, 2019) Gubins, Ilja; Schot, Gijs van der; Veltkamp, Remco C.; Förster, Friedrich; Du, Xuefeng; Zeng, Xiangrui; Zhu, Zhenxi; Chang, Lufan; Xu, Min; Moebel, Emmanuel; Martinez-Sanchez, Antonio; Kervrann, Charles; Lai, Tuan M.; Han, Xusi; Terashi, Genki; Kihara, Daisuke; Himes, Benjamin A.; Wan, Xiaohua; Zhang, Jingrong; Gao, Shan; Hao, Yu; Lv, Zhilong; Wan, Xiaohua; Yang, Zhidong; Ding, Zijun; Cui, Xuefeng; Zhang, Fa; Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco
    Different imaging techniques allow us to study the organization of life at different scales. Cryo-electron tomography (cryo-ET) has the ability to three-dimensionally visualize the cellular architecture as well as the structural details of macro-molecular assemblies under near-native conditions. Due to the beam sensitivity of biological samples, an individual tomogram has a maximal resolution of 5 nanometers. By averaging volumes, each depicting copies of the same type of molecule, resolutions beyond 4 Å have been achieved. Key in this process is the ability to localize and classify the components of interest, which is challenging due to the low signal-to-noise ratio. Innovation in computational methods remains key to mine biological information from the tomograms. To promote such innovation, we organize this SHREC track and provide a simulated dataset with the goal of establishing a benchmark in localization and classification of biological particles in cryo-electron tomograms. The publicly available dataset contains ten reconstructed tomograms obtained from a simulated cell-like volume. Each volume contains twelve different types of proteins, varying in size and structure. Participants had access to 9 out of 10 of the cell-like ground-truth volumes for learning-based methods, and had to predict protein class and location in the test tomogram. Five groups submitted eight sets of results, using seven different methods. While our sample size gives only an anecdotal overview of current approaches in cryo-ET classification, we believe it shows trends and highlights interesting future work areas. The results show that learning-based approaches are the current trend in cryo-ET classification research and that, specifically, end-to-end 3D learning-based approaches achieve the best performance.
  • CMH: Coordinates Manifold Harmonics for Functional Remeshing
    (The Eurographics Association, 2019) Marin, Riccardo; Melzi, Simone; Musoni, Pietro; Bardon, Filippo; Tarini, Marco; Castellani, Umberto; Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco
    In digital world reconstruction, the 2-dimensional surfaces of real objects are often obtained as polygonal meshes after an acquisition procedure using 3D sensors. However, such a representation requires considerable manual effort from highly skilled experts to correct the irregularity of the tessellation and make it suitable for professional applications, such as those in the gaming or movie industry. Moreover, for modelling and animation purposes it is often required that the same connectivity is shared among two or more different shapes. In this paper we propose a new method that exploits a remeshing-by-matching approach where the observed noisy shape inherits a regular tessellation from a target shape which already satisfies the professional constraints. A fully automatic pipeline is introduced, based on a variation of the functional mapping framework. In particular, a new set of basis functions, namely the Coordinates Manifold Harmonics (CMH), is properly designed for this tessellation transfer task. In our experiments an exhaustive quantitative and qualitative evaluation is reported for human body shapes in T-pose, where the effectiveness of the proposed functional remeshing is clearly shown in comparison with other methods.
  • Depth-Based Face Recognition by Learning from 3D-LBP Images
    (The Eurographics Association, 2019) Neto, Joao Baptista Cardia; Marana, Aparecido Nilceu; Ferrari, Claudio; Berretti, Stefano; Bimbo, Alberto Del; Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco
    In this paper, we propose a hybrid framework for face recognition from depth images, which is both effective and efficient. It consists of two main stages: First, the 3DLBP operator is applied to the raw depth data of the face, and used to build the corresponding descriptor images (DIs). However, such an operator quantizes relative depth differences over/under ±7 to the same bin, so as to generate a fixed-dimensional descriptor. To account for this behavior, we also propose a modification of the traditional operator that encodes depth differences using a sigmoid function. Then, a not-so-deep (shallow) convolutional neural network (SCNN) has been designed that learns from the DIs. This architecture showed two main advantages over the direct application of deep CNNs (DCNNs) to depth images of the face: On the one hand, the DIs are capable of enriching the raw depth data, emphasizing relevant traits of the face, while reducing their acquisition noise. This proved decisive in improving the learning capability of the network; On the other hand, the DIs capture low-level features of the face, thus playing for the SCNN the role that the first layers play in a DCNN architecture. In this way, the SCNN we have designed has far fewer layers and can be trained more easily and faster. Extensive experiments on low- and high-resolution depth face datasets confirmed the above advantages, showing results that are comparable or superior to the state-of-the-art, using far less training data, time, and network memory.
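
    A minimal sketch of the two encodings contrasted above, hard clipping of a relative depth difference at ±7 versus a smooth sigmoid mapping, written as our own toy illustration rather than the paper's code:

      # Illustrative sketch: two encodings of a relative depth difference d.
      import numpy as np

      def clipped_code(d, n_bits=3):
          """3DLBP-style coding: a sign bit plus a 3-bit magnitude, so all
          differences beyond +-7 fall into the same bin."""
          mag = min(abs(int(round(d))), 2 ** n_bits - 1)
          return (1 if d >= 0 else 0), mag

      def sigmoid_code(d, scale=2.0):
          """Sigmoid alternative: map the difference smoothly into (0, 1),
          avoiding the hard saturation above (the scale is a toy choice)."""
          return 1.0 / (1.0 + np.exp(-d / scale))

      for d in (-20, -7, -1, 0, 1, 7, 20):
          print(d, clipped_code(d), round(sigmoid_code(d), 3))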
  • A 3D CAD Assembly Benchmark
    (The Eurographics Association, 2019) Lupinetti, Katia; Giannini, Franca; Monti, Marina; Pernot, Jean-Philippe; Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco
    Evaluating the effectiveness of systems for the retrieval of 3D assembly models is not trivial. CAD assembly models can be considered similar according to different criteria and at different levels (i.e. globally or partially). Indeed, besides the shape criterion, CAD assembly models have further characteristic elements, such as the mutual position of parts, or the type of connecting joint. Thus, when retrieving 3D models, these characteristics can match in the entire model (globally) or just in local subparts (partially). The available 3D model repositories do not include complex CAD assembly models and, generally, they are suitable for evaluating one characteristic at a time, neglecting important properties in the evaluation of assembly similarity. In this paper, we present a benchmark for the evaluation of content-based retrieval systems for 3D assembly models. A crucial feature of this benchmark regards its ability to consider the various aspects characterizing the models of mechanical assemblies.
  • Generalizing Discrete Convolutions for Unstructured Point Clouds
    (The Eurographics Association, 2019) Boulch, Alexandre; Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco
    Point clouds are unstructured and unordered data, as opposed to images. Thus, most machine learning approaches developed for images cannot be directly transferred to point clouds. Doing so usually requires a data transformation such as voxelization, inducing a possible loss of information. In this paper, we propose a generalization of discrete convolutional neural networks (CNNs) able to deal with sparse input point clouds. We replace the discrete kernels by continuous ones. The formulation is simple, does not fix the input point cloud size, and can easily be used for neural network design similarly to 2D CNNs. We present experimental results, competitive with the state of the art, on shape classification, part segmentation and semantic segmentation for large-scale point clouds.
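
    As a rough sketch of what replacing a discrete kernel with a continuous one can look like on an unordered point neighbourhood, the toy example below correlates a set of kernel points with the input points through a Gaussian of their distance; the Gaussian weighting, the normalization and all sizes are illustrative assumptions, not the paper's exact formulation.

      # Illustrative sketch: convolution over an unordered neighbourhood using
      # continuously placed kernel points (toy choices, not the paper's method).
      import numpy as np

      def continuous_point_conv(points, features, kernel_pts, kernel_w, sigma=0.3):
          """points: (N, 3) neighbour coordinates centred on the output point
          features: (N, C_in); kernel_pts: (K, 3); kernel_w: (K, C_in, C_out)
          Returns one (C_out,) output feature vector."""
          d2 = ((points[:, None, :] - kernel_pts[None, :, :]) ** 2).sum(-1)  # (N, K)
          corr = np.exp(-d2 / (2 * sigma ** 2))                              # soft assignment
          agg = corr.T @ features                                            # (K, C_in)
          return np.einsum('kc,kcd->d', agg, kernel_w) / len(points)         # (C_out,)

      rng = np.random.default_rng(2)
      out = continuous_point_conv(rng.normal(scale=0.5, size=(64, 3)),   # 64 neighbours
                                  rng.normal(size=(64, 8)),              # C_in = 8
                                  rng.normal(scale=0.5, size=(16, 3)),   # 16 kernel points
                                  rng.normal(size=(16, 8, 32)))          # C_out = 32
      print(out.shape)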
  • Online Gesture Recognition
    (The Eurographics Association, 2019) Caputo, F. M.; Burato, S.; Pavan, G.; Voillemin, T.; Wannous, H.; Vandeborre, J. P.; Maghoumi, M.; Taranta II, E. M.; Razmjoo, A.; LaViola Jr., J. J.; Manganaro, F.; Pini, S.; Borghi, G.; Vezzani, R.; Cucchiara, R.; Nguyen, H.; Tran, M. T.; Giachetti, A.; Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco
    This paper presents the results of the Eurographics 2019 SHape Retrieval Contest track on online gesture recognition. The goal of this contest was to test state-of-the-art methods for the online detection of command gestures from tracked hand movements, on a basic benchmark where simple gestures are performed interleaved with other actions. Unlike previous contests and benchmarks on trajectory-based gesture recognition, we proposed an online gesture recognition task, not providing pre-segmented gestures, but asking the participants to find gestures within recorded trajectories. The results submitted by the participants show that online detection and recognition of sets of very simple gestures from 3D trajectories captured with a cheap sensor can be performed effectively. The best proposed methods could therefore be directly exploited to design effective gesture-based interfaces for different contexts, from Virtual and Mixed Reality applications to the remote control of home devices.
  • Feature Curve Extraction on Triangle Meshes
    (The Eurographics Association, 2019) Moscoso Thompson, Elia; Arvanitis, G.; Moustakas, Konstantinos; Hoang-Xuan, N.; Nguyen, E. R.; Tran, M.; Lejemble, T.; Barthe, L.; Mellado, N.; Romanengo, C.; Biasotti, S.; Falcidieno, Bianca; Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco
    This paper presents the results of the SHREC'19 track: Feature curve extraction on triangle meshes. Given a model, the challenge consists in automatically extracting a subset of the mesh vertices that jointly represent a feature curve. As an optional task, participants were also requested to submit a similarity evaluation among the extracted feature curves. The various approaches presented by the participants are discussed, together with their results. The proposed methods highlight different points of view on the problem of feature curve extraction. It is interesting to see that, despite the variety of approaches, it is possible to deal with this problem with good results.
  • Monocular Image Based 3D Model Retrieval
    (The Eurographics Association, 2019) Li, Wenhui; Liu, Anan; Nie, Weizhi; Song, Dan; Li, Yuqian; Wang, Weijie; Xiang, Shu; Zhou, Heyu; Bui, Ngoc-Minh; Cen, Yunchi; Chen, Zenian; Chung-Nguyen, Huy-Hoang; Diep, Gia-Han; Do, Trong-Le; Doubrovski, Eugeni L.; Duong, Anh-Duc; Geraedts, Jo M. P.; Guo, Haobin; Hoang, Trung-Hieu; Li, Yichen; Liu, Xing; Liu, Zishun; Luu, Duc-Tuan; Ma, Yunsheng; Nguyen, Vinh-Tiep; Nie, Jie; Ren, Tongwei; Tran, Mai-Khiem; Tran-Nguyen, Son-Thanh; Tran, Minh-Triet; Vu-Le, The-Anh; Wang, Charlie C. L.; Wang, Shijie; Wu, Gangshan; Yang, Caifei; Yuan, Meng; Zhai, Hao; Zhang, Ao; Zhang, Fan; Zhao, Sicheng; Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco
    Monocular image based 3D object retrieval is a novel and challenging research topic in the field of 3D object retrieval. Given an RGB image captured in the real world, it aims to search for relevant 3D objects in a dataset. To advance this promising research, we organize this SHREC track and build the first monocular image based 3D object retrieval benchmark by collecting 2D images from ImageNet and 3D objects from popular 3D datasets such as NTU, PSB, ModelNet40 and ShapeNet. The benchmark contains 21,000 classified 2D images and 7,690 3D objects from 21 categories. This track attracted 9 groups from 4 countries and the submission of 20 runs. For a comprehensive comparison, 7 commonly used retrieval performance metrics have been used to evaluate the submitted methods. The evaluation results show that supervised cross-domain learning achieves superior retrieval performance (best NN of 97.4%) by bridging the domain gap with label information. However, unsupervised cross-domain learning (best NN of 61.2%), which is more practical for real applications, remains a big challenge. Although we provided both view images and an OBJ file for each 3D model, all the participants used the view images to represent the 3D models. An interesting direction for future work is to directly use the 3D information together with the 2D RGB information to solve the task of monocular image based 3D model retrieval.
  • Shape Correspondence with Isometric and Non-Isometric Deformations
    (The Eurographics Association, 2019) Dyke, R. M.; Stride, C.; Lai, Y.-K.; Rosin, P. L.; Aubry, M.; Boyarski, A.; Bronstein, A. M.; Bronstein, M. M.; Cremers, D.; Fisher, M.; Groueix, T.; Guo, D.; Kim, V. G.; Kimmel, R.; Lähner, Z.; Li, K.; Litany, O.; Remez, T.; Rodolà, E.; Russell, B. C.; Sahillioglu, Y.; Slossberg, R.; Tam, G. K. L.; Vestner, M.; Wu, Z.; Yang, J.; Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco
    The registration of surfaces with non-rigid deformation, especially non-isometric deformations, is a challenging problem. When applying such techniques to real scans, the problem is compounded by topological and geometric inconsistencies between shapes. In this paper, we capture a benchmark dataset of scanned 3D shapes undergoing various controlled deformations (articulating, bending, stretching and topologically changing), along with ground truth correspondences. With the aid of this tiered benchmark of increasingly challenging real scans, we explore this problem and investigate how robustly current state-of-the-art methods perform in different challenging registration and correspondence scenarios. We discover that changes in topology are a challenging problem for some methods and that machine learning-based approaches prove to be more capable of handling non-isometric deformations on shapes that are moderately similar to the training set.
  • Matching Humans with Different Connectivity
    (The Eurographics Association, 2019) Melzi, S.; Marin, R.; Rodolà, E.; Castellani, U.; Ren, J.; Poulenard, A.; Wonka, P.; Ovsjanikov, M.; Biasotti, Silvia and Lavoué, Guillaume and Veltkamp, Remco
    Object matching is a ubiquitous problem in computer science with particular relevance for many applications; property transfer between 3D models and statistical studies for learning are just some notable examples. The research community has spent a lot of effort to address this problem, and a large and growing set of innovative methods has been proposed for its solution. In order to provide a fair comparison among these methods, different benchmarks have been proposed. However, all these benchmarks are domain specific, e.g., real scans coming from the same acquisition pipeline, or synthetic watertight meshes with the same triangulation. To the best of our knowledge, no cross-dataset comparisons have been proposed to date. This track provides the first matching evaluation in terms of large connectivity changes between models that come from totally different modeling methods. We provide a dataset of 44 shapes with dense correspondences obtained by a highly accurate shape registration method (FARM). Our evaluation shows that connectivity changes lead to matching difficulties, and we hope this will promote further research in matching shapes with wildly different connectivity.