Title: Cross-modal Content-based Retrieval for Digitized 2D and 3D Cultural Heritage Artifacts
Authors: Gregor, Robert; Mayrbrugger, Christof; Mavridis, Pavlos; Bustos, Benjamin; Schreck, Tobias
Editors: Tobias Schreck, Tim Weyrich, Robert Sablatnig, Benjamin Stular
Date issued: 2017-09-27
ISBN: 978-3-03868-037-6
ISSN: 2312-6124
DOI: https://doi.org/10.2312/gch.20171302
Handle: https://diglib.eg.org:443/handle/10.2312/gch20171302
Pages: 119-123

Abstract: Digitization of Cultural Heritage (CH) objects is indispensable for many tasks, including preservation, distribution, and analysis of CH content. While digitization of 3D shape and appearance is progressing rapidly, much more digitized content is available in the form of 2D images, photographs, or sketches. A key functionality for exploring CH content is the ability to search for objects of interest. Search in CH repositories often relies on the meta-data of available objects. Methods for content-based search within a single modality, e.g., using image or shape descriptors, have also been researched. To date, few works have addressed the problem of content-based cross-modal search across both 2D and 3D object spaces without requiring meta-data annotations of similar format and quality. We propose a cross-modal search approach relying on content-based similarity between 3D and 2D CH objects. Our approach converts a 3D query object into a 2D query image and then performs content-based search using visual descriptors. We describe our concept and show first results obtained on a pottery dataset. We also outline directions for future work.
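
The abstract describes a pipeline of rendering the 3D query to a 2D view and ranking repository images by visual-descriptor similarity. The sketch below illustrates that general idea only; it is not the authors' implementation. It assumes trimesh and pyrender for off-screen rendering and uses a HOG descriptor from scikit-image as a stand-in for whatever descriptor the paper employs; helper names such as `render_query_view` and `rank_repository` are hypothetical.

```python
# Minimal sketch of a cross-modal query pipeline (assumptions noted above):
# 1) render the 3D query object to a 2D view,
# 2) compute a visual descriptor of that view,
# 3) rank the repository's 2D images by descriptor distance.

import numpy as np
import trimesh
import pyrender
from skimage.color import rgb2gray
from skimage.transform import resize
from skimage.feature import hog

def render_query_view(mesh_path, size=256):
    """Render one view of the 3D query object to a grayscale image (hypothetical helper)."""
    mesh = trimesh.load(mesh_path, force='mesh')
    scene = pyrender.Scene()
    scene.add(pyrender.Mesh.from_trimesh(mesh))
    camera = pyrender.PerspectiveCamera(yfov=np.pi / 3.0)
    cam_pose = np.eye(4)
    cam_pose[2, 3] = 2.0 * mesh.scale          # move the camera back from the object
    scene.add(camera, pose=cam_pose)
    scene.add(pyrender.DirectionalLight(intensity=3.0), pose=cam_pose)
    color, _ = pyrender.OffscreenRenderer(size, size).render(scene)
    return rgb2gray(color)

def descriptor(gray_image, size=256):
    """Visual descriptor of a 2D image; HOG stands in for the paper's descriptor choice."""
    return hog(resize(gray_image, (size, size)), pixels_per_cell=(32, 32))

def rank_repository(query_desc, repo_descs):
    """Return repository indices sorted by Euclidean distance to the query descriptor."""
    dists = np.linalg.norm(repo_descs - query_desc, axis=1)
    return np.argsort(dists)

# Usage (paths and the image collection are placeholders):
# query = descriptor(render_query_view("query_pot.obj"))
# repo = np.stack([descriptor(rgb2gray(img)) for img in repository_images])
# ranking = rank_repository(query, repo)
```

In practice the rendering viewpoint, normalization, and descriptor choice would dominate retrieval quality; this sketch only fixes the shape of the 3D-to-2D query conversion described in the abstract.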