Title: Sharing Model Framework for Zero-Shot Sketch-Based Image Retrieval
Authors: Ho, Yi-Hsuan; Way, Der-Lor; Shih, Zen-Chung
Editors: Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
Date: 2023-10-09
Year: 2023
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.14947
URI: https://diglib.eg.org:443/handle/10.1111/cgf14947
Pages: 12

Abstract: Sketch-based image retrieval (SBIR) is an emerging task in computer vision. Research interest has arisen in solving this problem under the realistic and challenging setting of zero-shot learning. Given a sketch as a query, the goal is to retrieve the corresponding photographs in a zero-shot scenario. In this paper, we divide this challenging problem into three tasks and propose a sharing model framework that addresses all of them. First, the shared weights of the proposed model effectively reduce the modality gap between sketches and photographs. Second, semantic information is used to handle the different label spaces of the training and testing stages; the sketch and photograph domains share this semantic information. Finally, a memory mechanism is used to reduce the intrinsic variety among sketches, even those belonging to the same class, with sketches and photographs dominating the embeddings in turn. Because sketches are not limited by language, our ultimate goal is to find a method that can replace text-based search. We also designed a demonstration program to showcase the use of the proposed method in real-world applications. Our results indicate that the proposed method achieves considerably higher zero-shot SBIR performance than other state-of-the-art methods on the challenging Sketchy, TU-Berlin, and QuickDraw datasets.

CCS Concepts: Information systems -> Information retrieval; Computing methodologies -> Machine learning

Keywords: Information systems; Information retrieval; Computing methodologies; Machine learning
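To illustrate the core idea described in the abstract, the following is a minimal sketch, not the authors' implementation: a single encoder whose weights are shared between the sketch and photograph modalities (reducing the modality gap by construction), with both embeddings aligned to class-level semantic vectors so that unseen classes remain retrievable in the zero-shot setting. The PyTorch setup, the class and function names (SharedEncoder, alignment_loss), and the use of word vectors such as GloVe as semantics are all assumptions for illustration; the memory mechanism is omitted.

```python
# Hypothetical sketch of a shared-weight encoder with semantic alignment;
# not the paper's actual architecture or training code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEncoder(nn.Module):
    """One set of weights embeds both sketches and photographs, so the two
    modalities land in a common space by construction."""
    def __init__(self, backbone_dim=512, embed_dim=300):
        super().__init__()
        # Stand-in projection head; in practice features would come from a
        # pretrained CNN backbone of dimension backbone_dim (assumption).
        self.proj = nn.Sequential(
            nn.Linear(backbone_dim, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, embed_dim),  # map into the semantic space
        )

    def forward(self, feats):
        # feats: (batch, backbone_dim) features from either modality
        return F.normalize(self.proj(feats), dim=-1)

def alignment_loss(sketch_emb, photo_emb, class_semantics):
    """Pull both modalities toward the shared semantic vector of their class.
    class_semantics: (batch, embed_dim) word vector of each item's label
    (e.g., GloVe embeddings of class names; an assumption here)."""
    sem = F.normalize(class_semantics, dim=-1)
    # Cosine-distance terms for each modality against the class semantics.
    return (1 - (sketch_emb * sem).sum(-1)).mean() + \
           (1 - (photo_emb * sem).sum(-1)).mean()

# Usage: the same encoder embeds both modalities; at test time, retrieval
# ranks gallery photos by cosine similarity to the query sketch embedding.
enc = SharedEncoder()
sketch_feats = torch.randn(8, 512)   # dummy sketch backbone features
photo_feats = torch.randn(8, 512)    # dummy photo backbone features
semantics = torch.randn(8, 300)      # dummy class word vectors
loss = alignment_loss(enc(sketch_feats), enc(photo_feats), semantics)
```

Because the label spaces of training and testing are disjoint in the zero-shot setting, tying both modalities to semantic vectors (rather than to one-hot class labels) is what lets the shared embedding generalize to unseen classes.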