Title: A Simple Improvement to PIP-Net for Medical Image Anomaly Detection
Authors: Kobayashi, Yuki; Yamaguchi, Yasushi
Editors: Caputo, Ariel; Garro, Valeria; Giachetti, Andrea; Castellani, Umberto; Dulecha, Tinsae Gebrechristos
Date issued: 2024-11-11
ISBN: 978-3-03868-265-3
ISSN: 2617-4855
DOI: https://doi.org/10.2312/stag.20241337
Handle: https://diglib.eg.org/handle/10.2312/stag20241337
Pages: 7
License: Attribution 4.0 International License

Abstract: The application of AI technology in domains requiring decision accountability, such as healthcare, has increased the demand for model interpretability. The part-prototype model is a well-established interpretable approach to image recognition, with PIP-Net demonstrating strong classification performance and high interpretability in multiclass classification tasks. However, PIP-Net assumes the presence of class-specific prototypes. This assumption does not hold for tasks like anomaly detection, where no local features are exclusive to the normal class. To address this, we propose an architecture that learns, for each prototype, only the score corresponding to the anomaly class. This approach rests on assumptions better suited to anomaly detection than PIP-Net's and enables concise inference with fewer prototypes. Evaluation on the MURA dataset, a large collection of bone X-rays, showed that the proposed architecture achieved better anomaly detection performance than the original PIP-Net while using fewer prototypes.
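The scoring layer described in the abstract can be sketched roughly as follows. This is a minimal illustration only, assuming a PyTorch-style PIP-Net backbone that outputs pooled prototype presence scores; the class name AnomalyScoreHead and the non-negative weight parameterization are assumptions made for this sketch, not the authors' released code.

```python
import torch
import torch.nn as nn

class AnomalyScoreHead(nn.Module):
    # Hypothetical head: instead of a (num_prototypes x num_classes) weight
    # matrix as in multiclass PIP-Net, learn a single non-negative weight per
    # prototype that scores the anomaly class only.
    def __init__(self, num_prototypes: int):
        super().__init__()
        self.weight = nn.Parameter(torch.rand(num_prototypes))

    def forward(self, proto_scores: torch.Tensor) -> torch.Tensor:
        # proto_scores: (batch, num_prototypes), pooled prototype presence
        # scores in [0, 1] produced by the backbone (assumed interface).
        w = torch.relu(self.weight)  # clamp weights to be non-negative
                                     # (assumption, mirroring PIP-Net's
                                     # sparse positive scoring)
        return proto_scores @ w      # (batch,) scalar anomaly score per image

# Example: score a batch of 4 images against 32 prototypes.
head = AnomalyScoreHead(num_prototypes=32)
scores = head(torch.rand(4, 32))
print(scores.shape)  # torch.Size([4])
```

Under this reading, an image with no activated prototypes receives an anomaly score of zero, so the normal class needs no prototypes of its own, matching the abstract's observation that no local features are exclusive to the normal class.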