Title: CAN: Concept-aligned Neurons for Visual Comparison of Neural Networks
Authors: Li, Mingwei; Jeong, Sangwon; Liu, Shusen; Berger, Matthew
Editors: Aigner, Wolfgang; Archambault, Daniel; Bujack, Roxana
Date: 2024-05-21
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.15085
Handle: https://diglib.eg.org/handle/10.1111/cgf15085
Pages: 12

Abstract: We present concept-aligned neurons, or CAN, a visualization design for comparing deep neural networks. The goal of CAN is to support users in understanding the similarities and differences between neural networks, with an emphasis on comparing neuron functionality across different models. To make this comparison intuitive, CAN uses concept-based representations of neurons to visually align models in an interpretable manner. A key feature of CAN is the hierarchical organization of concepts, which permits users to relate sets of neurons at different levels of detail. CAN's visualization is designed to help compare the semantic coverage of neurons, as well as assess the distinctiveness, redundancy, and multi-semantic alignment of neurons or groups of neurons, all at different concept granularity. We demonstrate the generality and effectiveness of CAN by comparing models trained on different datasets, neural networks with different architectures, and models trained for different objectives, e.g., adversarial robustness and robustness to out-of-distribution data.

CCS Concepts: Human-centered computing → Visualization; Visual analytics