Title: Model-invariant Weight Distribution Descriptors for Visual Exploration of Neural Networks en Masse
Authors: Eilertsen, Gabriel; Jönsson, Daniel; Unger, Jonas; Ynnerman, Anders
Editors: Tominski, Christian; Waldner, Manuela; Wang, Bei
Date available: 2024-05-17
Year: 2024
ISBN: 978-3-03868-251-6
DOI: https://doi.org/10.2312/evs.20241068
Handle: https://diglib.eg.org/handle/10.2312/evs20241068
License: Attribution 4.0 International License
Pages: 5 pages

Abstract: We present a neural network representation that can be used for visually analyzing the similarities and differences in a large corpus of trained neural networks. The focus is on architecture-invariant comparisons based on network weights, estimating similarities of the statistical footprints encoded by the training setups and stochastic optimization procedures. To make this possible, we propose a novel visual descriptor of neural network weights. The visual descriptor considers local weight statistics in a model-agnostic manner by encoding the distribution of weights over different model depths. We show how such a representation can extract descriptive information, is robust to different parameterizations of a model, and is applicable to different architecture specifications. The descriptor is used to create a model atlas by projecting a model library to a 2D representation, where clusters can be found based on similar weight properties. A cluster analysis strategy makes it possible to understand the weight properties of clusters and how these connect to the different datasets and hyper-parameters used to train the models.
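
The abstract's core idea, a descriptor built from the distribution of weights over model depths and projected to 2D to form a model atlas, could be sketched as follows. This is a minimal illustration under assumptions: the depth binning, histogram range, and use of PCA are choices made here for the sketch, not the authors' exact formulation.

```python
# Hypothetical sketch: depth-binned weight-distribution descriptor plus 2D "atlas" projection.
# Bin counts, the histogram value range, and PCA are assumptions, not the paper's method.
import numpy as np
import torch.nn as nn
from sklearn.decomposition import PCA

def weight_descriptor(model: nn.Module, depth_bins: int = 8, hist_bins: int = 32,
                      value_range: tuple = (-1.0, 1.0)) -> np.ndarray:
    """Concatenate per-depth histograms of weight values into one fixed-size vector."""
    # Collect weight tensors in layer order (biases skipped in this sketch).
    weights = [p.detach().cpu().numpy().ravel()
               for name, p in model.named_parameters() if name.endswith("weight")]
    n_layers = len(weights)
    descriptor = []
    for b in range(depth_bins):
        # Assign each layer a relative depth in [0, 1] and gather those falling in this bin.
        lo, hi = b / depth_bins, (b + 1) / depth_bins
        chunk = [w for i, w in enumerate(weights)
                 if lo <= i / max(n_layers - 1, 1) < hi
                 or (b == depth_bins - 1 and i == n_layers - 1)]
        values = np.concatenate(chunk) if chunk else np.zeros(1)
        hist, _ = np.histogram(values, bins=hist_bins, range=value_range, density=True)
        descriptor.append(hist)
    return np.concatenate(descriptor)  # shape: (depth_bins * hist_bins,)

# Usage example: descriptors for a small library of toy MLPs, projected to 2D.
models = [nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10)) for _ in range(20)]
X = np.stack([weight_descriptor(m) for m in models])
atlas_2d = PCA(n_components=2).fit_transform(X)  # one 2D point per model
print(atlas_2d.shape)  # (20, 2)
```

Because the descriptor depends only on named weight tensors and their relative depth, the same function applies to different architectures, which is the model-agnostic property the abstract emphasizes.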