Title: KIN-FDNet: Dual-Branch KAN-INN Decomposition Network for Multi-Modality Image Fusion
Authors: Dong, Aimei; Meng, Hao; Chen, Zhen
Editors: Christie, Marc; Han, Ping-Hsuan; Lin, Shih-Syun; Pietroni, Nico; Schneider, Teseo; Tsai, Hsin-Ruey; Wang, Yu-Shuen; Zhang, Eugene
Date issued: 2025
ISBN: 978-3-03868-295-0
DOI: https://doi.org/10.2312/pg.20251280
URI: https://diglib.eg.org/handle/10.2312/pg20251280
Pages: 8
License: Attribution 4.0 International License
CCS Concepts: Computing methodologies → Image processing; Medical imaging

Abstract: Multi-modality image fusion (MMIF) aims to integrate information from different source images so that the complementary information of each modality, such as feature highlights and texture details, is preserved. However, current fusion methods fail to effectively address inter-modality interference and feature redundancy. To address these issues, we propose an end-to-end dual-branch KAN-INN decomposition network (KIN-FDNet) with an effective feature decoupling mechanism that separates shared and specific features. It first employs a gated attention-based Transformer module for cross-modal shallow feature extraction. We then embed KAN into the Transformer architecture to extract low-frequency global features and to mitigate the low parameter efficiency of multi-branch models, while an invertible neural network (INN) processes high-frequency local information to preserve fine-grained modality-specific details. In addition, we design a dual-frequency cross-fusion module that promotes information interaction between the low- and high-frequency branches to obtain high-quality fused images. Extensive experiments on visible-infrared fusion (VIF) and medical image fusion (MIF) tasks demonstrate the superior performance and generalization ability of our KIN-FDNet framework.
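
The record does not include code. As a rough illustration of the kind of invertible block an INN branch typically relies on for the high-frequency path, the sketch below shows a standard affine coupling layer in PyTorch; it is a minimal example under assumed names and sizes (AffineCoupling, 32 channels, hidden width 64), not the authors' implementation. Invertibility is the reason such a branch can mix features without discarding fine-grained detail.

import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Illustrative affine coupling block (hypothetical, not from the paper).
    One half of the channels predicts a scale and shift for the other half;
    the mapping is exactly invertible, so no detail information is lost."""
    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()
        half = channels // 2
        self.net = nn.Sequential(
            nn.Conv2d(half, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 2 * half, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = x.chunk(2, dim=1)
        log_s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(log_s)                # bounded scale for stability
        y2 = x2 * torch.exp(s) + t
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y: torch.Tensor) -> torch.Tensor:
        y1, y2 = y.chunk(2, dim=1)
        log_s, t = self.net(y1).chunk(2, dim=1)
        s = torch.tanh(log_s)
        x2 = (y2 - t) * torch.exp(-s)
        return torch.cat([y1, x2], dim=1)

if __name__ == "__main__":
    block = AffineCoupling(channels=32)
    feats = torch.randn(1, 32, 64, 64)       # stand-in high-frequency features
    recon = block.inverse(block(feats))
    print(torch.allclose(feats, recon, atol=1e-5))  # True: the block is invertible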