KIN-FDNet: Dual-Branch KAN-INN Decomposition Network for Multi-Modality Image Fusion

Date
2025
Publisher
The Eurographics Association
Abstract
Multi-modality image fusion (MMIF) aims to integrate information from different source images while preserving the complementary information of each modality, such as feature highlights and texture details. However, current fusion methods fail to effectively address inter-modality interference and feature redundancy. To address these issues, we propose an end-to-end dual-branch KAN-INN decomposition network (KIN-FDNet) with an effective feature-decoupling mechanism that separates shared and modality-specific features. It first employs a gated attention-based Transformer module for cross-modal shallow feature extraction. Then, we embed a Kolmogorov-Arnold network (KAN) into the Transformer architecture to extract low-frequency global features while alleviating the low parameter efficiency of multi-branch models. Meanwhile, an invertible neural network (INN) processes high-frequency local information to preserve fine-grained modality-specific details. In addition, we design a dual-frequency cross-fusion module that promotes information interaction between the low- and high-frequency branches to obtain high-quality fused images. Extensive experiments on visible-infrared fusion (VIF) and medical image fusion (MIF) tasks demonstrate the superior performance and generalization ability of our KIN-FDNet framework.
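The abstract's central idea, splitting each modality into low-frequency global content and high-frequency local detail, then fusing the two bands separately, can be illustrated with a classical (non-learned) sketch. This is not the paper's method: it replaces the KAN-Transformer and INN branches with a simple box-filter low-pass split, an averaging rule for the low band, and a max rule for the high band, purely to show the decomposition-and-cross-band-fusion principle. All function names here (`box_blur`, `decompose`, `fuse`) are hypothetical.

```python
# Hedged sketch of two-band fusion. NOT KIN-FDNet: the learned branches are
# replaced by a box-filter frequency split and hand-picked fusion rules.

def box_blur(img, k=1):
    """Low-pass filter: (2k+1)x(2k+1) box average with clamped borders."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[min(max(i + di, 0), h - 1)][min(max(j + dj, 0), w - 1)]
                    for di in range(-k, k + 1) for dj in range(-k, k + 1)]
            out[i][j] = sum(vals) / len(vals)
    return out

def decompose(img, k=1):
    """Split an image into a low-frequency base and a high-frequency residual.

    By construction low + high reconstructs the input exactly."""
    low = box_blur(img, k)
    high = [[img[i][j] - low[i][j] for j in range(len(img[0]))]
            for i in range(len(img))]
    return low, high

def fuse(img_a, img_b, k=1):
    """Fuse two aligned single-channel images band by band.

    Low band: average, keeping shared global structure.
    High band: element-wise max, keeping the stronger local detail."""
    low_a, high_a = decompose(img_a, k)
    low_b, high_b = decompose(img_b, k)
    h, w = len(img_a), len(img_a[0])
    return [[0.5 * (low_a[i][j] + low_b[i][j]) + max(high_a[i][j], high_b[i][j])
             for j in range(w)] for i in range(h)]
```

In KIN-FDNet the fixed blur and fusion rules above are replaced by learned modules (the KAN-Transformer branch for the low band, the INN branch for the high band, and the dual-frequency cross-fusion module for their interaction), but the band-split-then-fuse structure is the same.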
CCS Concepts: Computing methodologies → Image processing; Medical imaging

        
@inproceedings{10.2312:pg.20251280,
  booktitle = {Pacific Graphics Conference Papers, Posters, and Demos},
  editor    = {Christie, Marc and Han, Ping-Hsuan and Lin, Shih-Syun and Pietroni, Nico and Schneider, Teseo and Tsai, Hsin-Ruey and Wang, Yu-Shuen and Zhang, Eugene},
  title     = {{KIN-FDNet: Dual-Branch KAN-INN Decomposition Network for Multi-Modality Image Fusion}},
  author    = {Dong, Aimei and Meng, Hao and Chen, Zhen},
  year      = {2025},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-295-0},
  DOI       = {10.2312/pg.20251280}
}