KIN-FDNet: Dual-Branch KAN-INN Decomposition Network for Multi-Modality Image Fusion

dc.contributor.author: Dong, Aimei (en_US)
dc.contributor.author: Meng, Hao (en_US)
dc.contributor.author: Chen, Zhen (en_US)
dc.contributor.editor: Christie, Marc (en_US)
dc.contributor.editor: Han, Ping-Hsuan (en_US)
dc.contributor.editor: Lin, Shih-Syun (en_US)
dc.contributor.editor: Pietroni, Nico (en_US)
dc.contributor.editor: Schneider, Teseo (en_US)
dc.contributor.editor: Tsai, Hsin-Ruey (en_US)
dc.contributor.editor: Wang, Yu-Shuen (en_US)
dc.contributor.editor: Zhang, Eugene (en_US)
dc.date.accessioned: 2025-10-07T06:03:44Z
dc.date.available: 2025-10-07T06:03:44Z
dc.date.issued: 2025
dc.description.abstract: Multi-modality image fusion (MMIF) aims to integrate information from different source images so as to preserve the complementary information of each modality, such as feature highlights and texture details. However, current fusion methods fail to effectively address inter-modality interference and feature redundancy. To address these issues, we propose an end-to-end dual-branch KAN-INN decomposition network (KIN-FDNet) with an effective feature-decoupling mechanism that separates shared and modality-specific features. It first employs a gated attention-based Transformer module for cross-modal shallow feature extraction. We then embed KAN into the Transformer architecture to extract low-frequency global features and alleviate the low parameter efficiency of multi-branch models. Meanwhile, an invertible neural network (INN) processes high-frequency local information to preserve fine-grained modality-specific details. In addition, we design a dual-frequency cross-fusion module that promotes interaction between low- and high-frequency information to obtain high-quality fused images. Extensive experiments on visible-infrared image fusion (VIF) and medical image fusion (MIF) tasks demonstrate the superior performance and generalization ability of our KIN-FDNet framework. (en_US)
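For orientation, the sketch below illustrates the dual-branch decomposition idea described in the abstract: gated-attention shallow feature extraction, a low-frequency global branch, an invertible affine-coupling step for high-frequency detail, and a dual-frequency cross-fusion step. All module names, channel sizes, and the simplified "KAN" placeholder (a plain MLP here) are illustrative assumptions, not the authors' released implementation.

```python
# Minimal, illustrative PyTorch sketch of the dual-branch decomposition idea.
# Module names and shapes are assumptions; the KAN layer is replaced by an MLP.
import torch
import torch.nn as nn

class GatedAttentionBlock(nn.Module):
    """Shallow cross-modal feature extractor: self-attention gated by a sigmoid branch."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, x):                      # x: (B, N, C) token sequence
        h = self.norm(x)
        a, _ = self.attn(h, h, h)
        return x + self.gate(h) * a            # gated residual attention

class KANLikeBranch(nn.Module):
    """Low-frequency / global branch. A real KAN layer would go here;
    a two-layer MLP stands in as a placeholder."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim * 2),
                                 nn.GELU(), nn.Linear(dim * 2, dim))

    def forward(self, x):
        return x + self.mlp(x)

class AffineCouplingINN(nn.Module):
    """High-frequency / local branch: one invertible affine coupling step."""
    def __init__(self, dim):
        super().__init__()
        half = dim // 2
        self.scale = nn.Sequential(nn.Linear(half, half), nn.Tanh())
        self.shift = nn.Linear(half, half)

    def forward(self, x):                      # x: (B, N, C), C even
        x1, x2 = x.chunk(2, dim=-1)
        y2 = x2 * torch.exp(self.scale(x1)) + self.shift(x1)
        return torch.cat([x1, y2], dim=-1)

    def inverse(self, y):                      # exact inverse of forward
        y1, y2 = y.chunk(2, dim=-1)
        x2 = (y2 - self.shift(y1)) * torch.exp(-self.scale(y1))
        return torch.cat([y1, x2], dim=-1)

class DualFrequencyFusion(nn.Module):
    """Cross-fuse the low- and high-frequency streams of both modalities."""
    def __init__(self, dim):
        super().__init__()
        self.mix = nn.Linear(4 * dim, dim)

    def forward(self, low_a, high_a, low_b, high_b):
        return self.mix(torch.cat([low_a, high_a, low_b, high_b], dim=-1))

class KINFDNetSketch(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.shallow = GatedAttentionBlock(dim)
        self.low = KANLikeBranch(dim)
        self.high = AffineCouplingINN(dim)
        self.fuse = DualFrequencyFusion(dim)

    def forward(self, tokens_a, tokens_b):     # tokenized modality-A / modality-B features
        fa, fb = self.shallow(tokens_a), self.shallow(tokens_b)
        return self.fuse(self.low(fa), self.high(fa), self.low(fb), self.high(fb))

if __name__ == "__main__":
    a = torch.randn(1, 256, 64)                # e.g. 16x16 patch tokens, 64 channels
    b = torch.randn(1, 256, 64)
    print(KINFDNetSketch()(a, b).shape)        # torch.Size([1, 256, 64])
```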
dc.description.sectionheaders: Multi-Modality
dc.description.seriesinformation: Pacific Graphics Conference Papers, Posters, and Demos
dc.identifier.doi: 10.2312/pg.20251280
dc.identifier.isbn: 978-3-03868-295-0
dc.identifier.pages: 8 pages
dc.identifier.uri: https://doi.org/10.2312/pg.20251280
dc.identifier.uri: https://diglib.eg.org/handle/10.2312/pg20251280
dc.publisher: The Eurographics Association (en_US)
dc.rights: Attribution 4.0 International License
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: CCS Concepts: Computing methodologies → Image processing; Medical imaging
dc.subject: Computing methodologies → Image processing
dc.subject: Medical imaging
dc.title: KIN-FDNet: Dual-Branch KAN-INN Decomposition Network for Multi-Modality Image Fusion (en_US)
Files (Original bundle):
pg20251280.pdf: 4.77 MB, Adobe Portable Document Format
paper1134_mm5.pdf: 69.27 KB, Adobe Portable Document Format