Affiliations: [1] Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, China; [2] MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, China; [3] Division of Cardiology, Department of Internal Medicine, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
Abstract: Electrocardiograms (ECGs) and phonocardiograms (PCGs) are two modalities that provide complementary diagnostic information for improving the early detection accuracy of cardiovascular diseases (CVDs). Existing multi-modality methods mainly used early or late feature fusion strategies, which did not simultaneously utilize the complementary information contained in the low-level detail features and high-level semantic features of different modalities. Moreover, they were specially designed for the multi-modality scenario with both ECGs and PCGs, without considering the missing-modality scenarios with only ECGs or only PCGs that arise in clinical practice. To address these challenges, we developed a Co-learning-assisted Progressive Dense fusion network (CPDNet) for end-to-end CVD detection, with a three-branch interweaving architecture consisting of ECG and PCG modality-specific encoders and a progressive dense fusion encoder, which can be used in both multi-modality and missing-modality scenarios. Specifically, we designed a novel progressive dense fusion strategy, which not only progressively fuses multi-level complementary information of different modalities from low-level details to high-level semantics, but also employs dense fusion at each level to further enrich the available multi-modality information through mutual guidance of features at different levels. The strategy also integrates cross-modality region-aware and multi-scale feature optimization modules to fully evaluate the contributions of different modalities and signal regions and to enhance the network's ability to extract features from multi-scale target regions. Moreover, we designed a novel co-learning strategy that guides the learning process of the CPDNet by combining intra-modality and joint losses, ensuring that each encoder is well trained. This strategy not only assists our fusion strategy by making the modality-specific encoders provide sufficiently discriminative features for the fusion encoder, but also enables the CPDNet to robustly handle missing-modality scenarios by independently using the corresponding modality-specific encoder. Experimental results on public and private datasets demonstrated that our method not only outperformed state-of-the-art multi-modality methods by at least 5.05% in average accuracy in the multi-modality scenario, but also achieved better performance than single-modality models in the missing-modality scenarios.
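As a rough illustration of the architecture described in the abstract, the PyTorch sketch below wires two modality-specific encoders and a progressive fusion encoder into a three-branch model and combines a joint loss with intra-modality losses. All names (ModalityEncoder, FusionEncoder, CPDNet, co_learning_loss), channel sizes, input shapes, and the loss weighting are assumptions for illustration only; the published CPDNet additionally includes cross-modality region-aware and multi-scale feature optimization modules that are omitted here.

```python
# Minimal sketch of the three-branch idea in the abstract (assumed design,
# not the authors' implementation).
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # 1-D conv block shared by all branches; halves the temporal length.
    return nn.Sequential(
        nn.Conv1d(in_ch, out_ch, kernel_size=7, padding=3),
        nn.BatchNorm1d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool1d(2),
    )


class ModalityEncoder(nn.Module):
    """Modality-specific encoder (ECG or PCG) with its own classifier head,
    so it can be used alone in a missing-modality scenario."""

    def __init__(self, levels=(16, 32, 64), num_classes=2):
        super().__init__()
        chans = (1,) + tuple(levels)
        self.blocks = nn.ModuleList(
            [conv_block(chans[i], chans[i + 1]) for i in range(len(levels))])
        self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                  nn.Linear(levels[-1], num_classes))

    def forward(self, x):
        feats = []                    # features from low level to high level
        for blk in self.blocks:
            x = blk(x)
            feats.append(x)
        return feats, self.head(x)    # per-level features + intra-modality logits


class FusionEncoder(nn.Module):
    """Fuses ECG and PCG features level by level; the fused state of each level
    is concatenated into the next one (a simplified stand-in for dense fusion)."""

    def __init__(self, levels=(16, 32, 64), num_classes=2):
        super().__init__()
        in_chs = [2 * levels[0]] + [2 * levels[i] + levels[i - 1]
                                    for i in range(1, len(levels))]
        self.fuse = nn.ModuleList(
            [conv_block(c_in, c_out) for c_in, c_out in zip(in_chs, levels)])
        self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                  nn.Linear(levels[-1], num_classes))

    def forward(self, ecg_feats, pcg_feats):
        fused = None
        for i, fuse in enumerate(self.fuse):
            parts = [ecg_feats[i], pcg_feats[i]] + ([fused] if fused is not None else [])
            fused = fuse(torch.cat(parts, dim=1))
        return self.head(fused)       # joint (multi-modality) logits


class CPDNet(nn.Module):
    """Three-branch interweaving architecture: ECG encoder, PCG encoder,
    and the progressive fusion encoder."""

    def __init__(self, num_classes=2):
        super().__init__()
        self.ecg_enc = ModalityEncoder(num_classes=num_classes)
        self.pcg_enc = ModalityEncoder(num_classes=num_classes)
        self.fusion = FusionEncoder(num_classes=num_classes)

    def forward(self, ecg, pcg):
        ecg_feats, ecg_logits = self.ecg_enc(ecg)
        pcg_feats, pcg_logits = self.pcg_enc(pcg)
        return ecg_logits, pcg_logits, self.fusion(ecg_feats, pcg_feats)


def co_learning_loss(outputs, target, intra_weight=0.5):
    # Joint loss plus weighted intra-modality losses (weighting is an assumption).
    ecg_logits, pcg_logits, joint_logits = outputs
    ce = nn.functional.cross_entropy
    return (ce(joint_logits, target)
            + intra_weight * (ce(ecg_logits, target) + ce(pcg_logits, target)))


if __name__ == "__main__":
    model = CPDNet(num_classes=2)
    ecg = torch.randn(4, 1, 2048)     # batch of ECG segments (hypothetical shape)
    pcg = torch.randn(4, 1, 2048)     # batch of PCG segments (hypothetical shape)
    labels = torch.randint(0, 2, (4,))
    loss = co_learning_loss(model(ecg, pcg), labels)
    loss.backward()
    # Missing-modality inference: use one modality-specific branch alone.
    _, ecg_only_logits = model.ecg_enc(ecg)
```

Because each modality-specific encoder carries its own classification head and receives its own loss term, the sketch can also run with a single modality by calling that encoder directly, mirroring the co-learning idea described in the abstract.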
Funding:
National Key R & D Program of China [2022YFE0200600]; National Natural Science Foundation of China [62006087]; Science Fund for Creative Research Group of China [61721092]; Wuhan National Laboratory for Optoelectronics (WNLO)
First author's affiliation: [1] Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, China; [2] MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, China
Co-first authors:
Corresponding author:
Corresponding institution: [1] Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, China; [2] MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, China; [*1] Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, China
Recommended citation (GB/T 7714):
Zhang Haobo, Zhang Peng, Lin Fan, et al. Co-learning-assisted progressive dense fusion network for cardiovascular disease detection using ECG and PCG signals[J]. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 238. DOI: 10.1016/j.eswa.2023.122144.
APA:
Zhang, Haobo, Zhang, Peng, Lin, Fan, Chao, Lianying, Wang, Zhiwei, ... & Li, Qiang. (2024). Co-learning-assisted progressive dense fusion network for cardiovascular disease detection using ECG and PCG signals. EXPERT SYSTEMS WITH APPLICATIONS, 238. https://doi.org/10.1016/j.eswa.2023.122144
MLA:
Zhang, Haobo, et al. "Co-learning-assisted progressive dense fusion network for cardiovascular disease detection using ECG and PCG signals." EXPERT SYSTEMS WITH APPLICATIONS 238 (2024).