
ISA-Net: Improved spatial attention network for PET-CT tumor segmentation

Document Details


Indexed in: SCIE

Affiliations: [1]Chinese Acad Sci, Shenzhen Inst Adv Technol, Lauterbur Res Ctr Biomed Imaging, Shenzhen 518055, Peoples R China [2]Univ Chinese Acad Sci, Beijing 101408, Peoples R China [3]Huazhong Univ Sci & Technol, Tongji Hosp, Tongji Med Coll, Dept Nucl Med & PET, Wuhan 430000, Peoples R China [4]Chinese Acad Sci, Key Lab Hlth Informat, Shenzhen 518055, Peoples R China [5]Chinese Acad Sci, Brain Cognit & Brain Dis Inst BCBDI, Shenzhen Inst Adv Technol, Shenzhen 518055, Peoples R China [6]Shenzhen Fundamental Res Inst, Shenzhen Hong Kong Inst Brain Sci, Shenzhen 518055, Peoples R China [7]United Imaging Res Inst Innovat Med Equipment, Shenzhen 518045, Peoples R China [8]Shenzhen Bay Lab, Inst Biomed Engn, Shenzhen 518118, Peoples R China

Keywords: Tumor segmentation; Multimodal PET-CT; Deep learning; Attention network

Abstract:
Background and Objective: Accurate, automated tumor segmentation plays an important role in both clinical practice and radiomics research. Segmentation in medicine is still often performed manually by experts, which is a laborious, expensive and error-prone task. Manual annotation relies heavily on the experience and knowledge of these experts, and there is considerable intra- and inter-observer variation. It is therefore of great significance to develop a method that can automatically segment tumor target regions.

Methods: In this paper, we propose a deep learning segmentation method based on multimodal positron emission tomography-computed tomography (PET-CT), which combines the high sensitivity of PET with the precise anatomical information of CT. We design an improved spatial attention network (ISA-Net) to increase the accuracy of PET or CT in detecting tumors; it uses multi-scale convolution operations to extract feature information, highlighting tumor-region location information while suppressing non-tumor-region location information. In addition, our network uses dual-channel inputs in the encoding stage and fuses them in the decoding stage, which exploits the differences and complementarities between PET and CT.

Results: We validated the proposed ISA-Net method on two clinical datasets, a soft tissue sarcoma (STS) dataset and a head and neck tumor (HECKTOR) dataset, and compared it with other attention-based tumor segmentation methods. DSC scores of 0.8378 on the STS dataset and 0.8076 on the HECKTOR dataset show that ISA-Net achieves better segmentation performance and better generalization.

Conclusions: The method proposed in this paper performs multimodal medical image tumor segmentation and can effectively exploit the differences and complementarities between modalities. With appropriate adjustment, it can also be applied to other multimodal or single-modal data.

© 2022 Elsevier B.V. All rights reserved.
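To make the attention mechanism described in the Methods more concrete, the following is a minimal PyTorch sketch of a spatial attention block that uses multi-scale convolutions to produce a spatial weighting map. It is an illustrative, CBAM-style interpretation; the kernel sizes, pooling descriptors, and fusion by summation are assumptions and are not taken from the paper's actual ISA-Net design.

```python
import torch
import torch.nn as nn


class MultiScaleSpatialAttention(nn.Module):
    """Illustrative spatial attention with parallel multi-scale convolutions.

    Channel-wise average and max maps are concatenated, passed through
    convolutions of several kernel sizes, summed, and squashed to a [0, 1]
    spatial mask that re-weights the input feature map.
    """

    def __init__(self, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(2, 1, k, padding=k // 2, bias=False) for k in kernel_sizes
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # Collapse channels into two spatial descriptor maps.
        avg_map = torch.mean(x, dim=1, keepdim=True)        # (B, 1, H, W)
        max_map, _ = torch.max(x, dim=1, keepdim=True)      # (B, 1, H, W)
        descriptor = torch.cat([avg_map, max_map], dim=1)   # (B, 2, H, W)

        # Sum the multi-scale responses and turn them into an attention mask.
        attention = self.sigmoid(sum(branch(descriptor) for branch in self.branches))
        return x * attention  # emphasize tumor-region locations, suppress the rest


if __name__ == "__main__":
    feats = torch.randn(1, 64, 128, 128)  # e.g. features from a PET or CT encoder branch
    out = MultiScaleSpatialAttention()(feats)
    print(out.shape)  # torch.Size([1, 64, 128, 128])
```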
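The Results are reported as DSC (Dice similarity coefficient) scores. For reference, a minimal computation of the metric over binary tumor masks is sketched below; this is a generic definition of Dice, not code from the paper, and the smoothing term `eps` is an assumed convention to avoid division by zero.

```python
import torch


def dice_score(pred_mask: torch.Tensor, true_mask: torch.Tensor, eps: float = 1e-6) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred_mask.float().flatten()
    true = true_mask.float().flatten()
    intersection = (pred * true).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + true.sum() + eps))


if __name__ == "__main__":
    a = torch.tensor([[0, 1, 1], [0, 1, 0]])  # predicted mask
    b = torch.tensor([[0, 1, 0], [0, 1, 0]])  # ground-truth mask
    print(round(dice_score(a, b), 4))  # 0.8  (= 2*2 / (3 + 2))
```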

CAS Journal Ranking:
Publication-year [2021] edition:
Major category | Tier 2, Engineering & Technology
Subcategories | Tier 2, Computer Science: Theory & Methods; Tier 2, Engineering: Biomedical; Tier 3, Computer Science: Interdisciplinary Applications; Tier 3, Medical Informatics
Latest [2025] edition:
Major category | Tier 2, Medicine
Subcategories | Tier 2, Computer Science: Interdisciplinary Applications; Tier 2, Computer Science: Theory & Methods; Tier 2, Engineering: Biomedical; Tier 3, Medical Informatics
JCR Quartiles:
Publication-year [2020] edition:
Q1 COMPUTER SCIENCE, THEORY & METHODS; Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS; Q1 ENGINEERING, BIOMEDICAL; Q1 MEDICAL INFORMATICS
Latest [2023] edition:
Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS; Q1 COMPUTER SCIENCE, THEORY & METHODS; Q1 ENGINEERING, BIOMEDICAL; Q1 MEDICAL INFORMATICS


First author's affiliations: [1]Chinese Acad Sci, Shenzhen Inst Adv Technol, Lauterbur Res Ctr Biomed Imaging, Shenzhen 518055, Peoples R China [2]Univ Chinese Acad Sci, Beijing 101408, Peoples R China
Corresponding author's affiliations: [1]Chinese Acad Sci, Shenzhen Inst Adv Technol, Lauterbur Res Ctr Biomed Imaging, Shenzhen 518055, Peoples R China [4]Chinese Acad Sci, Key Lab Hlth Informat, Shenzhen 518055, Peoples R China

