TY - JOUR
T1 - AID-U-Net
T2 - An Innovative Deep Convolutional Architecture for Semantic Segmentation of Biomedical Images
AU - Tashk, Ashkan
AU - Herp, Jürgen
AU - Bjørsum-Meyer, Thomas
AU - Koulaouzidis, Anastasios
AU - Nadimi, Esmaeil
PY - 2022/11/25
Y1 - 2022/11/25
N2 - Semantic segmentation of biomedical images has found its niche in screening and diagnostic applications. Recent methods based on deep convolutional neural networks have been very effective, since they readily adapt to biomedical applications and outperform other competitive segmentation methods. Inspired by the U-Net, we designed a deep learning network with an innovative architecture, hereafter referred to as AID-U-Net. Our network consists of direct contracting and expansive paths, as well as a distinguishing feature: sub-contracting and sub-expansive paths. Implementation results on seven different databases of medical images demonstrated that our proposed network outperforms state-of-the-art solutions with no specific pre-trained backbones for both 2D and 3D biomedical image segmentation tasks. Furthermore, we showed that AID-U-Net dramatically reduces inference time and computational complexity in terms of the number of learnable parameters. The results further show that the proposed AID-U-Net can segment different medical objects, achieving improvements of 3.82% in 2D F1-score and 2.99% in 3D mean BF-score.
AB - Semantic segmentation of biomedical images has found its niche in screening and diagnostic applications. Recent methods based on deep convolutional neural networks have been very effective, since they readily adapt to biomedical applications and outperform other competitive segmentation methods. Inspired by the U-Net, we designed a deep learning network with an innovative architecture, hereafter referred to as AID-U-Net. Our network consists of direct contracting and expansive paths, as well as a distinguishing feature: sub-contracting and sub-expansive paths. Implementation results on seven different databases of medical images demonstrated that our proposed network outperforms state-of-the-art solutions with no specific pre-trained backbones for both 2D and 3D biomedical image segmentation tasks. Furthermore, we showed that AID-U-Net dramatically reduces inference time and computational complexity in terms of the number of learnable parameters. The results further show that the proposed AID-U-Net can segment different medical objects, achieving improvements of 3.82% in 2D F1-score and 2.99% in 3D mean BF-score.
U2 - 10.3390/diagnostics12122952
DO - 10.3390/diagnostics12122952
M3 - Journal article
C2 - 36552959
SN - 2075-4418
VL - 12
JO - Diagnostics
JF - Diagnostics
IS - 12
M1 - 2952
ER -