Comparing Objective Functions for Segmentation and Detection of Microaneurysms in Retinal Images

Research output: Chapter in Book/Report/Conference proceeding, Article in proceedings, Research, peer-review

Abstract

Retinal microaneurysms (MAs) are the earliest signs of diabetic retinopathy (DR), which is the leading cause of blindness among the working-aged population in the western world. Detection of MAs presents a particular challenge, as MA pixels account for less than 0.5% of the retinal image. In deep neural networks the learning process can be adversely affected by class imbalance, which introduces a bias towards the most well represented class. Recently, a number of objective functions have been proposed as alternatives to the standard Cross entropy (CE) loss in efforts to combat this problem. In this work we investigate the influence of the network objective during optimization by comparing Residual U-nets trained for segmentation of MAs in retinal images using six different objective functions: weighted and unweighted CE, Dice loss, weighted and unweighted Focal loss, and Focal Tversky loss. We also perform tests with the CE objective using a more complex model. Three networks with different seeds are trained for each objective function using optimized hyper-parameter settings on a dataset of 382 images with pixel-level annotations for MAs. Instance-level MA detection performance is evaluated with the average free-response receiver operating characteristic (FROC) score, calculated as the mean sensitivity at seven average-false-positives-per-image thresholds on 80 test images. Image-level MA detection performance and detection of low levels of DR are evaluated with bootstrapped AUC scores on the same images and on a separate test set of 1287 images. Significance tests for image-level detection accuracy (α = 0.05) are performed using Cochran's Q and McNemar's test. Segmentation performance is evaluated with the average pixel precision (AP) score. For instance-level detection and pixel segmentation we perform repeated-measures ANOVA with post-hoc tests.
Results: Losses based on the CE index perform significantly better than the Dice and Focal Tversky losses for instance-level detection and pixel segmentation. The highest FROC score of 0.5448 (±0.0096) and the highest AP of 0.4888 (±0.0196) are achieved using weighted CE. For all objectives excluding the Focal Tversky loss (AUC = 0.5), there is no significant difference in image-level detection accuracy on the 80-image test set. The highest AUC of 0.993 (95% CI: 0.980 - 1.0) is achieved using the Focal loss. For detection of mild DR on the set of 1287 images there is a significant difference between model objectives (p = 2.87e−12). An AUC of 0.730 (95% CI: 0.683 - 0.745) is achieved using the complex model with CE. Using the Focal Tversky objective we fail to detect any MAs at both the instance and image level.

Conclusion: Our results suggest that it is important to benchmark new losses against the CE and Focal loss functions, as we achieve similar or better results in our tests using these objectives.
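For reference, the loss families compared in the abstract can be sketched as follows for binary (foreground/background) probability maps. This is an illustrative NumPy sketch, not the study's implementation: the hyper-parameter defaults (class weight, focusing parameter γ, Tversky α/β) are assumptions for demonstration and are not the optimized settings used in the paper.

```python
import numpy as np

def cross_entropy(p, g, w_pos=1.0, eps=1e-7):
    """Binary cross-entropy; w_pos > 1 up-weights the rare foreground class."""
    p = np.clip(p, eps, 1 - eps)
    return (-(w_pos * g * np.log(p) + (1 - g) * np.log(1 - p))).mean()

def focal_loss(p, g, gamma=2.0, eps=1e-7):
    """Focal loss: the (1 - p_t)^gamma factor down-weights easy pixels."""
    p = np.clip(p, eps, 1 - eps)
    p_t = np.where(g == 1, p, 1 - p)  # probability of the true class
    return (-((1 - p_t) ** gamma) * np.log(p_t)).mean()

def dice_loss(p, g, eps=1e-7):
    """Soft Dice loss: one minus the overlap score for the foreground."""
    inter = (p * g).sum()
    return 1 - (2 * inter + eps) / (p.sum() + g.sum() + eps)

def focal_tversky_loss(p, g, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky: Tversky index (separate FN/FP weights) raised to gamma."""
    tp = (p * g).sum()
    fn = ((1 - p) * g).sum()
    fp = (p * (1 - g)).sum()
    ti = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1 - ti) ** gamma
```

The CE-based losses operate per pixel, whereas the Dice and Tversky losses operate on region overlap over the whole map, which is one common rationale for using them under extreme class imbalance such as MA segmentation.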
Original language: English
Title of host publication: Proceedings of MIDL 2020
Editors: Tal Arbel, Ismail Ben Ayed, Marleen de Bruijne, Maxime Descoteaux, Herve Lombaert, Christopher Pal
Volume: 121
Publication date: 2020
Pages: 19-32
Publication status: Published - 2020
Event: Medical Imaging with Deep Learning - Montreal, Canada
Duration: 6 Jul 2020 - 8 Jul 2020

Conference

Conference: Medical Imaging with Deep Learning
Country: Canada
City: Montreal
Period: 06/07/2020 - 08/07/2020
Series: Proceedings of Machine Learning Research
Volume: 121
ISSN: 2640-3498


  • Cite this

    Andersen, J. K. H., Grauslund, J., & Savarimuthu, T. R. (2020). Comparing Objective Functions for Segmentation and Detection of Microaneurysms in Retinal Images. In T. Arbel, I. Ben Ayed, M. de Bruijne, M. Descoteaux, H. Lombaert, & C. Pal (Eds.), Proceedings of MIDL 2020 (pp. 19-32). Proceedings of Machine Learning Research, Vol. 121. http://proceedings.mlr.press/