Comparing Objective Functions for Segmentation and Detection of Microaneurysms in Retinal Images

Publication: Chapter in book/report/conference proceeding › Conference contribution in proceedings › Research › peer-reviewed


Retinal microaneurysms (MAs) are the earliest signs of diabetic retinopathy (DR), which is the leading cause of blindness among the working-age population in the Western world. Detection of MAs presents a particular challenge, as MA pixels account for less than 0.5% of the retinal image. In deep neural networks the learning process can be adversely affected by imbalance, which introduces a bias towards the best-represented class. Recently, a number of objective functions have been proposed as alternatives to the standard cross-entropy (CE) loss in an effort to combat this problem. In this work we investigate the influence of the network objective during optimization by comparing Residual U-nets trained for segmentation of MAs in retinal images using six different objective functions: weighted and unweighted CE, Dice loss, weighted and unweighted Focal loss, and Focal Tversky loss. We also perform tests with the CE objective using a more complex model. Three networks with different seeds are trained for each objective function using optimized hyper-parameter settings on a dataset of 382 images with pixel-level annotations for MAs. Instance-level MA detection performance is evaluated with the average free-response receiver operating characteristic (FROC) score, calculated as the mean sensitivity at seven average-false-positives-per-image thresholds on 80 test images. Image-level MA detection performance and detection of low levels of DR are evaluated with bootstrapped AUC scores on the same images and a separate test set of 1287 images. Significance tests for image-level detection accuracy (α = 0.05) are performed using Cochran's Q and McNemar's test. Segmentation performance is evaluated with the average pixel precision (AP) score. For instance-level detection and pixel segmentation we perform repeated-measures ANOVA with post-hoc tests.
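As a rough illustration of the objectives compared above (not the authors' implementation, and with illustrative hyper-parameter values that are assumptions), the losses can be sketched for binary segmentation as follows:

```python
import numpy as np

def cross_entropy(p, y, w_pos=1.0, eps=1e-7):
    """(Weighted) binary cross-entropy; w_pos up-weights the rare MA class."""
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(w_pos * y * np.log(p) + (1 - y) * np.log(1 - p))

def dice_loss(p, y, eps=1e-7):
    """Soft Dice loss: 1 - 2|P∩Y| / (|P| + |Y|)."""
    inter = np.sum(p * y)
    return 1 - (2 * inter + eps) / (np.sum(p) + np.sum(y) + eps)

def focal_loss(p, y, gamma=2.0, alpha=0.5, eps=1e-7):
    """Focal loss: down-weights easy examples via the (1 - p_t)^gamma factor."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)          # probability of the true class
    a = np.where(y == 1, alpha, 1 - alpha)   # class-balancing weight
    return -np.mean(a * (1 - pt) ** gamma * np.log(pt))

def focal_tversky_loss(p, y, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss: (1 - TI)^gamma, trading off FNs against FPs."""
    tp = np.sum(p * y)
    fn = np.sum((1 - p) * y)
    fp = np.sum(p * (1 - y))
    ti = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1 - ti) ** gamma
```

Note the design difference that motivates the comparison: CE and Focal losses average over pixels (so weighting is needed under imbalance), while the Dice and Tversky families are overlap ratios that are inherently less sensitive to the background majority.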
Results: Losses based on the CE index perform significantly better than the Dice and Focal Tversky losses for instance-level detection and pixel segmentation. The highest FROC score of 0.5448 (±0.0096) and AP of 0.4888 (±0.0196) are achieved using weighted CE. For all objectives excluding the Focal Tversky loss (AUC = 0.5) there is no significant difference in image-level detection accuracy on the 80-image test set. The highest AUC of 0.993 (95% CI: 0.980 - 1.0) is achieved using the Focal loss. For detection of mild DR on the set of 1287 images there is a significant difference between model objectives (p = 2.87e−12). An AUC of 0.730 (95% CI: 0.683 - 0.745) is achieved using the complex model with CE. Using the Focal Tversky objective we fail to detect any MAs at both instance and image level. Conclusion: Our results suggest that it is important to benchmark new losses against the CE and Focal loss functions, as we achieve similar or better results in our tests using these objectives.
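The FROC score used for instance-level evaluation can be sketched as below. The seven operating points are not listed in the abstract; the values 1/8, 1/4, 1/2, 1, 2, 4 and 8 false positives per image are an assumption based on common practice, and the function and argument names are illustrative:

```python
import numpy as np

def froc_score(scores, is_tp, n_lesions, n_images,
               fp_targets=(0.125, 0.25, 0.5, 1, 2, 4, 8)):
    """Mean sensitivity at the given average-false-positives-per-image levels.

    scores    : confidence of each detected candidate (all images pooled)
    is_tp     : 1 if the candidate hits an annotated lesion, else 0
    n_lesions : total number of annotated lesions in the test set
    n_images  : number of test images
    """
    order = np.argsort(scores)[::-1]               # sweep threshold high -> low
    hits = np.asarray(is_tp, dtype=float)[order]
    tp = np.cumsum(hits)
    fp = np.cumsum(1 - hits)
    sens = tp / n_lesions                          # lesion-level sensitivity
    avg_fp = fp / n_images                         # average FPs per image
    # best sensitivity attainable while staying within each FP budget
    sens_at = [sens[avg_fp <= t][-1] if np.any(avg_fp <= t) else 0.0
               for t in fp_targets]
    return float(np.mean(sens_at))
```

Because `avg_fp` is non-decreasing along the sorted candidates, taking the last entry within each FP budget selects the lowest admissible threshold, i.e. the highest sensitivity at that operating point.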
Title: Proceedings of MIDL 2020
Editors: Tal Arbel, Ismail Ben Ayed, Marleen de Bruijne, Maxime Descoteaux, Herve Lombaert, Christopher Pal
Status: Published - 2020
Event: Medical Imaging with Deep Learning - Montreal, Canada
Duration: 6 Jul 2020 - 8 Jul 2020


Conference: Medical Imaging with Deep Learning
Series: Proceedings of Machine Learning Research


  • deep learning
  • Medical Imaging
  • Ophthalmology
  • Diabetes complications
  • artificial intelligence
