New Approach Boosts Efficiency in Medical Image Segmentation with Active Learning and nnUNet

A new study presented at the Medical Imaging with Deep Learning (MIDL) conference explores how active learning can be combined with the self-configuring nnUNet architecture to reduce annotation effort in medical image segmentation tasks. The work, co-authored by Bernhard Föllmer, Kenrick Schulze, Christian Wald, Sebastian Stober, Wojciech Samek, and Marc Dewey, compares different sampling strategies and introduces a novel method, USIM (Uncertainty-Aware Submodular Mutual Information Measure).

The proposed approach selects annotation samples that are uncertain, diverse, and representative, improving training efficiency and reducing expert labeling costs. Experiments across three segmentation datasets show that nnUNet performs reliably in active learning setups, and that most of the informed sampling strategies, USIM in particular, consistently outperform random selection.
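
To give a flavor of how such uncertainty-aware, diversity-seeking selection can work in practice, the sketch below shows a generic greedy batch-selection loop that trades off per-sample uncertainty against a facility-location-style representativeness gain. This is an illustrative assumption only, not the authors' USIM formulation: the function name, the cosine-similarity kernel, and the alpha trade-off parameter are all hypothetical stand-ins.

```python
import numpy as np

def select_batch(features, uncertainty, batch_size, alpha=0.5):
    """Greedy selection balancing uncertainty and diversity.

    Illustrative sketch only; not the paper's USIM method.

    features    : (n, d) array of embeddings for the unlabeled pool
    uncertainty : (n,) array of per-sample uncertainty scores (e.g. mean entropy)
    batch_size  : number of samples to send for expert annotation
    alpha       : trade-off between uncertainty and representativeness
    """
    n = features.shape[0]
    # Pairwise cosine similarities serve as the facility-location kernel.
    norm = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    sim = norm @ norm.T

    selected = []
    # Best similarity of each pool sample to the set selected so far.
    coverage = np.zeros(n)
    for _ in range(batch_size):
        # Marginal gain in representativeness if a candidate were added.
        gain = np.maximum(sim - coverage[None, :], 0.0).sum(axis=1)
        score = alpha * uncertainty + (1 - alpha) * gain / n
        score[selected] = -np.inf  # never pick the same sample twice
        best = int(np.argmax(score))
        selected.append(best)
        coverage = np.maximum(coverage, sim[best])
    return selected

# Toy usage with random stand-ins for embeddings and uncertainties.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 32))
unc = rng.uniform(size=200)
print(select_batch(feats, unc, batch_size=8))
```

In an actual segmentation workflow, the uncertainty scores would come from the model's predictions on unlabeled images, and the selected indices would be the cases forwarded to expert annotators before retraining.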

Congratulations to Bernhard Föllmer on this innovative contribution to data-efficient AI in medical imaging and the advancement of scalable segmentation workflows.

The paper is available in the MIDL 2024 proceedings (pp. 480–503, PMLR).