Deep Adversarial Networks for Biomedical Image Segmentation Utilizing Unannotated Images

  • Authors: Yizhe Zhang (Univ. of Notre Dame), Lin Yang (Univ. of Notre Dame), Jianxu Chen (Univ. of Notre Dame), Danny Chen (Univ. of Notre Dame), Maridel Frederickson (Penn State), David Hughes (Penn State)
  • Publication ID: P090920
  • Publication Type: Paper
  • Received Date: 17-May-2017
  • Last Edit Date: 19-Jun-2017
  • Research: 2698.005 (University of Notre Dame)

Abstract

Semantic segmentation is a fundamental problem in biomedical image analysis. In biomedical practice, it is a common situation that only limited annotated data are available for model training. Unannotated images, on the other hand, are easier to acquire. How to utilize unannotated images for training effective segmentation models is an important issue. In this paper, we propose a new deep adversarial network (DAN) model for biomedical image segmentation, aiming to attain consistently good segmentation results on both annotated and unannotated images. Our model consists of two networks: (1) a segmentation network (SN) to conduct segmentation; (2) an evaluation network (EN) to assess segmentation quality. During training, EN is encouraged to distinguish between segmentation results of unannotated images and annotated ones (by giving them different scores), while SN is encouraged to produce segmentation results of unannotated images such that EN cannot distinguish these from the annotated ones. Through an iterative adversarial training process, because EN is constantly "criticizing" the segmentation results of unannotated images, SN can be trained to produce more and more accurate segmentation for unannotated and unseen samples. Experiments show that our proposed DAN model is effective in utilizing unannotated image data to obtain considerably better segmentation.
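The adversarial objectives described above can be sketched numerically. In this illustrative sketch (not the paper's exact formulation), EN outputs a score in (0, 1) for each segmentation result, and both objectives are expressed as binary cross-entropy terms: EN is trained to score annotated-image results toward 1 and unannotated-image results toward 0, while SN is trained on the reversed target so that its unannotated-image results earn scores near 1. The function names and the choice of binary cross-entropy are assumptions for illustration.

```python
import numpy as np

def bce(scores, targets):
    # Binary cross-entropy between EN scores in (0, 1) and 0/1 targets.
    eps = 1e-7
    scores = np.clip(scores, eps, 1 - eps)
    return float(-np.mean(targets * np.log(scores)
                          + (1 - targets) * np.log(1 - scores)))

def en_loss(scores_annotated, scores_unannotated):
    # EN's objective (illustrative): give segmentation results of
    # annotated images high scores (target 1) and results of
    # unannotated images low scores (target 0).
    return (bce(scores_annotated, np.ones_like(scores_annotated))
            + bce(scores_unannotated, np.zeros_like(scores_unannotated)))

def sn_adversarial_loss(scores_unannotated):
    # SN's adversarial objective (illustrative): produce segmentations
    # of unannotated images that EN scores as if they were annotated
    # (target 1), i.e., fool EN.
    return bce(scores_unannotated, np.ones_like(scores_unannotated))

# Toy EN scores: EN discriminates well (0.9 vs. 0.1) vs. is fooled (0.5 vs. 0.5).
sharp = en_loss(np.array([0.9]), np.array([0.1]))
fooled = en_loss(np.array([0.5]), np.array([0.5]))
```

In full training, SN would also minimize a supervised segmentation loss on the annotated images; the adversarial term is what lets the unannotated images contribute gradient signal to SN.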
