OASIS: Only Adversarial Supervision for Semantic Image Synthesis
Despite their recent successes, generative adversarial networks (GANs) for semantic image synthesis still suffer from poor image quality when trained with only adversarial supervision. Previously, additionally employing the VGG-based perceptual loss has helped to overcome this issue, significantly improving the synthesis quality, but at the same time limited the progress of GAN models for semantic image synthesis. In this work, we propose a novel, simplified GAN model, which needs only adversarial supervision to achieve high quality results. We re-design the discriminator as a semantic segmentation network, directly using the given semantic label maps as the ground truth for training. By providing stronger supervision to the discriminator as well as to the generator through spatially- and semantically-aware discriminator feedback, we are able to synthesize images of higher fidelity and with a better alignment to their input label maps, making the use of the perceptual loss superfluous. Furthermore, we enable high-quality multi-modal image synthesis through global and local sampling of a 3D noise tensor injected into the generator, which allows complete or partial image editing. We show that images synthesized by our model are more diverse and follow the color and texture distributions of real images more closely. We achieve a strong improvement in image synthesis quality over prior state-of-the-art models across the commonly used ADE20K, Cityscapes, and COCO-Stuff datasets using only adversarial supervision. In addition, we investigate semantic image synthesis under severe class imbalance and sparse annotations, which are common aspects in practical applications but were overlooked in prior works. To this end, we evaluate our model on LVIS, a dataset originally introduced for long-tailed object recognition. We thereby demonstrate high performance of our model in the sparse and unbalanced data regimes, achieved by means of the proposed 3D noise and the ability of our discriminator to balance class contributions directly in the loss function. Our code and pretrained models are available at https://github.com/boschresearch/OASIS.
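The abstract names two mechanisms that a short sketch can make concrete: the discriminator trained as a per-pixel (N+1)-class segmentation network with class balancing in the loss, and a 3D noise tensor sampled globally or locally for multi-modal synthesis and partial editing. The PyTorch sketch below is an illustration only, not the authors' implementation; all function names, tensor shapes, the exact weighting scheme, and the `region` mask interface are assumptions made for this sketch (the reference code is in the linked repository).

# A minimal, illustrative PyTorch sketch of the two mechanisms named in the
# abstract: (1) a segmentation-based discriminator trained with a per-pixel,
# class-balanced (N+1)-class cross-entropy, and (2) globally/locally sampled
# 3D noise. All names and shapes are assumptions, not the authors' code.
import torch
import torch.nn.functional as F

N_CLASSES = 35  # semantic classes; index N_CLASSES is the extra "fake" class

def class_balancing_weights(label_map):
    """Weight each class inversely to its pixel frequency in the batch,
    so rare classes contribute to the loss as much as frequent ones."""
    counts = torch.bincount(label_map.flatten(), minlength=N_CLASSES + 1).float()
    weights = label_map.numel() / (counts.clamp(min=1.0) * N_CLASSES)
    weights[counts == 0] = 0.0  # ignore classes absent from the batch
    return weights

def discriminator_loss(logits_real, logits_fake, label_map):
    """logits_*: (B, N_CLASSES + 1, H, W) per-pixel class scores;
    label_map: (B, H, W) integer semantic labels."""
    weights = class_balancing_weights(label_map)
    # Real pixels must be classified into their ground-truth semantic class.
    loss_real = F.cross_entropy(logits_real, label_map, weight=weights)
    # Synthesized pixels must be classified into the extra "fake" class.
    fake_target = torch.full_like(label_map, N_CLASSES)
    loss_fake = F.cross_entropy(logits_fake, fake_target)
    return loss_real + loss_fake

def generator_loss(logits_fake, label_map):
    """The generator wins when every synthesized pixel is classified as its
    intended semantic class: spatially- and semantically-aware feedback,
    with no perceptual loss involved."""
    weights = class_balancing_weights(label_map)
    return F.cross_entropy(logits_fake, label_map, weight=weights)

def sample_generator_input(label_map, z_dim=64, region=None):
    """Concatenate 3D noise with the one-hot label map channel-wise.
    Global sampling redraws the whole image; passing a binary `region`
    mask (a hypothetical interface) resamples the noise only inside it,
    enabling partial image editing."""
    B, H, W = label_map.shape
    z = torch.randn(B, z_dim, 1, 1).expand(B, z_dim, H, W)
    if region is not None:
        z_local = torch.randn(B, z_dim, 1, 1).expand(B, z_dim, H, W)
        z = torch.where(region.unsqueeze(1).bool(), z_local, z)
    one_hot = F.one_hot(label_map, num_classes=N_CLASSES).permute(0, 3, 1, 2).float()
    return torch.cat([z, one_hot], dim=1)

Under these assumptions, the generator consumes a (B, z_dim + N_CLASSES, H, W) tensor, and editing part of an image amounts to calling sample_generator_input with a mask over the region to be resampled.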
- Published in: International Journal of Computer Vision
- Type: Article
- Authors: Sushko, Vadim; Schönfeld, Edgar; Zhang, Dan; Gall, Jürgen; Schiele, Bernt; Khoreva, Anna
- Year: 2022
Citation information
Sushko, Vadim; Schönfeld, Edgar; Zhang, Dan; Gall, Jürgen; Schiele, Bernt; Khoreva, Anna: OASIS: Only Adversarial Supervision for Semantic Image Synthesis. International Journal of Computer Vision 130 (2022), 2903–2923. https://link.springer.com/article/10.1007/s11263-022-01673-x
@Article{Sushko.etal.2022a,
author={Sushko, Vadim and Schönfeld, Edgar and Zhang, Dan and Gall, Jürgen and Schiele, Bernt and Khoreva, Anna},
title={OASIS: Only Adversarial Supervision for Semantic Image Synthesis},
journal={International Journal of Computer Vision},
volume={130},
pages={2903--2923},
url={https://link.springer.com/article/10.1007/s11263-022-01673-x},
year={2022},
abstract={Despite their recent successes, generative adversarial networks (GANs) for semantic image synthesis still suffer from poor image quality when trained with only adversarial supervision. Previously, additionally employing the VGG-based perceptual loss has helped to overcome this issue, significantly improving the synthesis quality, but at the same time limited the progress of GAN models for semantic image synthesis. In this work, we propose a novel, simplified GAN model, which needs only adversarial supervision to achieve high quality results. We re-design the discriminator as a semantic segmentation network, directly using the given semantic label maps as the ground truth for training. By providing stronger supervision to the discriminator as well as to the generator through spatially- and semantically-aware discriminator feedback, we are able to synthesize images of higher fidelity and with a better alignment to their input label maps, making the use of the perceptual loss superfluous. Furthermore, we enable high-quality multi-modal image synthesis through global and local sampling of a 3D noise tensor injected into the generator, which allows complete or partial image editing. We show that images synthesized by our model are more diverse and follow the color and texture distributions of real images more closely. We achieve a strong improvement in image synthesis quality over prior state-of-the-art models across the commonly used ADE20K, Cityscapes, and COCO-Stuff datasets using only adversarial supervision. In addition, we investigate semantic image synthesis under severe class imbalance and sparse annotations, which are common aspects in practical applications but were overlooked in prior works. To this end, we evaluate our model on LVIS, a dataset originally introduced for long-tailed object recognition. We thereby demonstrate high performance of our model in the sparse and unbalanced data regimes, achieved by means of the proposed 3D noise and the ability of our discriminator to balance class contributions directly in the loss function. Our code and pretrained models are available at https://github.com/boschresearch/OASIS.}}