Printed from https://ideas.repec.org/h/spr/sprchp/978-3-030-45529-3_7.html

Domain Adaptation via Image to Image Translation

In: Domain Adaptation in Computer Vision with Deep Learning

Authors
  • Zak Murez

    (HRL Laboratories, LLC)

  • Soheil Kolouri

    (HRL Laboratories, LLC)

  • David Kriegman

    (University of California San Diego, Department of Computer Science & Engineering)

  • Ravi Ramamoorthi

    (University of California San Diego, Department of Computer Science & Engineering)

  • Kyungnam Kim

    (HRL Laboratories, LLC)

Abstract

Unsupervised Domain Adaptation (UDA) has recently attracted significant attention from the computer vision community. In this chapter, we review a general framework for UDA that allows deep neural networks trained on a source domain to be tested on a different target domain without requiring any training annotations in the target domain. UDA is a challenging problem, as it aims at overcoming the potentially large difference between the source and target data distributions, known as the "domain gap." Here we propose a general UDA algorithm that adds extra networks and losses to regularize the features extracted by the backbone encoder network. To this end, we propose the novel use of the recently proposed unpaired image-to-image translation framework to constrain the features extracted by the encoder network. We leverage three main ideas: (1) we require that the features extracted by the encoders are able to reconstruct the images in both domains, so the encoders provide pseudo-invertible nonlinear mappings; (2) we require that the distributions of features extracted from images in the two domains be indistinguishable in the encoders' output space (i.e., the latent space); (3) we require various cycle consistencies on the source and target encoders and decoders. Many recent works can be seen as specific cases of our general framework. We apply our method for domain adaptation between the MNIST, USPS, and SVHN datasets, and the Amazon, Webcam, and DSLR Office datasets, in classification tasks, and also between the GTA5 and Cityscapes datasets for a segmentation task. We demonstrate state-of-the-art performance on each of these datasets.
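The three ideas in the abstract can be sketched with toy linear encoders and decoders. This is an illustrative stand-in, not the chapter's actual convolutional networks: the encoder/decoder matrices, batch sizes, and the simple moment-matching surrogate used in place of an adversarial latent-alignment loss are all assumptions made for the sake of a minimal, runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear encoders/decoders standing in for the deep networks in the
# chapter (hypothetical; chosen only to make the three losses concrete).
d_img, d_lat = 8, 4
E_s = rng.normal(size=(d_lat, d_img))  # source encoder
E_t = rng.normal(size=(d_lat, d_img))  # target encoder
D_s = np.linalg.pinv(E_s)              # source decoder (pseudo-inverse)
D_t = np.linalg.pinv(E_t)              # target decoder

x_s = rng.normal(size=(16, d_img))     # batch of flattened source "images"
x_t = rng.normal(size=(16, d_img))     # batch of flattened target "images"

def mse(a, b):
    return float(np.mean((a - b) ** 2))

# (1) Reconstruction: features must reconstruct the images in both domains,
# so each encoder acts as a pseudo-invertible mapping.
z_s, z_t = x_s @ E_s.T, x_t @ E_t.T
loss_rec = mse(z_s @ D_s.T, x_s) + mse(z_t @ D_t.T, x_t)

# (2) Latent alignment: source and target features should be
# indistinguishable in the shared latent space. The chapter uses an
# adversarial discriminator; here a mean-matching term is a crude surrogate.
loss_align = mse(z_s.mean(axis=0), z_t.mean(axis=0))

# (3) Cycle consistency: translate source -> target via the latent space,
# then back again, and require the round trip to return the original image.
x_s2t = z_s @ D_t.T                    # source image rendered in target domain
x_s_back = (x_s2t @ E_t.T) @ D_s.T     # translated back to the source domain
loss_cyc = mse(x_s_back, x_s)

total = loss_rec + loss_align + loss_cyc
```

In the full framework these terms are weighted and minimized jointly with the task loss (classification or segmentation) on labeled source data, so the regularizers shape a latent space in which the source-trained task head transfers to the unlabeled target domain.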

Suggested Citation

  • Zak Murez & Soheil Kolouri & David Kriegman & Ravi Ramamoorthi & Kyungnam Kim, 2020. "Domain Adaptation via Image to Image Translation," Springer Books, in: Hemanth Venkateswara & Sethuraman Panchanathan (ed.), Domain Adaptation in Computer Vision with Deep Learning, chapter 0, pages 117-136, Springer.
  • Handle: RePEc:spr:sprchp:978-3-030-45529-3_7
    DOI: 10.1007/978-3-030-45529-3_7

