no code implementations • 13 Aug 2020 • Yichen Li, Xingchao Peng
Deep networks have been used to learn transferable representations for domain adaptation.
1 code implementation • ECCV 2020 • Xingchao Peng, Yichen Li, Kate Saenko
Extensive experiments are conducted to demonstrate the power of our new datasets in benchmarking state-of-the-art multi-source domain adaptation methods, as well as the advantage of our proposed model.
no code implementations • 10 Dec 2019 • Yichen Li, Xingchao Peng
Second, we propose the Prototypical Adversarial Domain Adaptation (PADA) model, which utilizes unlabeled bridge domains to align the feature distributions of source and target domains separated by a large discrepancy.
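The paper's PADA model is not reproduced here, but the "prototypical" ingredient can be illustrated with a minimal, hypothetical sketch: each class prototype is the mean feature vector of that class, and an unlabeled sample is assigned to its nearest prototype. All function names, shapes, and values below are illustrative assumptions, not the authors' implementation.

```python
import math

def class_prototypes(features, labels, num_classes):
    """Per-class mean feature vector (the class 'prototype')."""
    protos = []
    for c in range(num_classes):
        members = [f for f, y in zip(features, labels) if y == c]
        protos.append([sum(dim) / len(members) for dim in zip(*members)])
    return protos

def nearest_prototype(feature, prototypes):
    """Pseudo-label a sample by its closest prototype (Euclidean distance)."""
    dists = [math.dist(feature, p) for p in prototypes]
    return min(range(len(prototypes)), key=dists.__getitem__)

# Toy example: two well-separated classes in a 2-D feature space.
src_feats = [[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [4.8, 5.2]]
src_labels = [0, 0, 1, 1]
protos = class_prototypes(src_feats, src_labels, num_classes=2)

tgt_feats = [[0.1, 0.2], [5.1, 4.9]]
print([nearest_prototype(f, protos) for f in tgt_feats])  # → [0, 1]
```

In the full method these prototypes would interact with adversarial feature alignment across the bridge domains; the sketch only shows the prototype assignment step.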
no code implementations • ICLR 2020 • Xingchao Peng, Zijun Huang, Yizhe Zhu, Kate Saenko
In this work, we present a principled approach to the problem of federated domain adaptation, which aims to align the representations learned among the different nodes with the data distribution of the target node.
no code implementations • 23 Oct 2019 • Shuhan Tan, Xingchao Peng, Kate Saenko
Unsupervised domain adaptation is a promising way to generalize deep models to novel domains.
no code implementations • 25 Sep 2019 • Shuhan Tan, Xingchao Peng, Kate Saenko
In this paper, we explore the task of Generalized Domain Adaptation (GDA): How to transfer knowledge across different domains in the presence of both covariate and label shift?
1 code implementation • 28 Apr 2019 • Xingchao Peng, Zijun Huang, Ximeng Sun, Kate Saenko
Unsupervised model transfer has the potential to greatly improve the generalizability of deep models to novel domains.
Ranked #4 on Multi-target Domain Adaptation on DomainNet
3 code implementations • ICCV 2019 • Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, Bo Wang
Conventional unsupervised domain adaptation (UDA) assumes that training data are sampled from a single domain.
no code implementations • 27 Jul 2018 • Ulrich Viereck, Xingchao Peng, Kate Saenko, Robert Platt
This paper proposes an approach to domain transfer based on a pairwise loss function that helps transfer control policies learned in simulation onto a real robot.
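The paper's exact loss is not reproduced here; as a hedged illustration, a pairwise loss of this general flavor penalizes the distance between features of paired simulated and real observations, encouraging a policy trained in simulation to respond similarly to matching real inputs. The mean-squared-distance form below is a hypothetical sketch, not the authors' formulation.

```python
def pairwise_transfer_loss(sim_feats, real_feats):
    """Mean squared Euclidean distance between paired sim/real feature vectors.

    Pulling paired features together encourages a policy trained on
    simulated inputs to behave similarly on the matching real inputs.
    """
    assert len(sim_feats) == len(real_feats)
    total = 0.0
    for s, r in zip(sim_feats, real_feats):
        total += sum((a - b) ** 2 for a, b in zip(s, r))
    return total / len(sim_feats)

# Identical pairs incur zero loss; mismatched pairs are penalized.
print(pairwise_transfer_loss([[1.0, 2.0]], [[1.0, 2.0]]))  # → 0.0
print(pairwise_transfer_loss([[0.0, 0.0]], [[3.0, 4.0]]))  # → 25.0
```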
no code implementations • 26 Jun 2018 • Xingchao Peng, Ben Usman, Kuniaki Saito, Neela Kaushik, Judy Hoffman, Kate Saenko
In this paper, we present a new large-scale benchmark called Syn2Real, which consists of a synthetic domain rendered from 3D object models and two real-image domains containing the same object categories.
2 code implementations • 18 Oct 2017 • Xingchao Peng, Ben Usman, Neela Kaushik, Judy Hoffman, Dequan Wang, Kate Saenko
We present the 2017 Visual Domain Adaptation (VisDA) dataset and challenge, a large-scale testbed for unsupervised domain adaptation across visual domains.
no code implementations • 19 Jan 2017 • Xingchao Peng, Kate Saenko
Experimentally, we show training off-the-shelf classifiers on the newly generated data can significantly boost performance when testing on the real image domains (PASCAL VOC 2007 benchmark and Office dataset), improving upon several existing methods.
no code implementations • 14 Sep 2016 • Xingchao Peng, Kate Saenko
We present a novel approach to object classification and detection which requires minimal supervision and which combines visual texture cues and shape information learned from freely available unlabeled web search results.
no code implementations • 21 May 2016 • Xingchao Peng, Judy Hoffman, Stella X. Yu, Kate Saenko
We address the difficult problem of distinguishing fine-grained object categories in low resolution images.
no code implementations • 9 Apr 2015 • Xingchao Peng, Baochen Sun, Karim Ali, Kate Saenko
Deep convolutional neural networks learn extremely powerful image representations, yet most of that power is hidden in the millions of deep-layer parameters.
no code implementations • ICCV 2015 • Xingchao Peng, Baochen Sun, Karim Ali, Kate Saenko
Crowdsourced 3D CAD models are becoming easily accessible online, and can potentially generate an infinite number of training images for almost any object category. We show that augmenting the training data of contemporary Deep Convolutional Neural Net (DCNN) models with such synthetic data can be effective, especially when real training data is limited or not well matched to the target domain.