Generalization Bounds for Domain Adaptation

NeurIPS 2012  ·  Chao Zhang, Lei Zhang, Jieping Ye

In this paper, we present a new framework for deriving generalization bounds on the learning process for domain adaptation, and then apply the derived bounds to analyze the asymptotic convergence of the learning process. Without loss of generality, we consider two representative kinds of domain adaptation: one with multiple sources, and one combining source and target data. In particular, we use the integral probability metric to measure the difference between two domains. For each kind of domain adaptation, we develop a related Hoeffding-type deviation inequality and a symmetrization inequality to obtain the corresponding generalization bound based on the uniform entropy number. We also generalize the classical McDiarmid's inequality to a more general setting in which independent random variables can take values from different domains. Using this inequality, we then obtain generalization bounds based on the Rademacher complexity. Afterwards, we analyze the asymptotic convergence and the rate of convergence of the learning process for both kinds of domain adaptation. We also discuss the factors that affect the asymptotic behavior of the learning process, and our numerical experiments support the theoretical findings.
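The integral probability metric (IPM) used above to measure the difference between two domains has a well-known computable instance: taking the function class to be the unit ball of a reproducing-kernel Hilbert space gives the maximum mean discrepancy (MMD). As a rough illustration only (not the paper's own implementation), the NumPy sketch below estimates a biased squared MMD between a source and a target sample; the Gaussian-kernel bandwidth `sigma` is a hypothetical choice.

```python
import numpy as np

def mmd_squared(X, Y, sigma=1.0):
    """Biased estimate of squared MMD with a Gaussian kernel.

    MMD is the IPM obtained by restricting the witness functions
    to the unit ball of the kernel's RKHS.
    """
    def k(A, B):
        # Pairwise squared Euclidean distances via broadcasting,
        # then the Gaussian (RBF) kernel.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 2))   # source-domain sample
tgt = rng.normal(1.5, 1.0, size=(200, 2))   # mean-shifted target sample
same = rng.normal(0.0, 1.0, size=(200, 2))  # sample from the source law

print(mmd_squared(src, tgt))   # clearly positive: domains differ
print(mmd_squared(src, same))  # near zero: same distribution
```

The biased (V-statistic) form used here is always nonnegative, which makes it convenient for quick checks; an unbiased variant would drop the diagonal kernel terms.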
