Domain adaptation is the task of adapting a model trained on a labeled source domain so that it performs well on a different, often unlabeled, target domain.
We propose using high-level semantic and contextual features, including segmentation and detection masks obtained from off-the-shelf state-of-the-art vision models, as observations, and using a deep network to learn the navigation policy.
Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks.
In this paper, we propose a new loss function called generalized end-to-end (GE2E) loss, which makes the training of speaker verification models more efficient than our previous tuple-based end-to-end (TE2E) loss function.
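The core of the GE2E loss is a softmax over scaled cosine similarities between each utterance embedding and every speaker centroid, where an utterance is excluded from its own speaker's centroid. The sketch below is a minimal pure-Python illustration of that computation, not the paper's implementation; the function names, the list-of-lists layout, and the default scale parameters `w` and `b` are ours.

```python
import math

def cos(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def centroid(vectors):
    """Component-wise mean of a list of vectors."""
    n = len(vectors)
    return [sum(v[d] for v in vectors) / n for d in range(len(vectors[0]))]

def ge2e_softmax_loss(embeddings, w=10.0, b=-5.0):
    """embeddings[j][i] = embedding of utterance i of speaker j.

    Each utterance's similarity to its own speaker's centroid is computed
    with that utterance left out; the loss is the summed softmax
    cross-entropy over similarities to all speaker centroids.
    """
    centroids = [centroid(utts) for utts in embeddings]
    loss = 0.0
    for j, utts in enumerate(embeddings):
        for i, e in enumerate(utts):
            sims = []
            for k, c in enumerate(centroids):
                if k == j:
                    # leave-one-out centroid for the utterance's own speaker
                    c = centroid([u for m, u in enumerate(utts) if m != i])
                sims.append(w * cos(e, c) + b)
            log_z = math.log(sum(math.exp(s) for s in sims))
            loss += -(sims[j] - log_z)
    return loss
```

Well-separated speaker embeddings yield a small loss; embeddings whose speakers overlap yield a large one, which is what drives training toward compact, separated speaker clusters.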
Existing methods either attempt to align the cross-domain distributions, or perform manifold subspace learning.
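A common instance of the distribution-alignment family is to minimize a statistical distance between source and target features, such as the maximum mean discrepancy (MMD). Below is a minimal pure-Python sketch of a biased squared-MMD estimate with a Gaussian kernel; the function names and the fixed bandwidth `sigma` are our assumptions, not any specific paper's method.

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel between two vectors."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2 * sigma ** 2))

def mmd2(source, target, sigma=1.0):
    """Biased estimate of squared MMD between two samples of vectors."""
    def mean_k(xs, ys):
        total = sum(gaussian_kernel(x, y, sigma) for x in xs for y in ys)
        return total / (len(xs) * len(ys))
    return mean_k(source, source) + mean_k(target, target) - 2 * mean_k(source, target)
```

In an adaptation pipeline this quantity is computed on network features from the two domains and added to the task loss, so that minimizing it pulls the two feature distributions together.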
We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks.
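The re-purposing recipe amounts to: run inputs through a frozen pretrained network, keep an intermediate activation as a feature vector, and fit a simple classifier on top. A toy sketch of that pattern follows; the `extract_features` function here is a stand-in for a frozen deep network, and the nearest-centroid probe is our simplification, not the evaluation protocol of the paper.

```python
def extract_features(x):
    # stand-in for a frozen pretrained network's intermediate activations
    return [x[0] + x[1], x[0] - x[1]]

def fit_centroids(data, labels):
    """Fit a nearest-centroid probe on top of the frozen features."""
    by_label = {}
    for x, y in zip(data, labels):
        by_label.setdefault(y, []).append(extract_features(x))
    return {y: [sum(f[d] for f in fs) / len(fs) for d in range(len(fs[0]))]
            for y, fs in by_label.items()}

def predict(centroids, x):
    """Assign the label whose feature centroid is nearest to x's features."""
    f = extract_features(x)
    return min(centroids,
               key=lambda y: sum((a - b) ** 2 for a, b in zip(f, centroids[y])))
```

The point of the design is that only the cheap probe is trained per task, while the expensive feature extractor is reused unchanged across tasks.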
Though designed for decaNLP, MQAN also achieves state of the art results on the WikiSQL semantic parsing task in the single-task setting.
We study the problem of transferring a sample in one domain to an analog sample in another domain.
Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains.