In recent years, applications of machine learning models have increased rapidly, driven by the large amount of available data and by technological progress. While some domains, such as web analysis, can benefit from this with only minor restrictions, other fields, such as medicine with its patient data, are much more strongly regulated.
We address the first issue by aligning transferable spectral properties within an adversarial model, balancing the focus between easily transferable features and the necessary discriminative features, while at the same time limiting the learning of domain-specific semantics through relevance considerations.
As an information preserving alternative, we propose a complex-valued vector embedding of proximity data.
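The key property of such a complex-valued embedding is that it preserves the full information of an indefinite (non-PSD) similarity matrix: the square roots of negative eigenvalues become imaginary coordinates, and the matrix is recovered exactly by a plain (non-conjugated) transpose product. A minimal sketch of this idea, on a hypothetical toy similarity matrix (variable names are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical symmetric but indefinite similarity matrix (toy data).
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
S = (A + A.T) / 2.0

# Eigendecomposition S = V diag(w) V^T; w may contain negative values.
w, V = np.linalg.eigh(S)

# Complex-valued embedding X = V * sqrt(w): square roots of negative
# eigenvalues are imaginary, so no spectral information is discarded.
X = V * np.sqrt(w.astype(complex))

# Information preservation: X @ X.T (plain transpose, no conjugation)
# reconstructs S exactly.
assert np.allclose(X @ X.T, S)
```

In contrast, eigenvalue clipping or flipping would alter the negative part of the spectrum and lose information; the complex embedding keeps it.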
Transfer learning is focused on the reuse of supervised learning models in a new context.
The amount of real-time communication between agents in an information system has increased rapidly since the beginning of the decade.
The presented approach finds a target subspace representation for source and target data, addressing domain differences by orthogonal basis transfer.
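Basis transfer between subspaces can be illustrated with a classic subspace-alignment construction: compute orthogonal PCA bases for source and target, then map source coordinates into the target subspace via the product of the two bases. This is a generic sketch of that family of methods, not the paper's exact algorithm; all names and the alignment rule are assumptions:

```python
import numpy as np

def pca_basis(X, d):
    """Return the top-d principal directions (columns) of centered data X."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are right singular vectors = principal axes.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T

rng = np.random.default_rng(1)
Xs = rng.normal(size=(100, 10))        # toy source data
Xt = rng.normal(size=(80, 10)) + 0.5   # toy target data with a domain shift

d = 3
Bs, Bt = pca_basis(Xs, d), pca_basis(Xt, d)

# Orthogonal alignment: M = Bs^T Bt maps source-subspace coordinates
# into the target subspace.
M = Bs.T @ Bt

Zs = (Xs - Xs.mean(axis=0)) @ Bs @ M   # source, expressed in the target subspace
Zt = (Xt - Xt.mean(axis=0)) @ Bt       # target, in its own subspace
```

After this step, a classifier trained on `Zs` can be applied to `Zt`, since both now live in the same d-dimensional target coordinate system.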
Current supervised learning models cannot generalize well across domain boundaries, which is a known problem in many applications, such as robotics or visual classification.
Indefinite similarity measures frequently arise in bioinformatics in the form of alignment scores, but are also common in other fields, for example as shape measures in image retrieval.
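A standard way to handle such indefinite similarity matrices in kernel methods is eigenvalue correction: clip negative eigenvalues to zero, or flip their sign. A minimal sketch of both corrections on a hypothetical toy matrix (the function name and toy data are assumptions for illustration):

```python
import numpy as np

def correct_spectrum(S, mode="clip"):
    """Make a symmetric similarity matrix PSD by eigenvalue correction."""
    w, V = np.linalg.eigh(S)
    if mode == "clip":        # zero out negative eigenvalues
        w = np.maximum(w, 0.0)
    elif mode == "flip":      # replace eigenvalues by their absolute values
        w = np.abs(w)
    else:
        raise ValueError(f"unknown mode: {mode}")
    return (V * w) @ V.T

rng = np.random.default_rng(2)
A = rng.normal(size=(5, 5))
S = (A + A.T) / 2.0                 # symmetric, typically indefinite

S_clip = correct_spectrum(S, "clip")
S_flip = correct_spectrum(S, "flip")
```

Note that both corrections discard or distort part of the spectrum, which is exactly the information loss the complex-valued embedding mentioned above avoids.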