Unsupervised domain adaptation studies the problem of utilizing a relevant source domain with abundant labels to build predictive models for an unannotated target domain.
We also investigate the impact of dense connections, which encourage feature reuse and improve gradient flow, on the feature extraction process.
State-of-the-art speaker diarization systems utilize knowledge from external data, in the form of a pre-trained distance metric, to effectively determine the relative speaker identities of unseen data.
Though deep network embeddings, e.g., DeepWalk, are widely adopted for community discovery, we argue that feature learning with random node attributes, using graph neural networks, can be more effective.
Inference with network data requires mapping its nodes into a vector space in which their relationships are preserved.
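As a minimal illustration of such a relationship-preserving mapping, the sketch below embeds a toy graph with a spectral method (eigenvectors of the symmetric normalized Laplacian), so that connected nodes land near each other in the vector space. This is an assumed, illustrative technique, not a method proposed in the text; all names are hypothetical.

```python
import numpy as np

def spectral_embedding(A, dim=2):
    """Map graph nodes to vectors using the bottom non-trivial eigenvectors
    of the symmetric normalized Laplacian; connected nodes land nearby."""
    deg = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    # L = I - D^{-1/2} A D^{-1/2}
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    return vecs[:, 1:dim + 1]           # skip the trivial constant eigenvector

# Toy graph: two triangles (nodes 0-2 and 3-5) joined by one bridge edge.
A = np.zeros((6, 6))
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

Z = spectral_embedding(A, dim=2)
# Nodes in the same triangle end up closer than nodes across the bridge.
```

The within-community versus cross-community distances in `Z` reflect the graph's edge structure, which is the sense in which "relationships are preserved."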
In automatic speech processing systems, speaker diarization is a crucial front-end component to separate segments from different speakers.
To this end, we develop the DKMO (Deep Kernel Machine Optimization) framework, which creates an ensemble of dense embeddings using Nyström kernel approximations and utilizes deep learning to generate task-specific representations through the fusion of those embeddings.
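To make the "dense embeddings from Nyström kernel approximations" step concrete, here is a minimal NumPy sketch: it builds a feature map Phi from a set of landmark points such that Phi @ Phi.T approximates the full kernel matrix. The RBF kernel, the landmark-selection scheme, and all function names are illustrative assumptions, not the DKMO implementation.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Gaussian (RBF) kernel between rows of X and rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_embedding(X, landmarks, gamma=0.5):
    """Dense embedding Phi such that Phi @ Phi.T ~= the full kernel matrix."""
    C = rbf_kernel(X, landmarks, gamma)          # n x m cross-kernel
    W = rbf_kernel(landmarks, landmarks, gamma)  # m x m landmark kernel
    vals, vecs = np.linalg.eigh(W)               # eigendecomposition of W
    vals = np.maximum(vals, 1e-12)               # guard against tiny negatives
    # Phi = C V Lambda^{-1/2}, so Phi Phi^T = C W^{-1} C^T (Nystrom formula).
    return C @ vecs / np.sqrt(vals)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
landmarks = X[rng.choice(100, size=20, replace=False)]

Phi = nystrom_embedding(X, landmarks)            # 100 x 20 dense embedding
K_approx = Phi @ Phi.T
```

In a DKMO-style pipeline, several such embeddings (different kernels or landmark sets) would be fed to a deep network that fuses them into a task-specific representation.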
With the widespread adoption of electronic health records, there is an increased emphasis on predictive models that can effectively deal with clinical time-series data.
Kernel fusion is a popular and effective approach for combining multiple features that characterize different aspects of data.
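A simple and common instance of kernel fusion is a convex combination of base kernel matrices, each computed from a different feature view; the weighted sum of valid kernels is again a valid (positive semidefinite) kernel. The sketch below is an assumed, minimal example; the base kernels and weights are illustrative.

```python
import numpy as np

def linear_kernel(X):
    return X @ X.T

def rbf_kernel(X, gamma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fuse_kernels(kernels, weights):
    """Convex combination of base kernels; the result is again a valid kernel."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                     # normalize to a convex combination
    return sum(wi * K for wi, K in zip(w, kernels))

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
# Fuse a linear and an RBF kernel, capturing different aspects of the data.
K = fuse_kernels([linear_kernel(X), rbf_kernel(X)], [0.3, 0.7])
```

The fused matrix `K` can be plugged into any kernel method (e.g. an SVM or kernel ridge regression) in place of a single-view kernel.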