Recent progress in network-based audio event classification has shown the benefit of pre-training models on visual data such as ImageNet.
We show, both theoretically and experimentally, that the VAE ensemble objective encourages the linear transformations connecting the VAEs to be trivial, aligning the latent representations of the different models so that they are "alike".
In this work, we propose a visual analytics system, VATLD, equipped with disentangled representation learning and semantic adversarial learning, to assess, understand, and improve the accuracy and robustness of traffic light detectors in autonomous driving applications.
We propose a measure to compute class similarity in large-scale classification based on prediction scores.
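The abstract does not spell out the measure itself; one minimal sketch, assuming the similarity between two classes is taken as the cosine similarity between their per-class mean prediction-score vectors (the function and variable names below are illustrative, not from the paper):

```python
import numpy as np

def class_similarity(scores, labels, num_classes):
    """Cosine similarity between per-class mean prediction-score vectors.

    scores: (n_samples, num_classes) softmax outputs
    labels: (n_samples,) ground-truth class indices
    Returns a symmetric (num_classes, num_classes) similarity matrix.
    """
    centroids = np.zeros((num_classes, num_classes))
    for c in range(num_classes):
        centroids[c] = scores[labels == c].mean(axis=0)
    unit = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    return unit @ unit.T

# toy example: class 0 is well separated, classes 1 and 2 are confusable
scores = np.array([[0.90, 0.05, 0.05],
                   [0.80, 0.10, 0.10],
                   [0.10, 0.50, 0.40],
                   [0.10, 0.45, 0.45],
                   [0.05, 0.40, 0.55],
                   [0.10, 0.35, 0.55]])
labels = np.array([0, 0, 1, 1, 2, 2])
S = class_similarity(scores, labels, 3)
```

Classes whose score mass consistently leaks into each other end up with high off-diagonal similarity, which is what makes such a measure usable for grouping confusable classes at scale.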
Unsupervised domain adaptation studies the problem of utilizing a relevant source domain with abundant labels to build predictive models for an unannotated target domain.
Recent advancements in unsupervised disentangled representation learning focus on extending the variational autoencoder (VAE) with an augmented objective function to balance the trade-off between disentanglement and reconstruction.
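The canonical example of such an augmented objective is the β-VAE loss, which weights the KL term by a factor β > 1; a minimal numpy sketch, assuming a Gaussian encoder and squared-error reconstruction (constants of the Gaussian likelihood dropped):

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """beta-VAE objective: reconstruction error plus a beta-weighted
    KL divergence between q(z|x) = N(mu, diag(exp(log_var))) and N(0, I).
    beta > 1 pressures disentanglement at the cost of reconstruction."""
    recon = np.sum((x - x_recon) ** 2)
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return recon + beta * kl

x = np.ones(4)
# perfect reconstruction with the posterior equal to the prior: zero loss
loss0 = beta_vae_loss(x, x, np.zeros(2), np.zeros(2))
# shifting the posterior mean incurs a KL penalty scaled by beta
loss1 = beta_vae_loss(x, x, np.ones(2), np.zeros(2), beta=4.0)
```

Raising β tightens the pressure toward the factorized prior, which is exactly the disentanglement/reconstruction trade-off the abstract refers to.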
In this paper, we present a neural summarization model that, by learning from single human abstracts, can produce a broad spectrum of summaries ranging from purely extractive to highly generative ones.
We enhance an existing incremental PCA method in several ways to ensure its usability for visualizing streaming multidimensional data.
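The specific enhancements are not detailed in this summary; the underlying idea of incremental PCA for streaming data can be sketched as maintaining a running mean and covariance over incoming batches and re-deriving the projection from the top eigenvectors on demand (a minimal numpy sketch, with illustrative names, not the paper's method):

```python
import numpy as np

class StreamingPCA:
    """Minimal incremental PCA: accumulate mean/covariance over streamed
    batches (Welford-style updates), project onto the top-k eigenvectors."""

    def __init__(self, dim, k=2):
        self.n = 0
        self.mean = np.zeros(dim)
        self.cov_sum = np.zeros((dim, dim))
        self.k = k

    def partial_fit(self, batch):
        for x in batch:
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.cov_sum += np.outer(delta, x - self.mean)

    def transform(self, X):
        cov = self.cov_sum / max(self.n - 1, 1)
        eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
        components = eigvecs[:, ::-1][:, :self.k]   # top-k directions
        return (X - self.mean) @ components

rng = np.random.default_rng(0)
pca = StreamingPCA(dim=5, k=2)
for _ in range(10):                 # data arrives in small batches
    pca.partial_fit(rng.normal(size=(20, 5)))
Y = pca.transform(rng.normal(size=(4, 5)))
```

Because the model is updated per sample and the projection is recomputed only when a new frame is drawn, this structure suits visualizing multidimensional data as it streams in.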
We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation to CNN-internal data.