However, while acoustic models face similar challenges due to distribution shifts in test-time speech, test-time adaptation (TTA) techniques specifically designed for acoustic modeling under open-world data shifts remain scarce.
Large Language Models (LLMs) can perform In-context Learning (ICL) by conditioning on a few demonstrations of a new downstream task.
To address this gap, we propose the notion of edge balance to measure the proportion of edges connecting different demographic groups within clusters.
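The edge-balance idea can be sketched as a simple graph statistic. The function below is an illustrative definition, assuming edge balance means the fraction of intra-cluster edges whose endpoints belong to different demographic groups; the paper's exact formula may differ.

```python
def edge_balance(edges, cluster, group):
    """Fraction of intra-cluster edges linking different demographic groups.

    edges:   iterable of (u, v) node pairs
    cluster: dict mapping node -> cluster label
    group:   dict mapping node -> demographic group
    """
    intra = cross = 0
    for u, v in edges:
        if cluster[u] == cluster[v]:      # only count edges inside a cluster
            intra += 1
            if group[u] != group[v]:      # endpoints from different groups
                cross += 1
    return cross / intra if intra else 0.0
```

A balance near 1 indicates clusters whose internal connectivity mixes groups; near 0 indicates clusters segregated by group.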
Many research efforts have been devoted to unsupervised domain adaptation (DA), which transfers knowledge learned from a labeled source domain to an unlabeled target domain.
Generalized Zero-Shot Learning (GZSL) and Open-Set Recognition (OSR) are two mainstream settings that greatly extend conventional visual object recognition.
To fill this gap, we start with the simple graph convolution (SGC) model operating on an attributed graph and formulate an influence function to approximate the change in model parameters when a node or an edge is removed from the graph.
Experimentally, we observe that CFC is highly robust to the proposed attack and is thus a truly robust fair clustering alternative.
In this paper, we consider a novel research problem: music-to-text synaesthesia.
Later, the trained encoder is frozen as a teacher model to distill a student model with a contrastive loss.
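The frozen-teacher distillation step can be illustrated with a generic InfoNCE-style objective: each student embedding is trained to match the frozen teacher's embedding of the same sample against the other samples in the batch. This is a minimal NumPy sketch of such a contrastive distillation loss, not the paper's exact formulation.

```python
import numpy as np

def contrastive_distill_loss(student_z, teacher_z, tau=0.1):
    """InfoNCE-style distillation: positives are (student_i, teacher_i) pairs.

    student_z, teacher_z: (batch, dim) embeddings; the teacher is frozen,
    so gradients would flow only through student_z in a real framework.
    """
    s = student_z / np.linalg.norm(student_z, axis=1, keepdims=True)
    t = teacher_z / np.linalg.norm(teacher_z, axis=1, keepdims=True)
    logits = s @ t.T / tau                    # batch x batch similarities
    # cross-entropy with the diagonal (matching pairs) as targets
    logits = logits - logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))
```

When student and teacher embeddings agree, the diagonal dominates the similarity matrix and the loss approaches zero.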
META consists of Positional Encoding, Transformer-based Autoencoder, and Multi-task Prediction to learn effective representations for both migration prediction and rating prediction.
In this paper, we revisit the concept of visual words and propose the Learnable Visual Words (LVW) to interpret the model prediction behaviors with two novel modules: semantic visual words learning and dual fidelity preservation.
Recently, contrastive-learning-based augmentation has surged in the computer vision domain, where operations such as rotation, cropping, and flipping, combined with dedicated algorithms, dramatically improve model generalization and robustness.
We consider the object recognition problem in autonomous driving using automotive radar sensors.
With the rapid development of algorithmic governance, fairness has become an essential property of machine learning models for suppressing unintentional discrimination.
Partial Domain Adaptation (PDA) addresses the unsupervised domain adaptation problem where the target label space is a subset of the source label space.
To achieve this, we propose a machine learning approach that adapts the editorial style derived from a few exemplars to a query code snippet.
Unsupervised feature selection aims to select the subset of the original features that is most useful for downstream tasks, without external guidance information.
By employing score-based outlier detectors for initialization, iPOF updates each data point's outlier score by averaging the outlier factors of its nearest common neighbors.
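The neighbor-averaging update can be sketched as an iterative score refinement. The function below assumes initial scores come from any score-based detector and that each point's common-neighbor index set is precomputed; these details are illustrative assumptions, not the exact iPOF algorithm.

```python
import numpy as np

def ipof_style_update(scores, neighbors, n_iter=3):
    """Refine outlier scores by averaging over each point's neighbors.

    scores:    initial outlier scores from a base detector, shape (n,)
    neighbors: list where neighbors[i] holds the indices of point i's
               nearest common neighbors (may be empty)
    """
    scores = np.asarray(scores, dtype=float)
    for _ in range(n_iter):
        scores = np.array([
            scores[nb].mean() if len(nb) else scores[i]  # keep score if isolated
            for i, nb in enumerate(scores is not None and neighbors)
        ])
    return scores
```

Each iteration smooths a point's score toward those of its neighborhood, so isolated high scores in dense regions are damped while consistent outlier neighborhoods reinforce each other.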
In this paper, we present a probabilistic ordinary differential equation (ODE), called STochastic boundaRy ODE (STRODE), that learns both the timings and the dynamics of time series data without requiring any timing annotations during training.
For downstream usage, we propose a novel modality-adaptive attention mechanism for multimodal feature fusion by adaptively emphasizing language and vision signals.
In this paper, we propose a novel framework to accurately identify the seen categories in the target domain and effectively recover the semantic attributes of unseen categories.
In this paper, we propose a Graph-Graph Similarity Network to tackle the graph classification problem by constructing a SuperGraph through learning the relationships among graphs.
Unsupervised domain adaptation aims to transfer task-related knowledge from a labeled source domain to an unlabeled target domain.
Consensus clustering fuses diverse basic partitions (i.e., clustering results obtained from conventional clustering methods) into an integrated one, which has attracted increasing attention in both academic and industrial areas due to its robust and effective performance.
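A common way to fuse basic partitions is evidence accumulation via a co-association matrix; the sketch below uses that standard approach for illustration, and is not the specific fusion method of the paper. Points are merged when they co-cluster in more than a threshold fraction of the basic partitions.

```python
import numpy as np

def consensus_cluster(partitions, threshold=0.5):
    """Fuse basic partitions via a co-association matrix.

    partitions: list of label arrays, each of length n (one basic partition)
    Returns consensus labels: connected components of the graph where i, j
    are linked if they co-cluster in > threshold of the partitions.
    """
    P = np.asarray(partitions)
    n = P.shape[1]
    # co[i, j] = fraction of basic partitions placing i and j together
    co = np.mean(P[:, :, None] == P[:, None, :], axis=0)
    labels = -np.ones(n, dtype=int)
    cur = 0
    for i in range(n):
        if labels[i] >= 0:
            continue
        stack, labels[i] = [i], cur
        while stack:                       # flood-fill one component
            u = stack.pop()
            for v in range(n):
                if labels[v] < 0 and co[u, v] > threshold:
                    labels[v] = cur
                    stack.append(v)
        cur += 1
    return labels
```

Because only co-membership matters, the consensus is invariant to how each basic partition happens to name its clusters.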
To address this problem, we propose a simple yet effective method for improving stochastic gradient methods named predictive local smoothness (PLS).