Search Results for author: Jiawei Ge

Found 8 papers, 1 paper with code

Query-Based Knowledge Sharing for Open-Vocabulary Multi-Label Classification

no code implementations • 2 Jan 2024 • Xuelin Zhu, Jian Liu, Dongqi Tang, Jiawei Ge, Weijia Liu, Bo Liu, Jiuxin Cao

Identifying labels that did not appear during training, known as multi-label zero-shot learning, is a non-trivial task in computer vision.

Tasks: Knowledge Distillation • Multi-Label Classification +1

Beyond Visual Cues: Synchronously Exploring Target-Centric Semantics for Vision-Language Tracking

no code implementations • 28 Nov 2023 • Jiawei Ge, Xiangmei Chen, Jiuxin Cao, Xuelin Zhu, Bo Liu

However, current VL trackers have not fully exploited the power of VL learning: they rely heavily on off-the-shelf backbones for feature extraction, use ineffective VL fusion designs, and lack VL-related loss functions.

Tasks: Object Tracking • Representation Learning

Maximum Likelihood Estimation is All You Need for Well-Specified Covariate Shift

no code implementations • 27 Nov 2023 • Jiawei Ge, Shange Tang, Jianqing Fan, Cong Ma, Chi Jin

This paper addresses this fundamental question by proving that, surprisingly, classical Maximum Likelihood Estimation (MLE) using source data alone (without any modification) achieves minimax optimality for covariate shift under the well-specified setting (a sketch of the estimator follows this entry).

Tasks: Regression • Retrieval
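
To make the statement above concrete, here is a hedged sketch of the setup (notation is mine, not necessarily the paper's): the learner fits a conditional model on labeled source samples only,

```latex
\hat{\theta}_{\mathrm{MLE}}
  = \arg\max_{\theta \in \Theta}
    \frac{1}{n} \sum_{i=1}^{n} \log p_{\theta}(y_i \mid x_i),
\qquad (x_i, y_i) \overset{\mathrm{iid}}{\sim} P_{\mathrm{source}}.
```

Well-specified means the true conditional $p_{\theta^\star}(y \mid x)$ lies in the model class $\{p_\theta\}$; since covariate shift alters only the marginal distribution of $x$ while $y \mid x$ is unchanged, the fitted conditional carries over to the target domain.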

UTOPIA: Universally Trainable Optimal Prediction Intervals Aggregation

no code implementations • 28 Jun 2023 • Jianqing Fan, Jiawei Ge, Debarghya Mukherjee

Uncertainty quantification for prediction is an intriguing problem with significant applications in various fields, such as biomedical science, economic studies, and weather forecasts.

Tasks: Prediction Intervals • Uncertainty Quantification

On the Provable Advantage of Unsupervised Pretraining

no code implementations • 2 Mar 2023 • Jiawei Ge, Shange Tang, Jianqing Fan, Chi Jin

Unsupervised pretraining, which learns a useful representation using a large amount of unlabeled data to facilitate the learning of downstream tasks, is a critical component of modern large-scale machine learning systems.

Tasks: Contrastive Learning • Representation Learning

Two-Stream Transformer for Multi-Label Image Classification

1 code implementation • ACMMM 2022 • Xuelin Zhu, Jiuxin Cao, Jiawei Ge, Weijia Liu, Bo Liu

Specifically, in each layer of TSFormer, a cross-modal attention module is developed to aggregate visual features from the spatial stream into the semantic stream and update label semantics via a residual connection (a minimal sketch follows this entry).

Tasks: Classification • Multi-Label Image Classification +1
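
To illustrate the cross-modal attention step described above, here is a minimal, hypothetical PyTorch sketch (not the authors' released TSFormer code; all dimensions and names are illustrative): label-semantic tokens act as queries over spatial visual tokens, and the attended output updates the label semantics through a residual connection.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Label-semantic tokens (queries) attend over spatial visual tokens
    (keys/values); the output updates label semantics via a residual path."""

    def __init__(self, dim: int, num_heads: int = 8) -> None:
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, label_tokens: torch.Tensor,
                visual_tokens: torch.Tensor) -> torch.Tensor:
        # Aggregate visual features from the spatial stream into the semantic stream.
        attended, _ = self.attn(label_tokens, visual_tokens, visual_tokens)
        # Update label semantics with a residual connection, then normalize.
        return self.norm(label_tokens + attended)

# Toy usage: batch of 2, 20 label tokens, 49 (7x7) visual tokens, width 512.
labels = torch.randn(2, 20, 512)
visual = torch.randn(2, 49, 512)
print(CrossModalAttention(dim=512)(labels, visual).shape)  # torch.Size([2, 20, 512])
```

The residual update lets each layer refine the label embeddings with image evidence while preserving the original semantics, which is the role the excerpt attributes to this module.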
