no code implementations • 31 Jan 2025 • Han Yu, Jiashuo Liu, Hao Zou, Renzhe Xu, Yue He, Xingxuan Zhang, Peng Cui
Then we develop Manifold Compactness based error Slice Discovery (MCSD), a novel algorithm that directly treats risk and coherence as optimization objectives and can be flexibly applied to models for various tasks.
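As a rough illustration of the slice-discovery idea (not the paper's actual MCSD objective), the Python sketch below grows an error slice that trades off risk (per-sample errors) against coherence, approximated here by neighborhood overlap in an embedding space; the inputs `embeddings` and `errors` and the greedy scoring rule are assumptions made for the example.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def greedy_error_slice(embeddings, errors, slice_size=50, k=10, lam=1.0):
        # Grow a slice that is both high-risk (many errors) and compact on the
        # data manifold (samples are near each other in embedding space).
        nn = NearestNeighbors(n_neighbors=k + 1).fit(embeddings)
        _, knn = nn.kneighbors(embeddings)          # row i: sample i and its k neighbors
        slice_idx = [int(np.argmax(errors))]        # seed with the highest-error sample
        while len(slice_idx) < slice_size:
            chosen = set(slice_idx)
            candidates = set(knn[slice_idx].ravel()) - chosen  # stay on the local manifold
            if not candidates:
                break
            # Score = candidate's error + lam * overlap of its neighborhood with the slice.
            best = max(candidates,
                       key=lambda i: errors[i] + lam * len(set(knn[i]) & chosen) / k)
            slice_idx.append(int(best))
        return np.array(slice_idx)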
no code implementations • 22 Mar 2024 • Renzhe Xu, Haotian Wang, Xingxuan Zhang, Bo Li, Peng Cui
We introduce the Proportional Payoff Allocation Game (PPA-Game) to model how agents, akin to content creators on platforms like YouTube and TikTok, compete for divisible resources and consumers' attention.
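A minimal Python sketch of a proportional payoff rule in this spirit is given below; it assumes that agents picking the same resource split its value in proportion to their weights, and the function name and exact specification are illustrative rather than the paper's formal PPA-Game definition.

    import numpy as np

    def proportional_payoffs(choices, weights, resource_values):
        # choices[i]: index of the resource picked by agent i
        # weights[i]: agent i's weight (e.g., content quality)
        # Agents sharing a resource split its value proportionally to their weights.
        choices = np.asarray(choices)
        payoffs = np.zeros(len(choices))
        for j, value in enumerate(resource_values):
            pickers = np.where(choices == j)[0]
            if len(pickers) == 0:
                continue
            payoffs[pickers] = value * weights[pickers] / weights[pickers].sum()
        return payoffs

    # Example: creators 0 and 2 compete on topic 0; creator 1 takes topic 1 alone.
    print(proportional_payoffs([0, 1, 0], np.array([2.0, 1.0, 1.0]), [6.0, 4.0]))  # [4. 4. 2.]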
no code implementations • 4 Mar 2024 • Han Yu, Jiashuo Liu, Xingxuan Zhang, Jiayun Wu, Peng Cui
In closing, we propose several promising directions for future research in OOD evaluation.
1 code implementation • 4 Mar 2024 • Jieren Deng, Haojian Zhang, Kun Ding, Jianhua Hu, Xingxuan Zhang, Yunkuan Wang
This paper presents Incremental Vision-Language Object Detection (IVLOD), a novel learning task designed to incrementally adapt pre-trained Vision-Language Object Detection Models (VLODMs) to various specialized domains, while simultaneously preserving their zero-shot generalization capabilities for the generalized domain.
no code implementations • 9 Feb 2024 • Xingxuan Zhang, Jiansheng Li, Wenjing Chu, Junjia Hai, Renzhe Xu, Yuqing Yang, Shikai Guan, Jiazheng Xu, Peng Cui
We investigate the generalization boundaries of current Multimodal Large Language Models (MLLMs) via comprehensive evaluation under out-of-distribution scenarios and domain-specific tasks.
2 code implementations • 28 Sep 2023 • Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans.
Ranked #3 on Multi-Label Text Classification on CC3M-TagMask
1 code implementation • 31 Aug 2023 • Shuai Bai, Shusheng Yang, Jinze Bai, Peng Wang, Xingxuan Zhang, Junyang Lin, Xinggang Wang, Chang Zhou, Jingren Zhou
Large vision-language models (LVLMs) have recently witnessed rapid advancements, exhibiting a remarkable capacity for perceiving, understanding, and processing visual information by connecting a visual receptor with large language models (LLMs).
no code implementations • ICCV 2023 • Xingxuan Zhang, Renzhe Xu, Han Yu, Yancheng Dong, Pengfei Tian, Peng Cui
However, we reveal that Adam is not necessarily the optimal choice for the majority of current DG methods and datasets.
1 code implementation • 30 May 2023 • Renzhe Xu, Haotian Wang, Xingxuan Zhang, Bo Li, Peng Cui
In reality, agents often have to learn and maximize the rewards of the resources at the same time.
no code implementations • 25 May 2023 • Zheyan Shen, Han Yu, Peng Cui, Jiashuo Liu, Xingxuan Zhang, Linjun Zhou, Furui Liu
Moreover, we propose a Meta Adaptive Task Sampling (MATS) procedure to differentiate base tasks according to their semantic and domain-shift similarity to the novel task.
1 code implementation • CVPR 2024 • Han Yu, Xingxuan Zhang, Renzhe Xu, Jiashuo Liu, Yue He, Peng Cui
This paper examines the risks of test data information leakage from two aspects of the current evaluation protocol: supervised pretraining on ImageNet and oracle model selection.
no code implementations • 21 May 2023 • Zimu Wang, Jiashuo Liu, Hao Zou, Xingxuan Zhang, Yue He, Dongxu Liang, Peng Cui
In this work, we focus on exploring two representative categories of heterogeneity in recommendation data, namely heterogeneity in the prediction mechanism and in the covariate distribution, and propose an algorithm that explores this heterogeneity through a bilevel clustering method.
1 code implementation • CVPR 2023 • Xingxuan Zhang, Renzhe Xu, Han Yu, Hao Zou, Peng Cui
Yet the current definition of flatness discussed in SAM and its follow-ups is limited to zeroth-order flatness (i.e., the worst-case loss within a perturbation radius).
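For context, the zeroth-order flatness mentioned here is what SAM optimizes: the worst-case loss within an L2 ball of radius rho around the current weights. The PyTorch sketch below shows one SAM-style step under that definition; it is only the baseline the sentence refers to, not this paper's own method.

    import torch

    def sam_step(model, loss_fn, x, y, base_opt, rho=0.05):
        # Ascent step: perturb the weights toward the (approximate) worst case in the rho-ball.
        loss_fn(model(x), y).backward()
        grads = [p.grad for p in model.parameters() if p.grad is not None]
        norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
        eps = []
        with torch.no_grad():
            for p in model.parameters():
                if p.grad is None:
                    continue
                e = rho * p.grad / norm
                p.add_(e)
                eps.append(e)
        model.zero_grad()
        # Descent step: gradient at the perturbed point, applied to the restored weights.
        loss_fn(model(x), y).backward()
        with torch.no_grad():
            perturbed = [p for p in model.parameters() if p.grad is not None]
            for p, e in zip(perturbed, eps):
                p.sub_(e)
        base_opt.step()
        base_opt.zero_grad()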
no code implementations • 2 Dec 2022 • Han Yu, Peng Cui, Yue He, Zheyan Shen, Yong Lin, Renzhe Xu, Xingxuan Zhang
The problem of covariate-shift generalization has attracted intensive research attention.
1 code implementation • 15 Oct 2022 • Renzhe Xu, Xingxuan Zhang, Bo Li, Yafeng Zhang, Xiaolong Chen, Peng Cui
In this paper, we assume that each consumer can purchase multiple products at will.
2 code implementations • CVPR 2023 • Xingxuan Zhang, Yue He, Renzhe Xu, Han Yu, Zheyan Shen, Peng Cui
Most current evaluation methods for domain generalization (DG) adopt the leave-one-out strategy as a compromise given the limited number of available domains.
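The leave-one-out strategy referred to here can be summarized in a few lines of Python; `train_fn` and `eval_fn` are hypothetical placeholders for the reader's own training and evaluation routines.

    def leave_one_out_eval(domains, train_fn, eval_fn):
        # For each domain, train on all the others and test on the held-out one,
        # then report per-domain scores and their average.
        scores = {}
        for held_out in domains:
            sources = [d for d in domains if d != held_out]
            model = train_fn(sources)
            scores[held_out] = eval_fn(model, held_out)
        return scores, sum(scores.values()) / len(scores)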
no code implementations • 27 Mar 2022 • Xingxuan Zhang, Zekai Xu, Renzhe Xu, Jiashuo Liu, Peng Cui, Weitao Wan, Chong Sun, Chen Li
Despite the striking performance achieved by modern detectors when training and test data are sampled from the same or similar distribution, the generalization ability of detectors under unknown distribution shifts has hardly been studied.
1 code implementation • 9 Feb 2022 • Renzhe Xu, Xingxuan Zhang, Peng Cui, Bo Li, Zheyan Shen, Jiazheng Xu
Personalized pricing is a business strategy to charge different prices to individual consumers based on their characteristics and behaviors.
1 code implementation • 3 Nov 2021 • Renzhe Xu, Xingxuan Zhang, Zheyan Shen, Tong Zhang, Peng Cui
Afterward, we prove that under ideal conditions, independence-driven importance weighting algorithms could identify the variables in this set.
no code implementations • 31 Aug 2021 • Jiashuo Liu, Zheyan Shen, Yue He, Xingxuan Zhang, Renzhe Xu, Han Yu, Peng Cui
This paper represents the first comprehensive, systematic review of OOD generalization, encompassing a spectrum of aspects from problem definition, methodological development, and evaluation procedures, to the implications and future directions of the field.
Out-of-Distribution Generalization • Representation Learning • +1
no code implementations • CVPR 2022 • Xingxuan Zhang, Linjun Zhou, Renzhe Xu, Peng Cui, Zheyan Shen, Haoxin Liu
Domain generalization (DG) aims to help models trained on a set of source domains generalize better on unseen target domains.
2 code implementations • CVPR 2021 • Xingxuan Zhang, Peng Cui, Renzhe Xu, Linjun Zhou, Yue He, Zheyan Shen
Approaches based on deep neural networks have achieved striking performance when testing data and training data share similar distribution, but can significantly fail otherwise.
Ranked #35 on Domain Generalization on VLCS
no code implementations • 1 Jan 2021 • Xingxuan Zhang, Peng Cui, Renzhe Xu, Yue He, Linjun Zhou, Zheyan Shen
We propose to address this problem by removing the dependencies between features via reweighting training samples, which results in a more balanced distribution and helps deep models get rid of spurious correlations and, in turn, concentrate more on the true connection between features and labels.
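A simplified sketch of this sample-reweighting idea is shown below, assuming only linear (covariance-based) dependence between features; it illustrates the general technique rather than the paper's exact algorithm.

    import torch

    def decorrelating_weights(features, n_iter=500, lr=0.01):
        # features: (n, d) float tensor. Learn positive sample weights (mean 1)
        # that shrink the off-diagonal entries of the weighted covariance matrix,
        # i.e., reduce pairwise dependence between features after reweighting.
        n, d = features.shape
        logits = torch.zeros(n, requires_grad=True)
        opt = torch.optim.Adam([logits], lr=lr)
        for _ in range(n_iter):
            w = torch.softmax(logits, dim=0) * n
            mean = (w[:, None] * features).sum(0) / n
            centered = features - mean
            cov = (w[:, None] * centered).T @ centered / n
            off_diag = cov - torch.diag(torch.diag(cov))
            loss = (off_diag ** 2).sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
        return (torch.softmax(logits, dim=0) * n).detach()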
no code implementations • 2 Dec 2020 • Won-Dong Jang, Donglai Wei, Xingxuan Zhang, Brian Leahy, Helen Yang, James Tompkin, Dalit Ben-Yosef, Daniel Needleman, Hanspeter Pfister
To alleviate the problem, we propose to classify input features into intermediate shape codes and recover complete object shapes from them.
no code implementations • ICCV 2019 • Xingxuan Zhang, Feng Cheng, Shilin Wang
Current state-of-the-art approaches for lip reading are based on sequence-to-sequence architectures that were designed for neural machine translation and audio speech recognition.
Ranked #22 on Lipreading on LRS2 (using extra training data)