no code implementations • 16 Aug 2024 • Miaoge Li, Jingcai Guo, Richard Yi Da Xu, Dongsheng Wang, Xiaofeng Cao, Song Guo
Compositional Zero-Shot Learning (CZSL) aims to recognize novel state-object compositions by leveraging the shared knowledge of their primitive components.
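To make the setting concrete, below is a minimal sketch of the generic CZSL scoring scheme, not this paper's method: an image feature is matched against composed state-object embeddings, so unseen compositions of seen primitives can still be scored. The additive composition and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Generic CZSL scoring sketch (illustrative assumptions throughout):
# compose every (state, object) pair into one embedding, then match
# the image feature against all compositions, seen or unseen.
num_states, num_objects, dim = 10, 20, 64
state_emb = torch.randn(num_states, dim)
object_emb = torch.randn(num_objects, dim)
img_feat = F.normalize(torch.randn(1, dim), dim=-1)

pairs = state_emb[:, None, :] + object_emb[None, :, :]          # (S, O, dim), additive composition
scores = (F.normalize(pairs, dim=-1) @ img_feat.T).squeeze(-1)  # cosine scores, (S, O)
state_idx, object_idx = divmod(scores.argmax().item(), num_objects)
```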
1 code implementation • 27 May 2024 • YuXiao Lee, Xiaofeng Cao, Jingcai Guo, Wei Ye, Qing Guo, Yi Chang
The remarkable achievements of Large Language Models (LLMs) have captivated the attention of both academia and industry, transcending their initial role in dialogue generation.
1 code implementation • 9 May 2024 • Shuhao Tang, Hao Tian, Xiaofeng Cao, Wei Ye
Typical R-convolution graph kernels invoke kernel functions that decompose graphs into non-isomorphic substructures and compare them.
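For context, here is a textbook illustration of the base R-convolution idea, not this paper's kernel: decompose each graph into simple parts, here (node label, degree) pairs, and count matching parts with a delta kernel. The dict-based graph encoding is an assumption made for the example.

```python
from collections import Counter

def substructures(graph):
    """Decompose a graph into (node label, degree) parts.
    graph: dict mapping node -> (label, set of neighbours)."""
    return Counter((label, len(nbrs)) for label, nbrs in graph.values())

def r_convolution_kernel(g1, g2):
    """Base R-convolution kernel: sum of delta-kernel matches over all part pairs."""
    c1, c2 = substructures(g1), substructures(g2)
    return sum(c1[p] * c2[p] for p in c1.keys() & c2.keys())

g1 = {0: ("C", {1}), 1: ("O", {0, 2}), 2: ("C", {1})}
g2 = {0: ("C", {1}), 1: ("C", {0})}
print(r_convolution_kernel(g1, g2))  # 4 matching part pairs
```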
1 code implementation • 25 Apr 2024 • Zhijie Rao, Jingcai Guo, Xiaocheng Lu, Jingming Liang, Jie Zhang, Haozhao Wang, Kang Wei, Xiaofeng Cao
Zero-shot learning has consistently yielded remarkable progress by modeling nuanced one-to-one visual-attribute correlations.
1 code implementation • 8 Feb 2024 • Zhenlong Liu, Lei Feng, Huiping Zhuang, Xiaofeng Cao, Hongxin Wei
In this work, we propose a novel method -- Convex-Concave Loss, which enables a high variance of training loss distribution by gradient descent.
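A minimal sketch of the convex-plus-concave idea under stated assumptions; the paper's exact formulation may differ, and `alpha` is a hypothetical hyperparameter. The concave correction keeps per-sample losses from all collapsing to the same small value, which is what spreads out the training loss distribution.

```python
import torch.nn.functional as F

def convex_concave_loss(logits, targets, alpha=0.5):
    """Illustrative convex-concave loss (a sketch, not the paper's exact form):
    convex cross-entropy plus a concave correction in the per-sample loss."""
    ce = F.cross_entropy(logits, targets, reduction="none")  # convex component
    return (ce - alpha * ce.pow(2)).mean()                   # concave component: -alpha * ce^2
```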
no code implementations • 6 Feb 2024 • Bohao Qu, Xiaofeng Cao, Qing Guo, Yi Chang, Ivor W. Tsang, Chengqi Zhang
In this study, we present a transductive inference approach on a reward information propagation graph, which enables the effective estimation of rewards for unlabelled data in offline reinforcement learning.
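As a rough illustration of the general idea, not the paper's algorithm, transductive reward estimation can be sketched as standard label propagation over a similarity graph, clamping the known rewards at every step; the similarity matrix `W` and the iteration count are illustrative assumptions.

```python
import numpy as np

def propagate_rewards(W, rewards, labelled_mask, iters=100):
    """Spread known rewards to unlabelled nodes by iterative averaging.
    W: (n, n) nonnegative similarity matrix with no all-zero rows;
    rewards: (n,) array, trusted only where labelled_mask is True."""
    P = W / W.sum(axis=1, keepdims=True)           # row-normalised transition matrix
    r = np.where(labelled_mask, rewards, 0.0)
    for _ in range(iters):
        r = P @ r                                  # one propagation step
        r[labelled_mask] = rewards[labelled_mask]  # clamp the labelled rewards
    return r
```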
1 code implementation • 2 Feb 2024 • Yangyang Shu, Xiaofeng Cao, Qi Chen, BoWen Zhang, Ziqin Zhou, Anton Van Den Hengel, Lingqiao Liu
Source-Free Unsupervised Domain Adaptation (SFUDA) is a challenging task where a model needs to be adapted to a new domain without access to target domain labels or source domain data.
1 code implementation • 17 Jan 2024 • Feiyang Ye, Baijiong Lin, Xiaofeng Cao, Yu Zhang, Ivor Tsang
In this paper, we study the Multi-Objective Bi-Level Optimization (MOBLO) problem, where the upper-level subproblem is a multi-objective optimization problem and the lower-level subproblem is for scalar optimization.
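To fix ideas, here is a toy sketch of the MOBLO structure under simplifying assumptions: quadratic objectives, an unrolled inner SGD solver, and naive averaging of the two upper-level objectives, where principled MOBLO methods would use a proper multi-objective update.

```python
import torch

# Lower level: w*(z) = argmin_w (w - z)^2 (scalar objective, solved by unrolled SGD).
# Upper level: minimize f1 = (w* - 1)^2 and f2 = (w* + 1)^2 jointly over z.
z = torch.tensor(0.5, requires_grad=True)
opt = torch.optim.SGD([z], lr=0.05)

for _ in range(100):
    w = torch.tensor(0.0)
    for _ in range(20):              # unrolled lower-level SGD
        w = w - 0.1 * (2 * (w - z))  # gradient of (w - z)^2, differentiable in z
    f1, f2 = (w - 1) ** 2, (w + 1) ** 2
    opt.zero_grad()
    ((f1 + f2) / 2).backward()       # hypergradient through the unrolled solver
    opt.step()

print(z.item())  # drifts toward the compromise z ≈ 0
```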
1 code implementation • NeurIPS 2023 • Chen Zhang, Xiaofeng Cao, Weiyang Liu, Ivor Tsang, James Kwok
In MINT, the teacher aims to instruct multiple learners, with each learner focusing on learning a scalar-valued target model.
no code implementations • 10 Nov 2023 • Mingwei Xu, Xiaofeng Cao, Ivor W. Tsang, James T. Kwok
In this paper, we replace the aforementioned weighting method with a new strategy that considers the generalization bounds of each local model.
1 code implementation • 25 Oct 2023 • Cong Wang, Xiaofeng Cao, Lanzhe Guo, Zenglin Shi
In this paper, we propose a novel SSL method called DualMatch, in which the class prediction and the feature embedding jointly interact in a dual-level manner.
1 code implementation • 18 Oct 2023 • Yue Cao, Tianlin Li, Xiaofeng Cao, Ivor Tsang, Yang Liu, Qing Guo
The underlying rationale behind our idea is that image resampling can alleviate the influence of adversarial perturbations while preserving essential semantic information, thereby conferring an inherent advantage in defending against adversarial attacks.
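A minimal sketch of the resampling intuition, assuming a simple down-then-up scheme; the paper's actual resampling strategy is more involved, and `scale` is a hypothetical knob.

```python
import torch.nn.functional as F

def resample_defense(x, scale=0.5):
    """Downsample then upsample a batch (N, C, H, W) so high-frequency
    adversarial perturbations are smoothed while coarse semantics survive.
    A sketch of the intuition only, not the paper's method."""
    _, _, h, w = x.shape
    small = F.interpolate(x, scale_factor=scale, mode="bilinear", align_corners=False)
    return F.interpolate(small, size=(h, w), mode="bilinear", align_corners=False)

# usage sketch: logits = model(resample_defense(adv_images))
```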
1 code implementation • 5 Jun 2023 • Chen Zhang, Xiaofeng Cao, Weiyang Liu, Ivor Tsang, James Kwok
In this paper, we consider the problem of Iterative Machine Teaching (IMT), where the teacher provides examples to the learner iteratively such that the learner can achieve fast convergence to a target model.
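For background, the classic omniscient-teacher loop that IMT builds on can be sketched as follows for a linear least-squares learner; the pool, learning rate, and loss are illustrative choices rather than this paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
d, pool_size, lr = 5, 200, 0.1
w_star = rng.normal(size=d)          # target model the teacher knows
w = np.zeros(d)                      # learner's current model
X = rng.normal(size=(pool_size, d))
y = X @ w_star                       # labels consistent with the target

for _ in range(50):
    grads = (X @ w - y)[:, None] * X  # per-example squared-loss gradients
    candidates = w - lr * grads       # learner state after each possible example
    w = candidates[np.argmin(np.linalg.norm(candidates - w_star, axis=1))]

print(np.linalg.norm(w - w_star))    # distance to the target shrinks quickly
```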
no code implementations • 28 Feb 2023 • Bohao Qu, Xiaofeng Cao, Jielong Yang, Hechang Chen, Yi Chang, Ivor W. Tsang, Yew-Soon Ong
To resolve this problem, this paper tries to learn diverse policies from the history of state-action pairs under a non-Markovian environment, in which a policy dispersion scheme is designed to seek diverse policy representations.
no code implementations • 13 Dec 2022 • Chen Zhang, Xiaofeng Cao, Yi Chang, Ivor W Tsang
Then, relying on the surjective mapping from the teaching set to the parameters, we develop a design strategy for the optimal teaching set under appropriate settings, of which two popular efficiency metrics, the teaching dimension and the iterative teaching dimension, are special cases.
no code implementations • 29 Jul 2022 • Xiaofeng Cao, Weixin Bu, Shengjun Huang, MinLing Zhang, Ivor W. Tsang, Yew Soon Ong, James T. Kwok
Looking ahead, learning on small data that approximates the generalization ability of big data is one of the ultimate goals of AI, which requires machines to recognize objectives and scenarios from small data, as humans do.
no code implementations • 30 Jun 2022 • Xiaofeng Cao, Weiyang Liu, Ivor W. Tsang
Finally, we demonstrate the empirical performance of MHEAL in a wide range of data-efficient learning applications, including deep clustering, distribution matching, version space sampling, and deep active learning.
no code implementations • 30 Jun 2022 • Xiaofeng Cao, Yaming Guo, Ivor W. Tsang, James T. Kwok
An inherent assumption is that this manner of learning can drive those updates toward the optimal hypothesis.
no code implementations • 16 Mar 2022 • Bike Chen, Wei Peng, Xiaofeng Cao, Juha Röning
Semantic segmentation (SS) aims to classify each pixel into one of the pre-defined classes.
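Concretely, the SS objective reduces to pixel-wise classification, as in this standard sketch (the 19-class shape is just an example):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(2, 19, 64, 64)      # per-pixel class scores (N, classes, H, W)
labels = torch.randint(0, 19, (2, 64, 64))
loss = F.cross_entropy(logits, labels)   # cross-entropy averaged over all pixels
pred = logits.argmax(dim=1)              # per-pixel class decision (N, H, W)
```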
no code implementations • 6 May 2021 • Xiaofeng Cao, Ivor W. Tsang
This optimization solver is, in general, ineffective when the student learner does not disclose any cues about its learning parameters.
no code implementations • 6 May 2021 • Xiaofeng Cao, Ivor W. Tsang
We present Geometric Bayesian Active Learning by Disagreements (GBALD), a framework that performs BALD on a core-set construction that interacts with model uncertainty estimation.
no code implementations • 1 Jan 2021 • Xiaofeng Cao, Ivor Tsang
To guarantee the improvements, our generalization analysis proves that, compared to the typical Bayesian spherical interpretation, geodesic search with an ellipsoid can derive a tighter lower error bound and achieve a higher probability of obtaining nearly zero error.
no code implementations • 20 May 2019 • Xiaofeng Xu, Ivor W. Tsang, Xiaofeng Cao, Ruiheng Zhang, Chuancai Liu
In most existing attribute-based research, class-specific attributes (CSA), which are class-level annotations, are usually adopted due to their low annotation cost: each class, rather than each individual image, is annotated.
no code implementations • 28 Sep 2018 • Xiaofeng Cao, Ivor W. Tsang, Xiaofeng Xu, Guandong Xu
By discovering the connections between the hypothesis and the input distribution, we map the volume of the version space into number density and propose a target-independent distribution-splitting strategy with the following advantages: 1) it provides theoretical guarantees on reducing label complexity and error rate, as volume-splitting does; 2) it breaks the curse of the initial hypothesis; 3) it provides model guidance for a target-independent AL algorithm in real AL tasks.
no code implementations • 24 Jul 2018 • Xiaofeng Cao, Ivor W. Tsang, Guandong Xu
In this paper, we approximate the version space by a structured hypersphere that covers most of the hypotheses, and then divide the available AL sampling approaches into two kinds of strategies: Outer Volume Sampling and Inner Volume Sampling.
1 code implementation • CVPR 2018 • Zenglin Shi, Le Zhang, Yun Liu, Xiaofeng Cao, Yangdong Ye, Ming-Ming Cheng, Guoyan Zheng
Deep convolutional networks (ConvNets) have achieved unprecedented performance on many computer vision tasks.
Ranked #9 on Crowd Counting on WorldExpo’10
no code implementations • 31 May 2018 • Xiaofeng Cao
Exploiting the advantages of cluster boundary points with respect to the above two properties, we propose a Geometric Active Learning (GAL) algorithm based on the knight's tour.