no code implementations • 24 Dec 2024 • Chang Liu, Xin Ma, Xiaochen Yang, Yuxiang Zhang, Yanni Dong
In this paper, we propose a novel approach called the CrOss-Mamba interaction and Offset-guided fusion (COMO) framework for multimodal object detection tasks.
1 code implementation • 5 Nov 2024 • Yifan Wang, Xiaochen Yang, Fanqi Pu, Qingmin Liao, Wenming Yang
Specifically, EH-FAM employs multi-head attention with a global receptive field to extract semantic features for small-scale objects and leverages lightweight convolutional modules to efficiently aggregate visual features across different scales.
1 code implementation • 15 Oct 2024 • Qizhang Li, Xiaochen Yang, WangMeng Zuo, Yiwen Guo
Our method also achieves over 90% attack success rates against Llama-2-Chat models on AdvBench, despite their outstanding resistance to jailbreak attacks.
no code implementations • 19 Sep 2024 • Chenyu Wang, Shuo Yan, Yixuan Chen, Yujiang Wang, Mingzhi Dong, Xiaochen Yang, Dongsheng Li, Robert P. Dick, Qin Lv, Fan Yang, Tun Lu, Ning Gu, Li Shang
Our key discovery is that coarse-grained noise in earlier denoising steps exhibits high motion consistency across consecutive video frames.
no code implementations • CVPR 2024 • Zhenzhong Kuang, Xiaochen Yang, Yingjie Shen, Chao Hu, Jun Yu
On the other hand, we anonymize the visual clues (i.e., appearance and geometric structure) by distracting the extrinsic identity attention.
no code implementations • 31 May 2024 • Xiaoke Wang, Xiaochen Yang, Rui Zhu, Jing-Hao Xue
Positive-unlabeled (PU) learning aims to train a classifier using data that contain only labeled positive instances and unlabeled instances.
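As background for the PU setting described above, a minimal sketch of the standard non-negative PU risk estimator (this is the classical estimator from the PU literature, not necessarily this paper's proposed method; `pi` is an assumed-known positive class prior):

```python
# Hedged sketch: the non-negative PU risk estimator, given an assumed
# class prior pi. Illustrative background, not this paper's method.
import numpy as np

def sigmoid_loss(z):
    # A common surrogate loss in PU learning: small for large positive scores.
    return 1.0 / (1.0 + np.exp(z))

def pu_risk(scores_p, scores_u, pi):
    r_p_pos = sigmoid_loss(scores_p).mean()    # loss of positives labeled positive
    r_p_neg = sigmoid_loss(-scores_p).mean()   # loss of positives labeled negative
    r_u_neg = sigmoid_loss(-scores_u).mean()   # loss of unlabeled labeled negative
    # Non-negative correction keeps the negative-risk term from going below zero.
    return pi * r_p_pos + max(0.0, r_u_neg - pi * r_p_neg)

# Toy scores for labeled-positive and unlabeled instances.
risk = pu_risk(np.array([2.0, 1.5]), np.array([-1.0, 0.5, 2.0]), pi=0.4)
```

The unlabeled set's negative risk is debiased by subtracting the prior-weighted contribution of the positives hidden inside it.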
no code implementations • NeurIPS 2023 • Yubin Shi, Yixuan Chen, Mingzhi Dong, Xiaochen Yang, Dongsheng Li, Yujiang Wang, Robert P. Dick, Qin Lv, Yingying Zhao, Fan Yang, Tun Lu, Ning Gu, Li Shang
To describe such modular-level learning capabilities, we introduce a novel concept dubbed modular neural tangent kernel (mNTK), and we demonstrate that the quality of a module's learning is tightly associated with its mNTK's principal eigenvalue $\lambda_{\max}$.
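The mNTK quantity described above can be sketched as follows, assuming the usual empirical-NTK construction restricted to one module's parameters (the module choice and network here are illustrative):

```python
# Hedged sketch: estimating a module-wise empirical NTK and its principal
# eigenvalue lambda_max, restricted to one module's parameters.
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 1)
)
module = model[0]          # the module whose mNTK we probe (illustrative choice)
x = torch.randn(16, 4)

grads = []
for xi in x:
    out = model(xi.unsqueeze(0)).squeeze()
    # Per-sample gradient of the scalar output w.r.t. this module's parameters.
    g = torch.autograd.grad(out, module.parameters())
    grads.append(torch.cat([p.reshape(-1) for p in g]))

J = torch.stack(grads)     # (n_samples, n_module_params) gradient matrix
K = J @ J.T                # module-wise empirical NTK Gram matrix
lam_max = torch.linalg.eigvalsh(K)[-1].item()  # principal eigenvalue
```

A larger `lam_max` indicates that the module's parameters produce strongly aligned per-sample gradients, which the paper links to the quality of that module's learning.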
no code implementations • 21 Jul 2023 • Qizhang Li, Yiwen Guo, Xiaochen Yang, WangMeng Zuo, Hao Chen
Our ICLR work advocated enhancing the transferability of adversarial examples by incorporating a Bayesian formulation into the model parameters, which effectively emulates an ensemble of infinitely many deep neural networks. In this paper, we introduce a novel extension that incorporates the Bayesian formulation into the model input as well, enabling the joint diversification of both the model input and the model parameters.
no code implementations • 27 Sep 2021 • Chenyu Wang, Zongyu Lin, Xiaochen Yang, Jiao Sun, Mingxuan Yue, Cyrus Shahabi
Based on the homophily assumption of GNN, we propose a homophily-aware constraint to regularize the optimization of the region graph so that neighboring region nodes on the learned graph share similar crime patterns, thus fitting the mechanism of diffusion convolution.
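A generic graph-smoothness penalty of the kind described above can be sketched in a few lines (toy adjacency and embeddings; names are illustrative, not the paper's implementation):

```python
# Hedged sketch: a homophily regularizer that penalizes differences between
# neighboring region embeddings, tr(H^T L H) = 0.5 * sum_ij A_ij ||h_i - h_j||^2.
import numpy as np

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])       # toy learned region adjacency
H = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.0, 1.0]])         # toy region crime-pattern embeddings

L_graph = np.diag(A.sum(axis=1)) - A   # unnormalized graph Laplacian
reg = np.trace(H.T @ L_graph @ H)      # small when neighbors share patterns
```

Minimizing `reg` during optimization pulls the embeddings of adjacent region nodes together, which is the homophily behavior the constraint encodes.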
no code implementations • 17 May 2021 • Xiaoxu Li, Xiaochen Yang, Zhanyu Ma, Jing-Hao Xue
Few-shot image classification is a challenging problem that aims to achieve human-level recognition from only a small number of training images.
1 code implementation • NeurIPS 2020 • Mingzhi Dong, Xiaochen Yang, Rui Zhu, Yujiang Wang, Jing-Hao Xue
Metric learning aims to learn a distance measure that can benefit distance-based methods such as the nearest neighbour (NN) classifier.
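The setup above can be illustrated with a Mahalanobis distance plugged into a 1-NN classifier (a generic sketch of the problem setting, not the paper's algorithm; the linear map `L` is a placeholder that metric learning would fit to data):

```python
# Hedged sketch: a Mahalanobis metric d(x, y) = ||L(x - y)||_2 used by a
# 1-NN classifier. L is a placeholder; learning it is the metric-learning task.
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(20, 3))
y_train = rng.integers(0, 2, size=20)
L = rng.normal(size=(3, 3))          # placeholder linear map (M = L^T L is PSD)

def mahalanobis_nn(x):
    diffs = (X_train - x) @ L.T      # pairwise differences in transformed space
    d = np.einsum("ij,ij->i", diffs, diffs)  # squared Mahalanobis distances
    return y_train[np.argmin(d)]     # label of the nearest training instance

pred = mahalanobis_nn(X_train[0])
```

Because the metric only enters through the transformed differences, learning `L` is equivalent to learning a linear embedding under which plain Euclidean 1-NN performs well.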
no code implementations • 1 Jul 2020 • Xiaochen Yang, Jean Honorio
In this paper, we study the sample complexity lower bounds for the exact recovery of parameters and for a positive excess risk of a feed-forward, fully-connected neural network for binary classification, using information-theoretic tools.
1 code implementation • 27 Jun 2020 • Xiaoxu Li, Liyun Yu, Xiaochen Yang, Zhanyu Ma, Jing-Hao Xue, Jie Cao, Jun Guo
Despite achieving state-of-the-art performance, deep learning methods generally require a large amount of labeled data during training and may suffer from overfitting when the sample size is small.
1 code implementation • 10 Jun 2020 • Xiaochen Yang, Yiwen Guo, Mingzhi Dong, Jing-Hao Xue
Many existing methods maximize, or at least constrain, a distance margin in the feature space that separates similar from dissimilar pairs of instances, in order to guarantee generalization ability.
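The margin constraint mentioned above can be sketched as a hinge loss in its generic form (this is the standard pairwise-margin formulation, not necessarily this paper's specific objective):

```python
# Hedged sketch: a generic margin hinge loss that pushes dissimilar pairs
# at least `margin` farther apart than similar pairs in feature space.
import numpy as np

def margin_loss(d_sim, d_dis, margin=1.0):
    # d_sim: distances between similar pairs; d_dis: between dissimilar pairs.
    # Zero loss once each dissimilar pair exceeds its similar pair by `margin`.
    return np.maximum(0.0, margin + d_sim - d_dis).mean()

loss = margin_loss(np.array([0.2, 0.5]), np.array([2.0, 0.8]))
```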
no code implementations • 9 Feb 2018 • Mingzhi Dong, Yujiang Wang, Xiaochen Yang, Jing-Hao Xue
The performance of distance-based classifiers heavily depends on the underlying distance metric, so it is valuable to learn a suitable metric from the data.
no code implementations • 9 Feb 2018 • Mingzhi Dong, Xiaochen Yang, Yang Wu, Jing-Hao Xue
In this paper, we propose the Lipschitz margin ratio and a new metric learning framework for classification through maximizing the ratio.