1 code implementation • 28 Feb 2023 • Yu Zhang, Junle Yu, Xiaolin Huang, Wenhui Zhou, Ji Hou
Unlike previous methods that use only geometric representations, our module is specifically designed to correlate color with geometry for the point cloud registration task.
no code implementations • 23 Feb 2023 • Zhengbao He, Tao Li, Sizhe Chen, Xiaolin Huang
Based on self-fitting, we provide new insights into the existing methods to mitigate CO and extend CO to multi-step adversarial training.
1 code implementation • 22 Nov 2022 • Sizhe Chen, Geng Yuan, Xinwen Cheng, Yifan Gong, Minghai Qin, Yanzhi Wang, Xiaolin Huang
In this paper, we uncover them by model checkpoints' gradients, forming the proposed self-ensemble protection (SEP), which is very effective because (1) learning on examples ignored during normal training tends to yield DNNs ignoring normal examples; (2) checkpoints' cross-model gradients are close to orthogonal, meaning that they are as diverse as DNNs with different architectures.
1 code implementation • 21 Nov 2022 • Tao Li, Weihao Yan, Zehao Lei, Yingwen Wu, Kun Fang, Ming Yang, Xiaolin Huang
To fully uncover the great potential of deep neural networks (DNNs), various learning algorithms have been developed to improve the model's generalization ability.
1 code implementation • 20 Nov 2022 • Kun Fang, Qinghua Tao, Yingwen Wu, Tao Li, Xiaolin Huang, Jie Yang
Randomized Smoothing (RS) is a promising technique for certified robustness, and in RS the ensemble of multiple deep neural networks (DNNs) has recently shown state-of-the-art performance.
no code implementations • 27 Sep 2022 • Zhixing Ye, Xinwen Cheng, Xiaolin Huang
Deep Neural Networks (DNNs) are susceptible to elaborately designed perturbations, whether such perturbations are dependent or independent of images.
no code implementations • 18 Sep 2022 • Mingzhen He, Fan He, Fanghui Liu, Xiaolin Huang
The theoretical foundation of RFFs is based on the Bochner theorem that relates symmetric, positive definite (PD) functions to probability measures.
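For the symmetric PD setting described above, the classical Bochner-based construction can be sketched as follows (a minimal sketch for a Gaussian kernel; `rff_features` is a hypothetical helper name, not any paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_features(X, n_features=2000, gamma=0.5):
    """Random Fourier features for the Gaussian kernel
    k(x, y) = exp(-gamma * ||x - y||^2).

    Bochner's theorem says this symmetric PD kernel is the Fourier
    transform of a probability measure, here N(0, 2*gamma*I); sampling
    frequencies from it gives features z(x) with E[z(x) @ z(y)] = k(x, y).
    """
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Monte-Carlo check against the exact kernel matrix.
X = rng.normal(size=(5, 3))
Z = rff_features(X)
K_approx = Z @ Z.T
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-0.5 * sq_dists)
err = np.max(np.abs(K_approx - K_exact))
```

With 2000 features the inner products track the exact kernel to within a few percent; asymmetric kernels fall outside this construction, which is exactly the gap the entry above addresses.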
no code implementations • 12 Aug 2022 • Yingwen Wu, Sizhe Chen, Kun Fang, Xiaolin Huang
The wide application of deep neural networks (DNNs) demands increasing attention to their real-world robustness, i.e., whether a DNN resists black-box adversarial attacks. Among these, score-based query attacks (SQAs) are the most threatening because of their practicality and effectiveness: attackers need only dozens of queries on model outputs to seriously hurt a victim network.
no code implementations • 4 Jul 2022 • Weiyu Sun, Xinyu Zhang, Ying Chen, Yun Ge, Chunyu Ji, Xiaolin Huang
Heart rate measurement based on remote photoplethysmography (rPPG) plays an important role in healthcare, estimating heart rate from facial video in a non-contact, less-constrained way.
no code implementations • 18 Jun 2022 • Qinghua Tao, Li Li, Xiaolin Huang, Xiangming Xi, Shuning Wang, Johan A. K. Suykens
To apply PWLNN methods, both the representation and the learning have long been studied.
1 code implementation • 26 May 2022 • Tao Li, Zhehao Huang, Qinghua Tao, Yingwen Wu, Xiaolin Huang
Recently, an interesting attempt is stochastic weight averaging (SWA), which significantly improves the generalization by simply averaging the solutions at the tail stage of training.
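The SWA idea mentioned above reduces, at its core, to averaging parameter snapshots. A minimal sketch with parameter dicts of NumPy arrays (`average_checkpoints` is a hypothetical helper, not the paper's code):

```python
import numpy as np

def average_checkpoints(checkpoints):
    """SWA-style sketch: average a list of parameter dicts
    {name: ndarray} collected at the tail stage of training."""
    keys = checkpoints[0].keys()
    return {k: np.mean([c[k] for c in checkpoints], axis=0) for k in keys}

# Toy usage: three tail-stage checkpoints of a two-parameter model.
ckpts = [{"w": np.array([1.0, 2.0]), "b": np.array([0.0])},
         {"w": np.array([3.0, 2.0]), "b": np.array([1.0])},
         {"w": np.array([2.0, 2.0]), "b": np.array([2.0])}]
swa = average_checkpoints(ckpts)
```

In practice SWA keeps a running average during the tail of SGD rather than storing all checkpoints, but the averaged solution is the same.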
1 code implementation • 24 May 2022 • Sizhe Chen, Zhehao Huang, Qinghua Tao, Yingwen Wu, Cihang Xie, Xiaolin Huang
The score-based query attacks (SQAs) pose practical threats to deep neural networks by crafting adversarial perturbations within dozens of queries, only using the model's output scores.
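A score-based query attack of the kind described above can be sketched as greedy coordinate-wise search using nothing but output scores (a toy illustration in the spirit of SimBA; `score` is a hypothetical stand-in victim, not a real DNN):

```python
import numpy as np

rng = np.random.default_rng(0)

def score(x, w=np.array([1.0, -2.0, 0.5])):
    """Stand-in victim model: a scalar confidence for the true class
    (a real SQA would query a DNN's output scores instead)."""
    return float(w @ x)

def simba_like_attack(x, eps=0.1, n_queries=50):
    """SQA sketch: perturb one random coordinate at a time and keep the
    step only if the true-class score drops; the attacker never sees
    gradients, only scores."""
    x = x.copy()
    best = score(x)
    for _ in range(n_queries):
        i = rng.integers(len(x))
        for step in (eps, -eps):
            cand = x.copy()
            cand[i] += step
            s = score(cand)
            if s < best:
                x, best = cand, s
                break
    return x, best

x0 = np.array([0.5, 0.5, 0.5])
x_adv, s_adv = simba_like_attack(x0)
```

Each accepted step strictly lowers the victim's confidence, which is why a few dozen queries already suffice against this toy model.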
1 code implementation • 24 May 2022 • Shutong Wu, Sizhe Chen, Cihang Xie, Xiaolin Huang
Based on OPS, we introduce an unlearnable dataset called CIFAR-10-S, which is indistinguishable from CIFAR-10 to humans but drives the trained model to extremely low accuracy.
no code implementations • 13 Feb 2022 • Yixing Huang, Andreas Maier, Fuxin Fan, Björn Kreher, Xiaolin Huang, Rainer Fietkau, Christoph Bert, Florian Putz
The complementary view setting provides a practical way to identify perspectively deformed structures by assessing the deviation between the two views.
no code implementations • 3 Feb 2022 • Mingzhen He, Fan He, Lei Shi, Xiaolin Huang, Johan A. K. Suykens
Asymmetric kernels naturally exist in real life, e.g., for conditional probability and directed graphs.
1 code implementation • CVPR 2022 • Tao Li, Yingwen Wu, Sizhe Chen, Kun Fang, Xiaolin Huang
Single-step adversarial training (AT) has received wide attention as it proved to be both efficient and robust.
no code implementations • 30 Sep 2021 • Hengling Zhao, Yipeng Liu, Xiaolin Huang, Ce Zhu
Tucker decomposition, Tensor Train (TT), and Tensor Ring (TR) are common decompositions for low-rank compression of deep neural networks.
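The matrix analogue of these low-rank compression schemes is a truncated SVD of a weight matrix; Tucker, TT, and TR generalize this factor-splitting idea to higher-order weight tensors. An illustrative sketch only (not any of the paper's decompositions):

```python
import numpy as np

def low_rank_compress(W, rank):
    """Replace a dense layer W (m x n) with two thin factors A (m x r)
    and B (r x n) via truncated SVD, cutting storage from m*n to
    r*(m + n) parameters."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]
    B = Vt[:rank]
    return A, B

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 8)) @ rng.normal(size=(8, 32))  # true rank 8
A, B = low_rank_compress(W, rank=8)
```

Here the rank-8 factorization reconstructs W exactly with 768 parameters instead of 2048; for weight tensors, the tensor formats above make the same trade at each mode.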
1 code implementation • 15 Jul 2021 • Wei Liu, Pingping Zhang, Yinjie Lei, Xiaolin Huang, Jie Yang, Michael Ng
The effectiveness and superior performance of our approach are validated through comprehensive experiments in a range of applications.
no code implementations • 31 May 2021 • Tao Wang, Ruixin Zhang, Xingyu Chen, Kai Zhao, Xiaolin Huang, Yuge Huang, Shaoxin Li, Jilin Li, Feiyue Huang
Based on this observation, we propose the adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
2 code implementations • 31 May 2021 • Sizhe Chen, Zhehao Huang, Qinghua Tao, Xiaolin Huang
Deep Neural Networks (DNNs) are acknowledged as vulnerable to adversarial attacks, while the existing black-box attacks require extensive queries on the victim DNN to achieve high success rates.
no code implementations • 31 May 2021 • Zhixing Ye, Shaofei Qin, Sizhe Chen, Xiaolin Huang
As the name suggests, for a natural image, if we add the dominant pattern of a DNN to it, the output of this DNN is determined by the dominant pattern instead of the original image, i.e., the DNN's prediction is the same as the dominant pattern's.
1 code implementation • 2 May 2021 • Jing Huang, Xiaolin Huang, Jie Yang
Hypergraphs generalize graphs to model higher-order correlations among entities and have been successfully adopted in various research domains.
1 code implementation • 13 Apr 2021 • Qin Luo, Kun Fang, Jie Yang, Xiaolin Huang
Random Fourier Features (RFF) demonstrate well-appreciated performance in kernel approximation for large-scale situations but restrict kernels to be stationary and positive definite.
1 code implementation • 22 Mar 2021 • Lei Tan, Shutong Wu, Xiaolin Huang
In this paper, we introduce the Weighted Neural Tangent Kernel (WNTK), a generalized and improved tool, which can capture an over-parameterized NN's training dynamics under different optimizers.
1 code implementation • 20 Mar 2021 • Tao Li, Lei Tan, Qinghua Tao, Yipeng Liu, Xiaolin Huang
Deep neural networks (DNNs) usually contain massive numbers of parameters, but this redundancy suggests that they could be trained in low-dimensional subspaces.
2 code implementations • 20 Feb 2021 • Sizhe Chen, Qinghua Tao, Zhixing Ye, Xiaolin Huang
Deep neural networks can be fooled by adversarial examples with only trivial differences from the original samples.
no code implementations • 10 Dec 2020 • Yulei Qin, Hao Zheng, Yun Gu, Xiaolin Huang, Jie Yang, Lihui Wang, Feng Yao, Yue-Min Zhu, Guang-Zhong Yang
Training convolutional neural networks (CNNs) for segmentation of pulmonary airway, artery, and vein is challenging due to sparse supervisory signals caused by the severe class imbalance between tubular targets and background.
no code implementations • 3 Nov 2020 • Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens
In this paper, we develop a quadrature framework for large-scale kernel machines via a numerical integration representation.
2 code implementations • 23 Oct 2020 • Kun Fang, Qinghua Tao, Yingwen Wu, Tao Li, Jia Cai, Feipeng Cai, Xiaolin Huang, Jie Yang
Despite its efficiency in defending against specific attacks, adversarial training relies on data augmentation, which does not contribute to the robustness of the DNN itself and usually suffers from an accuracy drop on clean data as well as inefficiency against unknown attacks.
no code implementations • 22 Oct 2020 • Kexin Lv, Fan He, Xiaolin Huang, Jie Yang, Liming Chen
Nowadays, more and more datasets are stored in a distributed way for the sake of memory storage or data privacy.
no code implementations • 28 Sep 2020 • Kun Fang, Xiaolin Huang, Yingwen Wu, Tao Li, Jie Yang
To defend against adversarial attacks, we design a block containing multiple paths to learn robust features, with the parameters of these paths required to be orthogonal to each other.
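An orthogonality requirement between parallel paths is commonly enforced through a penalty on pairwise inner products of the paths' weight matrices. A generic construction for illustration (not the paper's exact loss):

```python
import numpy as np

def orthogonality_penalty(paths):
    """Penalize ||W_i^T W_j||_F^2 for every pair of path weight
    matrices, pushing the paths toward orthogonal feature subspaces."""
    total = 0.0
    for i in range(len(paths)):
        for j in range(i + 1, len(paths)):
            total += np.sum((paths[i].T @ paths[j]) ** 2)
    return total

# Orthogonal columns give zero penalty; identical paths do not.
W1 = np.array([[1.0], [0.0]])
W2 = np.array([[0.0], [1.0]])
```

Adding this term to the training loss costs only a few matrix products per step, which is why such regularizers are popular for diversifying parallel branches.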
1 code implementation • 10 Sep 2020 • Kun Fang, Fanghui Liu, Xiaolin Huang, Jie Yang
In the second-stage process, a linear learner is trained on the mapped random features.
1 code implementation • 16 Aug 2020 • Sizhe Chen, Fan He, Xiaolin Huang, Kun Zhang
This paper focuses on high-transferable adversarial attacks on detectors, which are hard to attack in a black-box manner, because of their multiple-output characteristics and the diversity across architectures.
no code implementations • 1 Jun 2020 • Fanghui Liu, Lei Shi, Xiaolin Huang, Jie Yang, Johan A. K. Suykens
In this paper, we study the asymptotic properties of regularized least squares with indefinite kernels in reproducing kernel Krein spaces (RKKS).
no code implementations • 30 May 2020 • Fanghui Liu, Xiaolin Huang, Yingyi Chen, Johan A. K. Suykens
In this paper, we attempt to solve a long-standing open question for non-positive definite (non-PD) kernels in the machine learning community: can a given non-PD kernel be decomposed into the difference of two PD kernels (termed positive decomposition)?
1 code implementation • 6 May 2020 • Fan He, Kexin Lv, Jie Yang, Xiaolin Huang
This letter proposes a one-shot algorithm for feature-distributed kernel PCA.
no code implementations • 23 Apr 2020 • Jia Cai, Kexin Lv, Junyi Huo, Xiaolin Huang, Jie Yang
To overcome this limitation, in this paper, we propose a sparse generalized canonical correlation analysis (GCCA), which could detect the latent relations of multiview data with sparse structures.
no code implementations • 23 Apr 2020 • Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens
This survey may serve as a gentle introduction to this topic, and as a users' guide for practitioners interested in applying the representative algorithms and understanding theoretical results under various technical assumptions.
no code implementations • 28 Mar 2020 • Mingyi Zhou, Jing Wu, Yipeng Liu, Xiaolin Huang, Shuaicheng Liu, Xiang Zhang, Ce Zhu
Then, the adversarial examples generated by the imitation model are utilized to fool the attacked model.
no code implementations • 19 Mar 2020 • Tianyi Zhang, Yun Gu, Xiaolin Huang, Enmei Tu, Jie Yang
In particular, we incorporate a disparity-based constraint mechanism into the generation of SR images in a deep neural network framework with additional atrous parallax-attention modules.
no code implementations • 4 Mar 2020 • Chengjin Sun, Sizhe Chen, Jia Cai, Xiaolin Huang
To implement the Type I attack, we destroy the original example by increasing the distance in input space while keeping the outputs similar, because different inputs may correspond to similar features owing to the properties of deep neural networks.
no code implementations • 4 Mar 2020 • Chengjin Sun, Sizhe Chen, Xiaolin Huang
We restrict the gradient from the reconstruction image to the original one so that the autoencoder is not sensitive to trivial perturbation produced by the adversarial attack.
no code implementations • 21 Jan 2020 • Zhixing Ye, Sizhe Chen, Peidong Zhang, Chengjin Sun, Xiaolin Huang
Adversarial attacks have long been developed for revealing the vulnerability of Deep Neural Networks (DNNs) by adding imperceptible perturbations to the input.
no code implementations • 16 Jan 2020 • Sizhe Chen, Zhengbao He, Chengjin Sun, Jie Yang, Xiaolin Huang
AoA enjoys a significant increase in transferability when the traditional cross entropy loss is replaced with the attention loss.
no code implementations • 29 Dec 2019 • Tianshu Chu, Qin Luo, Jie Yang, Xiaolin Huang
In addition, the results also demonstrate that higher-precision bottom layers can boost 1-bit network performance appreciably, owing to better preservation of the original image information, while lower-precision posterior layers contribute to the regularization of $k$-bit networks.
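The mixed-precision trade-off above rests on plain k-bit uniform quantization of layer weights. A generic scheme for illustration (the paper's mixed-precision layout is more refined):

```python
import numpy as np

def quantize_uniform(w, bits):
    """Map weights onto 2**bits evenly spaced levels spanning
    [min(w), max(w)]; fewer bits means coarser levels and more
    information loss."""
    lo, hi = w.min(), w.max()
    n_levels = 2 ** bits
    scale = (hi - lo) / (n_levels - 1)
    q = np.round((w - lo) / scale)
    return lo + q * scale

w = np.linspace(-1.0, 1.0, 9)
w1 = quantize_uniform(w, 1)   # 1-bit: only the two endpoint levels survive
w8 = quantize_uniform(w, 8)   # 8-bit: nearly lossless on this range
```

The contrast between `w1` and `w8` is the effect the entry describes: early layers kept at higher precision retain far more of the input information than 1-bit layers.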
1 code implementation • 16 Dec 2019 • Sizhe Chen, Xiaolin Huang, Zhengbao He, Chengjin Sun
Adversarial samples are similar to the clean ones, but are able to cheat the attacked DNN to produce incorrect predictions in high confidence.
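The classic way to craft such samples is a signed-gradient step on the loss (FGSM). A sketch on a logistic model for illustration (generic FGSM, not AoA's attention loss):

```python
import numpy as np

def fgsm_linear(x, y, w, eps):
    """FGSM-style sketch for a logistic model p = sigmoid(w @ x) with
    label y in {-1, +1}: step x by eps times the sign of the loss
    gradient to push the prediction toward the wrong class."""
    margin = y * (w @ x)
    # Gradient of log(1 + exp(-y * w @ x)) w.r.t. x is
    # -y * w * sigmoid(-margin); only its sign matters for FGSM.
    grad = -y * w / (1.0 + np.exp(margin))
    return x + eps * np.sign(grad)

w = np.array([2.0, -1.0])
x = np.array([1.0, 1.0])      # w @ x = 1 > 0: predicted class +1
x_adv = fgsm_linear(x, 1.0, w, eps=0.6)
```

Each coordinate moves by only `eps`, yet the signed step flips the sign of `w @ x_adv` and hence the prediction, matching the "similar inputs, confidently wrong outputs" behavior described above.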
no code implementations • 20 Nov 2019 • Fanghui Liu, Xiaolin Huang, Yudong Chen, Jie Yang, Johan A. K. Suykens
In this paper, we propose a fast surrogate leverage weighted sampling strategy to generate refined random Fourier features for kernel approximation.
no code implementations • 7 Oct 2019 • Jiaxuan Xie, Fanghui Liu, Kaijie Wang, Xiaolin Huang
On small datasets (less than 1000 samples), for which deep learning is generally not suitable due to overfitting, our method achieves superior performance compared to advanced kernel methods.
no code implementations • 19 Aug 2019 • Yixing Huang, Alexander Preuhs, Guenter Lauritsch, Michael Manhart, Xiaolin Huang, Andreas Maier
Robustness of deep learning methods for limited angle tomography is challenged by two major factors: a) due to insufficient training data, the network may not generalize well to unseen data; b) deep learning methods are sensitive to noise.
1 code implementation • 23 Jul 2019 • Wei Liu, Pingping Zhang, Yinjie Lei, Xiaolin Huang, Jie Yang, Ian Reid
In this paper, a non-convex, non-smooth optimization framework is proposed to achieve diverse smoothing natures, where even contradictory smoothing behaviors can be achieved.
no code implementations • 16 Jul 2019 • Yulei Qin, Mingjian Chen, Hao Zheng, Yun Gu, Mali Shen, Jie Yang, Xiaolin Huang, Yue-Min Zhu, Guang-Zhong Yang
Airway segmentation on CT scans is critical for pulmonary disease diagnosis and endobronchial navigation.
no code implementations • 15 Apr 2019 • Fanghui Liu, Chen Gong, Xiaolin Huang, Tao Zhou, Jie Yang, DaCheng Tao
In this paper, we propose a novel matching based tracker by investigating the relationship between template matching and the recent popular correlation filter based trackers (CFTs).
1 code implementation • 17 Feb 2019 • Sanli Tang, Fan He, Xiaolin Huang, Jie Yang
To train the deep model, we establish a dataset, namely DeepPCB, which contains 1,500 image pairs with annotations including the positions of six common types of PCB defects.
1 code implementation • 13 Oct 2018 • Yulei Qin, Juan Wen, Hao Zheng, Xiaolin Huang, Jie Yang, Ning Song, Yue-Min Zhu, Lingqian Wu, Guang-Zhong Yang
To expedite the diagnosis, we present a novel method named Varifocal-Net for simultaneous classification of chromosome type and polarity using deep convolutional networks.
no code implementations • 26 Sep 2018 • Fanghui Liu, Lei Shi, Xiaolin Huang, Jie Yang, Johan A. K. Suykens
This paper generalizes regularized regression problems in a hyper-reproducing kernel Hilbert space (hyper-RKHS), illustrates its utility for kernel learning and out-of-sample extensions, and proves asymptotic convergence results for the introduced regression models in an approximation theory view.
no code implementations • 3 Sep 2018 • Sanli Tang, Xiaolin Huang, Mingjian Chen, Chengjin Sun, Jie Yang
Despite the great success of deep neural networks, adversarial attacks can cheat some well-trained classifiers with small perturbations.
no code implementations • 31 Aug 2018 • Fanghui Liu, Xiaolin Huang, Chen Gong, Jie Yang, Li Li
Learning this data-adaptive matrix in a formulation-free strategy enlarges the margin between classes and thus improves the model flexibility.
no code implementations • 19 Dec 2017 • Yixing Huang, Oliver Taubmann, Xiaolin Huang, Viktor Haase, Guenter Lauritsch, Andreas Maier
Hence, the main purpose of this paper is to reduce streak artifacts at various scales.
no code implementations • 6 Jul 2017 • Fanghui Liu, Xiaolin Huang, Chen Gong, Jie Yang, Johan A. K. Suykens
Since the concave-convex procedure has to solve a sub-problem in each iteration, we propose a concave-inexact-convex procedure (CCICP) algorithm with an inexact solving scheme to accelerate the solving process.
no code implementations • 4 Jun 2017 • Xiaolin Huang, Ming Yan
For several nonconvex penalties, including minimax concave penalty (MCP), $\ell_0$ norm, and sorted $\ell_1$ penalty, we provide fast algorithms for finding the analytical solutions by solving the dual problem.
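Of the penalties listed, the $\ell_0$ norm has the simplest analytical proximal solution, namely hard thresholding. A textbook sketch (the paper derives such closed forms via the dual problem; this is not its algorithm):

```python
import numpy as np

def prox_l0(x, lam):
    """Analytic proximal operator of lam * ||.||_0:
    argmin_z 0.5*(z - x)**2 + lam*||z||_0 keeps x_i whenever
    x_i**2 / 2 > lam, i.e. |x_i| > sqrt(2*lam), and zeroes it
    otherwise (hard thresholding)."""
    return np.where(np.abs(x) > np.sqrt(2 * lam), x, 0.0)

x = np.array([0.1, -0.5, 2.0, -3.0])
z = prox_l0(x, lam=0.5)       # threshold is sqrt(2 * 0.5) = 1.0
```

Because the solution is coordinate-wise and closed-form, it evaluates in O(n), which is what makes such nonconvex penalties competitive with $\ell_1$ solvers in practice.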
2 code implementations • 19 Feb 2017 • Wei Xiao, Xiaolin Huang, Jorge Silva, Saba Emrani, Arin Chaudhuri
Robust PCA methods are typically batch algorithms that require loading all observations into memory before processing.
no code implementations • 3 Jan 2017 • Xiaolin Huang, Yan Xia, Lei Shi, Yixing Huang, Ming Yan, Joachim Hornegger, Andreas Maier
Aiming at overexposure correction for computed tomography (CT) reconstruction, in this paper we propose mixed one-bit compressive sensing (M1bit-CS) to acquire information from both regular and saturated measurements.
no code implementations • 14 May 2015 • Xiaolin Huang, Lei Shi, Ming Yan, Johan A. K. Suykens
The one-sided $\ell_1$ loss and the linear loss are two popular loss functions for 1bit-CS.
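In 1-bit compressive sensing the measurements are sign bits, $y = \mathrm{sign}(Ax)$, and the two losses named above compare $y$ with $Ax$ directly. A minimal sketch of both (illustrative helper names, not the paper's code):

```python
import numpy as np

def one_sided_l1_loss(y, Ax):
    """One-sided l1 loss: penalize only sign mismatches between the
    binary measurements y and the predictions A @ x; consistent signs
    incur zero cost."""
    return np.sum(np.maximum(0.0, -y * Ax))

def linear_loss(y, Ax):
    """Linear loss: reward correlation between A @ x and the sign
    measurements (minimized subject to a norm constraint in practice,
    since it is unbounded below otherwise)."""
    return -np.sum(y * Ax)

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
x_true = np.array([1.0, 0.0, -1.0, 0.0, 0.5])
y = np.sign(A @ x_true)
```

At the true signal every sign matches, so the one-sided $\ell_1$ loss vanishes while the linear loss is strictly negative; both properties are what makes them natural fidelity terms for 1bit-CS.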