no code implementations • 13 Feb 2024 • Qinghua Tao, Xiangming Xi, Jun Xu, Johan A. K. Suykens
For the linear inverse problem with sparsity constraints, the $l_0$ regularized problem is NP-hard, and existing approaches either use greedy algorithms to find near-optimal solutions or approximate the $l_0$ regularization with its convex counterparts.
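The convex-relaxation route mentioned above typically replaces the $l_0$ penalty with $l_1$ and solves the result by proximal gradient descent. A minimal illustrative sketch of this standard technique (ISTA, with hypothetical names; not the paper's algorithm):

```python
import numpy as np

def ista(A, y, lam=0.1, step=None, iters=200):
    """Iterative Shrinkage-Thresholding: minimize 0.5*||Ax - y||^2 + lam*||x||_1,
    the l1 convex relaxation of the NP-hard l0-regularized problem."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)                   # gradient of the smooth part
        z = x - step * g
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-thresholding
    return x
```

With a small penalty weight, the iterate converges to a slightly biased copy of the underlying sparse vector.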
no code implementations • 5 Feb 2024 • Kun Fang, Qinghua Tao, Kexin Lv, Mingzhen He, Xiaolin Huang, Jie Yang
Out-of-Distribution (OoD) detection is vital for the reliability of Deep Neural Networks (DNNs).
Out-of-Distribution (OoD) Detection
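A common baseline score for OoD detection is the maximum softmax probability: in-distribution inputs tend to receive confident predictions, OoD inputs less so. A minimal sketch of this standard baseline (not the detector proposed in the paper):

```python
import numpy as np

def msp_score(logits):
    """Maximum softmax probability: a classic OoD baseline score.
    Low confidence suggests the input may be out-of-distribution."""
    z = logits - logits.max(axis=-1, keepdims=True)  # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return p.max(axis=-1)
```

In practice a threshold on this score separates in-distribution from OoD inputs.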
no code implementations • 2 Feb 2024 • Yingyi Chen, Qinghua Tao, Francesco Tonin, Johan A. K. Suykens
In this work, we propose Kernel-Eigen Pair Sparse Variational Gaussian Processes (KEP-SVGP) for building uncertainty-aware self-attention where the asymmetry of attention kernels is tackled by Kernel SVD (KSVD) and a reduced complexity is acquired.
1 code implementation • 22 Oct 2023 • Kun Fang, Qinghua Tao, Xiaolin Huang, Jie Yang
Motivated by such diversities on OoD loss landscape across modes, we revisit the deep ensemble method for OoD detection through mode ensemble, leading to improved performance and benefiting the OoD detector with reduced variances.
Out-of-Distribution (OoD) Detection
1 code implementation • 30 Aug 2023 • Jiani Liu, Qinghua Tao, Ce Zhu, Yipeng Liu, Xiaolin Huang, Johan A. K. Suykens
In contrast to previous MTL frameworks, our decision function in the dual induces a weighted kernel function with a task-coupling term characterized by the similarities of the task-specific factors, better revealing the explicit relations across tasks in MTL.
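A weighted kernel with a task-coupling term can be illustrated generically: multiply a base kernel on the inputs by a task-similarity factor. This is a textbook-style sketch with hypothetical names, not the paper's factorized dual formulation:

```python
import numpy as np

def multitask_gram(X, tasks, S, k):
    """Task-coupled Gram matrix: K[i, j] = k(x_i, x_j) * S[t_i, t_j],
    where S holds pairwise task similarities."""
    n = len(X)
    return np.array([[k(X[i], X[j]) * S[tasks[i], tasks[j]]
                      for j in range(n)] for i in range(n)])
```

A similarity matrix S close to the identity decouples the tasks, while large off-diagonal entries share information across them.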
no code implementations • 12 Jun 2023 • Qinghua Tao, Francesco Tonin, Panagiotis Patrinos, Johan A. K. Suykens
We describe a nonlinear extension of the matrix Singular Value Decomposition through asymmetric kernels, namely KSVD.
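The core idea can be illustrated numerically: form a rectangular kernel matrix K[i, j] = kernel(x_i, z_j) between two data sources and take its truncated SVD. A minimal sketch with a hypothetical helper, not the KSVD algorithm of the paper:

```python
import numpy as np

def asymmetric_kernel_svd(X, Z, kernel, rank):
    """Build an asymmetric kernel matrix between two (possibly different)
    data sources and return its truncated SVD factors."""
    K = np.array([[kernel(x, z) for z in Z] for x in X])
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    return U[:, :rank], s[:rank], Vt[:rank]
```

Because the kernel need not be symmetric (nor square), left and right singular functions differ, unlike in kernel PCA.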
1 code implementation • NeurIPS 2023 • Yingyi Chen, Qinghua Tao, Francesco Tonin, Johan A. K. Suykens
To the best of our knowledge, this is the first work that provides a primal-dual representation for the asymmetric kernel in self-attention and successfully applies it to modeling and optimization.
Ranked #2 on Offline RL on D4RL
no code implementations • 4 Mar 2023 • Jiani Liu, Qinghua Tao, Ce Zhu, Yipeng Liu, Johan A. K. Suykens
Multitask learning (MTL) can utilize the relatedness between multiple tasks for performance improvement.
1 code implementation • 22 Feb 2023 • Francesco Tonin, Qinghua Tao, Panagiotis Patrinos, Johan A. K. Suykens
Principal Component Analysis (PCA) and its nonlinear extension Kernel PCA (KPCA) are widely used across science and industry for data analysis and dimensionality reduction.
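For reference, KPCA amounts to eigendecomposing the double-centered Gram matrix. A standard textbook sketch with an RBF kernel (not the specific formulation of the paper):

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA with an RBF kernel: double-center the Gram matrix,
    take its top eigenpairs, and return the projected coordinates."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # RBF Gram matrix
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                       # double-centering
    w, V = np.linalg.eigh(Kc)            # ascending eigenvalues
    idx = np.argsort(w)[::-1][:n_components]
    w, V = w[idx], V[:, idx]
    return V * np.sqrt(np.maximum(w, 0))  # coordinates in the principal subspace
```

The returned columns are mutually orthogonal, with squared norms equal to the corresponding eigenvalues.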
1 code implementation • 20 Nov 2022 • Kun Fang, Qinghua Tao, Yingwen Wu, Tao Li, Xiaolin Huang, Jie Yang
Randomized Smoothing (RS) is a promising technique for certified robustness, and recently in RS the ensemble of multiple deep neural networks (DNNs) has shown state-of-the-art performances.
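Randomized Smoothing classifies by majority vote of a base classifier over Gaussian perturbations of the input; an ensemble variant votes over several base models. A minimal sketch of the prediction step (certification bounds omitted; names are hypothetical):

```python
import numpy as np

def smoothed_predict(f, x, sigma=0.25, n=1000, rng=None):
    """Randomized Smoothing prediction: majority vote of base classifier f
    over n Gaussian perturbations of input x with noise level sigma."""
    rng = np.random.default_rng(rng)
    noise = rng.normal(scale=sigma, size=(n,) + np.shape(x))
    votes = np.array([f(x + eps) for eps in noise])
    classes, counts = np.unique(votes, return_counts=True)
    return classes[np.argmax(counts)]
```

The certified radius then follows from a confidence bound on the vote probabilities, which this sketch leaves out.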
1 code implementation • 25 Jul 2022 • Yingyi Chen, Xi Shen, Yahui Liu, Qinghua Tao, Johan A. K. Suykens
In this paper, we explore solving jigsaw puzzle as a self-supervised auxiliary loss in ViT for image classification, named Jigsaw-ViT.
Ranked #1 on Learning with noisy labels on ANIMAL
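The jigsaw auxiliary task can be sketched at the data level: split an image into patches, permute them, and ask an auxiliary head to predict the permutation. A hypothetical data-preparation sketch, not the Jigsaw-ViT loss itself:

```python
import numpy as np

def make_jigsaw(img, grid=3, rng=None):
    """Create a jigsaw view: cut img into grid x grid patches, shuffle them,
    and return the shuffled image plus the permutation to be predicted."""
    rng = np.random.default_rng(rng)
    h, w = img.shape[:2]
    ph, pw = h // grid, w // grid
    patches = [img[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
               for i in range(grid) for j in range(grid)]
    perm = rng.permutation(grid * grid)
    rows = [np.concatenate([patches[perm[i * grid + j]] for j in range(grid)], axis=1)
            for i in range(grid)]
    return np.concatenate(rows, axis=0), perm
```

The permutation label supplies the self-supervised signal alongside the usual classification loss.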
1 code implementation • 23 Jul 2022 • Qinghua Tao, Francesco Tonin, Panagiotis Patrinos, Johan A. K. Suykens
In our method, the dual variables, playing the role of hidden features, are shared by all views to construct a common latent space, coupling the views by learning projections from view-specific spaces.
no code implementations • 18 Jun 2022 • Qinghua Tao, Li Li, Xiaolin Huang, Xiangming Xi, Shuning Wang, Johan A. K. Suykens
To apply PWLNN methods, both the representation and the learning have long been studied.
1 code implementation • 26 May 2022 • Tao Li, Zhehao Huang, Yingwen Wu, Zhengbao He, Qinghua Tao, Xiaolin Huang, Chih-Jen Lin
Training deep neural networks (DNNs) in low-dimensional subspaces is a promising direction for achieving efficient training and better generalization performance.
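Training in a low-dimensional subspace can be sketched as ordinary gradient descent with the update direction projected through a fixed basis P. A generic sketch (the papers extract P from training trajectories; here P is simply assumed given):

```python
import numpy as np

def subspace_gd_step(theta, grad_fn, P, lr=0.1):
    """One gradient step restricted to a low-dimensional subspace:
    the update direction is the gradient projected onto span(P),
    where P is a d x k orthonormal basis with k << d."""
    g = grad_fn(theta)
    g_sub = P @ (P.T @ g)   # projection of the gradient onto the subspace
    return theta - lr * g_sub
```

Components of the gradient orthogonal to the subspace are discarded, so only k effective degrees of freedom are trained.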
1 code implementation • 24 May 2022 • Sizhe Chen, Zhehao Huang, Qinghua Tao, Yingwen Wu, Cihang Xie, Xiaolin Huang
Score-based query attacks (SQAs) pose practical threats to deep neural networks by crafting adversarial perturbations within dozens of queries, using only the model's output scores.
2 code implementations • 31 May 2021 • Sizhe Chen, Zhehao Huang, Qinghua Tao, Xiaolin Huang
Deep Neural Networks (DNNs) are known to be vulnerable to adversarial attacks, but existing black-box attacks require extensive queries on the victim DNN to achieve high success rates.
no code implementations • 21 Apr 2021 • Huaxin Pei, Yi Zhang, Qinghua Tao, Shuo Feng, Li Li
Cooperative driving at isolated intersections has attracted great interest and has been widely discussed in recent years.
1 code implementation • 20 Mar 2021 • Tao Li, Lei Tan, Qinghua Tao, Yipeng Liu, Xiaolin Huang
Deep neural networks (DNNs) usually contain massive numbers of parameters, but much of this capacity is redundant, suggesting that DNNs could be trained in low-dimensional subspaces.
no code implementations • 20 Feb 2021 • Sizhe Chen, Qinghua Tao, Zhixing Ye, Xiaolin Huang
Deep neural networks can be fooled by adversarial examples that differ only trivially from the original samples.
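A classic way to craft such a perturbation is the Fast Gradient Sign Method, which moves each input coordinate by eps in the direction of the loss gradient's sign. An illustrative sketch of that standard baseline, unrelated to the specific attack in this paper:

```python
import numpy as np

def fgsm(x, grad_loss, eps=0.03):
    """Fast Gradient Sign Method: perturb each coordinate of x by eps
    in the sign of the loss gradient, clipping to the valid input range."""
    return np.clip(x + eps * np.sign(grad_loss(x)), 0.0, 1.0)
```

Despite the perturbation being imperceptibly small per coordinate, it can change the model's prediction.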
2 code implementations • 23 Oct 2020 • Kun Fang, Qinghua Tao, Yingwen Wu, Tao Li, Jia Cai, Feipeng Cai, Xiaolin Huang, Jie Yang
In this way, the proposed DIO augments the model and enhances the robustness of the DNN itself, as the learned features can be corrected by these mutually-orthogonal paths.
no code implementations • 30 Sep 2019 • Yusen Huo, Qinghua Tao, Jianming Hu
In the proposed model, a multi-task learning structure is used to learn the cooperative policy.
no code implementations • 15 May 2019 • Jun Xu, Qinghua Tao, Zhen Li, Xiangming Xi, Johan A. K. Suykens, Shuning Wang
It is proved that every EHH neural network has an equivalent adaptive hinging hyperplanes (AHH) tree; AHH was also developed from the HH model and has found good applications in system identification.