Search Results for author: Qinghua Tao

Found 22 papers, 12 papers with code

Sparsity via Sparse Group $k$-max Regularization

no code implementations13 Feb 2024 Qinghua Tao, Xiangming Xi, Jun Xu, Johan A. K. Suykens

For the linear inverse problem with sparsity constraints, the $l_0$ regularized problem is NP-hard, and existing approaches either utilize greedy algorithms to find almost-optimal solutions or approximate the $l_0$ regularization with its convex counterparts.
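The sparse group $k$-max regularizer itself is not reproduced here; as a hedged point of contrast, the sketch below shows the standard convex relaxation the abstract alludes to: replacing $l_0$ with $l_1$ and solving by ISTA (proximal gradient with soft-thresholding). All sizes, the regularization weight, and the iteration count are illustrative.

```python
import numpy as np

def ista(A, y, lam=0.1, step=None, iters=500):
    """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by ISTA:
    a gradient step on the quadratic followed by soft-thresholding."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * A.T @ (A @ x - y)                           # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)   # prox of l1
    return x

# recover a 3-sparse vector from 40 linear measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 50, 97]] = [1.0, -2.0, 1.5]
y = A @ x_true
x_hat = ista(A, y, lam=0.05)
```

The soft-thresholding step produces exact zeros, which is why the $l_1$ relaxation yields sparse solutions where plain least squares would not.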

Self-Attention through Kernel-Eigen Pair Sparse Variational Gaussian Processes

no code implementations2 Feb 2024 Yingyi Chen, Qinghua Tao, Francesco Tonin, Johan A. K. Suykens

In this work, we propose Kernel-Eigen Pair Sparse Variational Gaussian Processes (KEP-SVGP) for building uncertainty-aware self-attention where the asymmetry of attention kernels is tackled by Kernel SVD (KSVD) and a reduced complexity is acquired.

Gaussian Processes, Variational Inference

Revisiting Deep Ensemble for Out-of-Distribution Detection: A Loss Landscape Perspective

1 code implementation22 Oct 2023 Kun Fang, Qinghua Tao, Xiaolin Huang, Jie Yang

Motivated by such diversities on OoD loss landscape across modes, we revisit the deep ensemble method for OoD detection through mode ensemble, leading to improved performance and benefiting the OoD detector with reduced variances.

Out-of-Distribution Detection, Out of Distribution (OOD) Detection
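As a hedged sketch of the general idea (ensembling a detection score over independently trained modes, not the paper's exact procedure), the snippet below averages the maximum-softmax-probability score across several models; the toy logits and all names are illustrative.

```python
import numpy as np

def msp_score(logits):
    """Maximum softmax probability: higher = more in-distribution."""
    z = logits - logits.max(axis=1, keepdims=True)   # for numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p.max(axis=1)

def ensemble_ood_scores(logits_per_mode):
    """Average the per-sample detection score over independently trained modes."""
    return np.mean([msp_score(l) for l in logits_per_mode], axis=0)

rng = np.random.default_rng(1)
# toy logits from 3 "modes": in-distribution inputs are confidently
# classified, OoD inputs produce near-uniform logits
id_logits  = [rng.normal(0, 1, (5, 10)) + 6 * np.eye(10)[rng.integers(0, 10, 5)]
              for _ in range(3)]
ood_logits = [rng.normal(0, 1, (5, 10)) for _ in range(3)]
s_id = ensemble_ood_scores(id_logits)
s_ood = ensemble_ood_scores(ood_logits)
```

Thresholding the averaged score then separates the two populations; ensembling over modes reduces the variance of the score, which is the effect the abstract highlights.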

Low-Rank Multitask Learning based on Tensorized SVMs and LSSVMs

1 code implementation30 Aug 2023 Jiani Liu, Qinghua Tao, Ce Zhu, Yipeng Liu, Xiaolin Huang, Johan A. K. Suykens

In contrast to previous MTL frameworks, our decision function in the dual induces a weighted kernel function with a task-coupling term characterized by the similarities of the task-specific factors, better revealing the explicit relations across tasks in MTL.

Nonlinear SVD with Asymmetric Kernels: feature learning and asymmetric Nyström method

no code implementations12 Jun 2023 Qinghua Tao, Francesco Tonin, Panagiotis Patrinos, Johan A. K. Suykens

We describe a nonlinear extension of the matrix Singular Value Decomposition through asymmetric kernels, namely KSVD.
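The paper's feature-map view and asymmetric Nyström method are not shown in the snippet; as a hedged illustration of the core object only, the code below builds an asymmetric kernel matrix between two sample sets and takes its SVD, yielding two distinct sets of nonlinear features. The particular kernel is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 5))   # "row" samples
Z = rng.standard_normal((20, 5))   # "column" samples

# a non-symmetric mixing matrix makes kappa(x, z) != kappa(z, x)
W = rng.standard_normal((5, 5))
K = np.tanh(X @ W @ Z.T)           # K[i, j] = kappa(x_i, z_j)

# SVD of the asymmetric kernel matrix: two sets of singular vectors,
# unlike the single eigenvector set of a symmetric (Mercer) kernel
U, s, Vt = np.linalg.svd(K, full_matrices=False)
r = 4
features_rows = U[:, :r] * s[:r]   # nonlinear features for the X-side samples
features_cols = Vt[:r].T * s[:r]   # nonlinear features for the Z-side samples
```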

Primal-Attention: Self-attention through Asymmetric Kernel SVD in Primal Representation

1 code implementation NeurIPS 2023 Yingyi Chen, Qinghua Tao, Francesco Tonin, Johan A. K. Suykens

To the best of our knowledge, this is the first work that provides a primal-dual representation for the asymmetric kernel in self-attention and successfully applies it to modeling and optimization.

D4RL, Long-range modeling +2

Tensorized LSSVMs for Multitask Regression

no code implementations4 Mar 2023 Jiani Liu, Qinghua Tao, Ce Zhu, Yipeng Liu, Johan A. K. Suykens

Multitask learning (MTL) can utilize the relatedness between multiple tasks for performance improvement.

Regression

Deep Kernel Principal Component Analysis for Multi-level Feature Learning

1 code implementation22 Feb 2023 Francesco Tonin, Qinghua Tao, Panagiotis Patrinos, Johan A. K. Suykens

Principal Component Analysis (PCA) and its nonlinear extension Kernel PCA (KPCA) are widely used across science and industry for data analysis and dimensionality reduction.

Dimensionality Reduction
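The deep, multi-level construction of the paper is not sketched here; as a hedged baseline, the snippet implements ordinary kernel PCA (the method being extended): eigendecompose the double-centered RBF Gram matrix and read off embedding coordinates. The kernel width and data sizes are illustrative.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Classical kernel PCA with an RBF kernel."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                        # center in feature space
    w, V = np.linalg.eigh(Kc)             # eigenvalues in ascending order
    w = w[::-1][:n_components]
    V = V[:, ::-1][:, :n_components]
    return V * np.sqrt(np.maximum(w, 0))  # embedding coordinates

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
emb = kernel_pca(X, n_components=2)
```

Because the Gram matrix is centered, each embedding coordinate sums to zero over the dataset, matching the zero-mean property of linear PCA scores.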

On Multi-head Ensemble of Smoothed Classifiers for Certified Robustness

1 code implementation20 Nov 2022 Kun Fang, Qinghua Tao, Yingwen Wu, Tao Li, Xiaolin Huang, Jie Yang

Randomized Smoothing (RS) is a promising technique for certified robustness, and recently in RS the ensemble of multiple deep neural networks (DNNs) has shown state-of-the-art performances.
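The multi-head ensemble construction of the paper is not reproduced; as a hedged sketch of plain randomized smoothing only, the snippet classifies many Gaussian-perturbed copies of an input and takes a majority vote. The base classifier, noise level, and sample count are illustrative.

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n=200, rng=None):
    """Randomized smoothing: classify n Gaussian-perturbed copies of x
    and return the majority-vote class with its empirical frequency."""
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
    preds = np.array([base_classifier(x + e) for e in noise])
    counts = np.bincount(preds)
    top = int(np.argmax(counts))
    return top, counts[top] / n

# toy base classifier: sign of the first coordinate
clf = lambda v: int(v[0] > 0)
label, freq = smoothed_predict(clf, np.array([1.0, 0.0]), sigma=0.25)
```

The empirical vote frequency is what certification procedures turn into a radius: the more one-sided the vote, the larger the certified $\ell_2$ ball.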

Jigsaw-ViT: Learning Jigsaw Puzzles in Vision Transformer

1 code implementation25 Jul 2022 Yingyi Chen, Xi Shen, Yahui Liu, Qinghua Tao, Johan A. K. Suykens

In this paper, we explore solving jigsaw puzzle as a self-supervised auxiliary loss in ViT for image classification, named Jigsaw-ViT.

Classification, Domain Generalization +2
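The ViT training loop and the auxiliary prediction head are not shown; as a hedged sketch of just the self-supervised pretext task, the snippet splits an image into patches, shuffles them, and returns the permutation targets such a head would be trained to predict. The grid size and helper name are illustrative.

```python
import numpy as np

def make_jigsaw(img, grid=2, rng=None):
    """Split an image into grid x grid patches, shuffle them, and return
    the shuffled image plus the patch index placed at each position
    (the targets for an auxiliary position-prediction loss)."""
    rng = rng or np.random.default_rng(0)
    h, w = img.shape[0] // grid, img.shape[1] // grid
    patches = [img[i*h:(i+1)*h, j*w:(j+1)*w]
               for i in range(grid) for j in range(grid)]
    perm = rng.permutation(grid * grid)
    rows = [np.concatenate([patches[perm[r*grid + c]] for c in range(grid)], axis=1)
            for r in range(grid)]
    return np.concatenate(rows, axis=0), perm

img = np.arange(16.0).reshape(4, 4)
shuffled, perm = make_jigsaw(img, grid=2)
```

Solving this puzzle costs no extra labels, which is why it can be added as an auxiliary loss on top of the standard classification objective.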

Tensor-based Multi-view Spectral Clustering via Shared Latent Space

1 code implementation23 Jul 2022 Qinghua Tao, Francesco Tonin, Panagiotis Patrinos, Johan A. K. Suykens

In our method, the dual variables, playing the role of hidden features, are shared by all views to construct a common latent space, coupling the views by learning projections from view-specific spaces.

Clustering

Piecewise Linear Neural Networks and Deep Learning

no code implementations18 Jun 2022 Qinghua Tao, Li Li, Xiaolin Huang, Xiangming Xi, Shuning Wang, Johan A. K. Suykens

To apply piecewise linear neural network (PWLNN) methods, both the representation and the learning of PWLNNs have long been studied.
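Among the representations such surveys cover is the hinging hyperplane (HH); as a hedged illustration, the snippet implements the basic HH basis function, the maximum of two affine pieces, of which ReLU is the special case where one piece is fixed to zero. The function names and test values are illustrative.

```python
import numpy as np

def hinging_hyperplane(x, w1, b1, w2, b2):
    """Basic hinging-hyperplane (HH) basis: max of two affine functions."""
    return np.maximum(x @ w1 + b1, x @ w2 + b2)

def relu_unit(x, w, b):
    """ReLU is the HH special case with the second piece fixed to zero."""
    return hinging_hyperplane(x, w, b, np.zeros_like(w), 0.0)

x = np.array([1.0, 2.0])
# max(1 + 2 - 5, 0.5 - 1 + 1) = max(-2, 0.5) = 0.5
h = hinging_hyperplane(x, np.array([1.0, 1.0]), -5.0, np.array([0.5, -0.5]), 1.0)
# max(1 + 2 - 5, 0) = 0
r = relu_unit(x, np.array([1.0, 1.0]), -5.0)
```

Sums and nestings of such hinges produce continuous piecewise linear functions, which is the function class PWLNNs parameterize.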

Trainable Weight Averaging: A General Approach for Subspace Training

1 code implementation26 May 2022 Tao Li, Zhehao Huang, Yingwen Wu, Zhengbao He, Qinghua Tao, Xiaolin Huang, Chih-Jen Lin

Training deep neural networks (DNNs) in low-dimensional subspaces is a promising direction for achieving efficient training and better generalization performance.

Dimensionality Reduction, Efficient Neural Network +3
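As a hedged toy version of the idea (constraining training to the span of a few checkpoints and optimizing only the mixing coefficients, not the paper's DNN setting), the snippet fits a linear-regression loss over learned averaging weights. All data, sizes, and the step-size rule are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy objective: linear regression loss L(w) = mean((Xw - y)^2)
X = rng.standard_normal((100, 5))
w_star = rng.standard_normal(5)
y = X @ w_star

# "checkpoints": noisy solutions collected near a training run
W = np.stack([w_star + 0.5 * rng.standard_normal(5) for _ in range(4)])  # (k, d)

# trainable weight averaging: the weights live in the span of the
# checkpoints; only the k mixing coefficients alpha are optimized
alpha = np.full(4, 0.25)                     # start from the plain average
H = 2 * W @ (X.T @ X / len(X)) @ W.T         # Hessian of L in alpha
lr = 0.9 / np.linalg.norm(H, 2)              # safe step for this quadratic
for _ in range(500):
    resid = X @ (alpha @ W) - y
    alpha -= lr * (W @ (2 * X.T @ resid / len(X)))   # chain rule: W @ dL/dw

loss_avg = np.mean((X @ W.mean(axis=0) - y) ** 2)
loss_twa = np.mean((X @ (alpha @ W) - y) ** 2)
```

Optimizing k coefficients instead of d weights is what makes the subspace view efficient: here the trainable dimension drops from 5 to 4, and in a DNN from millions to the number of checkpoints.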

Adversarial Attack on Attackers: Post-Process to Mitigate Black-Box Score-Based Query Attacks

1 code implementation24 May 2022 Sizhe Chen, Zhehao Huang, Qinghua Tao, Yingwen Wu, Cihang Xie, Xiaolin Huang

The score-based query attacks (SQAs) pose practical threats to deep neural networks by crafting adversarial perturbations within dozens of queries, only using the model's output scores.

Adversarial Attack

Query Attack by Multi-Identity Surrogates

2 code implementations31 May 2021 Sizhe Chen, Zhehao Huang, Qinghua Tao, Xiaolin Huang

Deep Neural Networks (DNNs) are acknowledged as vulnerable to adversarial attacks, while the existing black-box attacks require extensive queries on the victim DNN to achieve high success rates.

Distributed Cooperative Driving in Multi-Intersection Road Networks

no code implementations21 Apr 2021 Huaxin Pei, Yi Zhang, Qinghua Tao, Shuo Feng, Li Li

Cooperative driving at isolated intersections has attracted great interest and has been well discussed in recent years.

Low Dimensional Landscape Hypothesis is True: DNNs can be Trained in Tiny Subspaces

1 code implementation20 Mar 2021 Tao Li, Lei Tan, Qinghua Tao, Yipeng Liu, Xiaolin Huang

Deep neural networks (DNNs) usually contain a massive number of parameters, but with redundancy suggesting that DNNs could be trained in low-dimensional subspaces.

Dimensionality Reduction

Measuring the Transferability of $\ell_\infty$ Attacks by the $\ell_2$ Norm

no code implementations20 Feb 2021 Sizhe Chen, Qinghua Tao, Zhixing Ye, Xiaolin Huang

Deep neural networks could be fooled by adversarial examples with trivial differences to original samples.
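The paper's transferability metric is not reproduced here; as a hedged illustration of why the $\ell_2$ norm is informative for $\ell_\infty$-bounded attacks, the snippet shows two perturbations with the same $\ell_\infty$ norm whose $\ell_2$ norms differ by a factor of $\sqrt{d}$. The budget and dimensions mimic a common CIFAR-10 setting but are illustrative.

```python
import numpy as np

eps, d = 8 / 255, 3 * 32 * 32     # a typical l_inf budget on 3x32x32 images
rng = np.random.default_rng(0)

# same l_inf norm, very different l2 norm:
delta_sign = eps * np.sign(rng.standard_normal(d))   # saturates every pixel
delta_small = np.zeros(d)
delta_small[0] = eps                                 # perturbs a single pixel

linf = lambda v: np.abs(v).max()
l2 = lambda v: np.linalg.norm(v)
```

Under a fixed $\ell_\infty$ budget $\epsilon$, the $\ell_2$ norm can range over $(0, \epsilon\sqrt{d}]$, so two attacks that look identical under $\ell_\infty$ can be far apart in $\ell_2$.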

Towards Robust Neural Networks via Orthogonal Diversity

2 code implementations23 Oct 2020 Kun Fang, Qinghua Tao, Yingwen Wu, Tao Li, Jia Cai, Feipeng Cai, Xiaolin Huang, Jie Yang

In this way, the proposed DIO augments the model and enhances the robustness of the DNN itself, as the learned features can be corrected by these mutually orthogonal paths.

Adversarial Robustness, Data Augmentation

Efficient hinging hyperplanes neural network and its application in nonlinear system identification

no code implementations15 May 2019 Jun Xu, Qinghua Tao, Zhen Li, Xiangming Xi, Johan A. K. Suykens, Shuning Wang

It is proved that for every EHH neural network, there is an equivalent adaptive hinging hyperplanes (AHH) tree, which was also proposed based on the HH model and has found good applications in system identification.

Regression, Variable Selection
