Search Results for author: Xuanqing Liu

Found 22 papers, 10 papers with code

Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function

no code implementations 18 Oct 2023 Liu Liu, Xuanqing Liu, Cho-Jui Hsieh, Dacheng Tao

In this paper, we explore a family of stochastic TR and ARC methods that can simultaneously accommodate inexact computation of the Hessian matrix, gradient, and function values.

Second-order methods Stochastic Optimization

FastEnsemble: Benchmarking and Accelerating Ensemble-based Uncertainty Estimation for Image-to-Image Translation

no code implementations 29 Sep 2021 Xuanqing Liu, Sara Imboden, Marie Payne, Neil Lin, Cho-Jui Hsieh

In addition, we introduce FastEnsemble, a fast ensemble method that requires less than $8\%$ of the full-ensemble training time to generate a new ensemble member.

Benchmarking Image-to-Image Translation +2

Label Disentanglement in Partition-based Extreme Multilabel Classification

no code implementations NeurIPS 2021 Xuanqing Liu, Wei-Cheng Chang, Hsiang-Fu Yu, Cho-Jui Hsieh, Inderjit S. Dhillon

Partition-based methods are increasingly used in extreme multi-label classification (XMC) problems due to their scalability to large output spaces (e.g., millions or more).

Classification Disentanglement +1

How much progress have we made in neural network training? A New Evaluation Protocol for Benchmarking Optimizers

no code implementations 19 Oct 2020 Yuanhao Xiong, Xuanqing Liu, Li-Cheng Lan, Yang You, Si Si, Cho-Jui Hsieh

For end-to-end efficiency, rather than assuming random hyperparameter tuning as in previous work (which over-emphasizes the tuning time), we propose to evaluate with a bandit hyperparameter tuning strategy.

Benchmarking Graph Mining

Improving the Speed and Quality of GAN by Adversarial Training

1 code implementation 7 Aug 2020 Jiachen Zhong, Xuanqing Liu, Cho-Jui Hsieh

Generative adversarial networks (GAN) have shown remarkable results in image generation tasks.

Image Generation

Provably Robust Metric Learning

2 code implementations NeurIPS 2020 Lu Wang, Xuanqing Liu, Jin-Feng Yi, Yuan Jiang, Cho-Jui Hsieh

Metric learning is an important family of algorithms for classification and similarity search, but the robustness of learned metrics against small adversarial perturbations is less studied.

Metric Learning

How Does Noise Help Robustness? Explanation and Exploration under the Neural SDE Framework

no code implementations CVPR 2020 Xuanqing Liu, Tesi Xiao, Si Si, Qin Cao, Sanjiv Kumar, Cho-Jui Hsieh

In this paper, we propose a new continuous neural network framework called Neural Stochastic Differential Equation (Neural SDE), which naturally incorporates various commonly used regularization mechanisms based on random noise injection.

Learning to Encode Position for Transformer with Continuous Dynamical Model

1 code implementation ICML 2020 Xuanqing Liu, Hsiang-Fu Yu, Inderjit Dhillon, Cho-Jui Hsieh

The main reason is that position information among input units is not inherently encoded, i.e., the models are permutation equivariant; this explains why all of the existing models are accompanied by a sinusoidal encoding/embedding layer at the input.
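
For context, the sinusoidal layer referred to above can be sketched as follows. This is a minimal NumPy version of the standard fixed encoding (the baseline this paper replaces with a continuous dynamical model); the function name and sizes are illustrative:

```python
import numpy as np

def sinusoidal_encoding(max_len, d_model):
    """Standard fixed sinusoidal position encoding: even dimensions get sin,
    odd dimensions get cos, over a geometric ladder of frequencies."""
    pos = np.arange(max_len)[:, None]            # positions 0..max_len-1
    i = np.arange(d_model // 2)[None, :]         # frequency index
    freq = 1.0 / (10000.0 ** (2 * i / d_model))  # geometrically spaced frequencies
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(pos * freq)
    pe[:, 1::2] = np.cos(pos * freq)
    return pe

pe = sinusoidal_encoding(50, 16)                 # one row per position
```

Because the encoding is a fixed function of the position index, it breaks permutation equivariance without adding trainable parameters.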

Inductive Bias Linguistic Acceptability +4

Gradient Boosting Neural Networks: GrowNet

1 code implementation 19 Feb 2020 Sarkhan Badirli, Xuanqing Liu, Zhengming Xing, Avradeep Bhowmik, Khoa Doan, Sathiya S. Keerthi

A novel gradient boosting framework is proposed in which shallow neural networks are employed as "weak learners".
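
The boosting idea can be sketched as a toy regression under squared loss. This is not the GrowNet implementation (which additionally feeds each new learner the penultimate features of earlier learners); all names and hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_shallow_net(X, r, hidden=8, lr=0.1, steps=200):
    """Fit a one-hidden-layer tanh net to residuals r by plain gradient descent."""
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    w2 = rng.normal(0, 0.5, hidden);      b2 = 0.0
    for _ in range(steps):
        H = np.tanh(X @ W1 + b1)           # hidden activations
        pred = H @ w2 + b2
        g = 2.0 * (pred - r) / n           # d(mse)/d(pred)
        gw2 = H.T @ g; gb2 = g.sum()
        gH = np.outer(g, w2) * (1 - H**2)  # backprop through tanh
        gW1 = X.T @ gH; gb1 = gH.sum(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        w2 -= lr * gw2; b2 -= lr * gb2
    return lambda Xq: np.tanh(Xq @ W1 + b1) @ w2 + b2

def boost_fit(X, y, n_stages=10, shrinkage=0.5):
    """Boosting loop: each stage fits a shallow net to the current residuals,
    i.e., the negative gradient of the squared loss."""
    learners, pred = [], np.zeros(len(y))
    for _ in range(n_stages):
        f = fit_shallow_net(X, y - pred)
        learners.append(f)
        pred += shrinkage * f(X)
    return lambda Xq: shrinkage * sum(f(Xq) for f in learners)

# toy regression target: y = sin of the first feature
X = rng.uniform(-2, 2, (200, 2))
y = np.sin(X[:, 0])
model = boost_fit(X, y)
mse = np.mean((model(X) - y) ** 2)
```

Each stage only needs to fit the residuals left by its predecessors, which is why very small networks suffice as weak learners.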

Learning-To-Rank regression

GraphDefense: Towards Robust Graph Convolutional Networks

1 code implementation 11 Nov 2019 Xiaoyun Wang, Xuanqing Liu, Cho-Jui Hsieh

Inspired by previous work on adversarial defense for deep neural networks, especially the adversarial training algorithm, we propose a method called GraphDefense to defend against adversarial perturbations.

Adversarial Defense

Evaluating the Robustness of Nearest Neighbor Classifiers: A Primal-Dual Perspective

1 code implementation 10 Jun 2019 Lu Wang, Xuanqing Liu, Jin-Feng Yi, Zhi-Hua Zhou, Cho-Jui Hsieh

Furthermore, we show that dual solutions of these QP problems give a valid lower bound on the adversarial perturbation, which can be used for formal robustness verification, yielding a unified view of attack and verification for nearest-neighbor models.


Neural SDE: Stabilizing Neural ODE Networks with Stochastic Noise

1 code implementation 5 Jun 2019 Xuanqing Liu, Tesi Xiao, Si Si, Qin Cao, Sanjiv Kumar, Cho-Jui Hsieh

In this paper, we propose a new continuous neural network framework called Neural Stochastic Differential Equation (Neural SDE) network, which naturally incorporates various commonly used regularization mechanisms based on random noise injection.
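
A minimal sketch of such a forward pass, assuming an Euler–Maruyama discretization with a single tanh layer as the drift and constant isotropic noise as the diffusion (all names and sizes are illustrative, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, W):
    """Drift network: a single tanh layer playing the role of the dynamics."""
    return np.tanh(x @ W)

def neural_sde_forward(x, W, sigma=0.1, T=1.0, steps=20, train=True):
    """Euler–Maruyama integration of dx = f(x) dt + sigma dB_t.
    Noise is injected during training; sigma = 0 (or train=False here)
    recovers a deterministic Neural-ODE-style forward pass."""
    dt = T / steps
    for _ in range(steps):
        noise = rng.normal(size=x.shape) if train else 0.0
        x = x + f(x, W) * dt + sigma * np.sqrt(dt) * noise
    return x

d = 4
W = rng.normal(0, 0.5, (d, d))
x0 = rng.normal(size=(3, d))
out_train = neural_sde_forward(x0, W, train=True)   # stochastic pass
out_eval = neural_sde_forward(x0, W, train=False)   # deterministic pass
```

The `sigma * sqrt(dt)` scaling is what makes the injected noise a discretization of Brownian motion rather than an arbitrary perturbation.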

Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks

6 code implementations KDD 2019 Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, Cho-Jui Hsieh

Furthermore, Cluster-GCN allows us to train much deeper GCNs without much time and memory overhead, which leads to improved prediction accuracy: using a 5-layer Cluster-GCN, we achieve a state-of-the-art test F1 score of 99.36 on the PPI dataset, while the previous best result was 98.71 by [16].
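
The training scheme can be sketched on a toy graph as follows. Real Cluster-GCN partitions with METIS and samples several clusters per batch, so the graph, partition, and one-layer model here are simplified stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy graph: two 4-node communities joined by one edge
A = np.zeros((8, 8))
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (4, 5), (5, 6), (6, 7), (4, 7), (3, 4)]
for i, j in edges:
    A[i, j] = A[j, i] = 1
X = rng.normal(size=(8, 5)) * 0.3
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
X[:, 0] += np.where(y == 0, -1.0, 1.0)      # features weakly encode the community
clusters = [np.array([0, 1, 2, 3]), np.array([4, 5, 6, 7])]  # stand-in for METIS

def norm_adj(A_sub):
    """Symmetric normalization with self-loops: D^{-1/2}(A+I)D^{-1/2}."""
    A_hat = A_sub + np.eye(len(A_sub))
    D = np.diag(A_hat.sum(1) ** -0.5)
    return D @ A_hat @ D

W = rng.normal(0, 0.1, (5, 2))              # one-layer GCN, 2 classes
lr = 0.5
for epoch in range(100):
    for c in clusters:                      # each step touches one cluster's subgraph
        A_c = norm_adj(A[np.ix_(c, c)])
        Z = A_c @ X[c] @ W                  # propagation restricted to the cluster
        P = np.exp(Z - Z.max(1, keepdims=True))
        P /= P.sum(1, keepdims=True)        # softmax
        G = P.copy(); G[np.arange(len(c)), y[c]] -= 1
        W -= lr * X[c].T @ A_c.T @ G / len(c)   # cross-entropy gradient

# full-graph predictions after clustered training
acc = ((norm_adj(A) @ X @ W).argmax(1) == y).mean()
```

Restricting propagation to a cluster is what keeps the per-step memory footprint bounded: the update never materializes embeddings outside the sampled subgraph.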

Clustering Computational Efficiency +4

Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network

1 code implementation ICLR 2019 Xuanqing Liu, Yao Li, Chongruo Wu, Cho-Jui Hsieh

Instead, we model randomness under the framework of Bayesian Neural Network (BNN) to formally learn the posterior distribution of models in a scalable way.

Adversarial Defense

Stochastic Second-order Methods for Non-convex Optimization with Inexact Hessian and Gradient

no code implementations 26 Sep 2018 Liu Liu, Xuanqing Liu, Cho-Jui Hsieh, Dacheng Tao

Trust region and cubic regularization methods have demonstrated good performance in small-scale non-convex optimization, showing the ability to escape from saddle points.
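
The saddle-escaping behavior of cubic regularization can be sketched as follows: each outer step approximately minimizes the model m(s) = g^T s + 0.5 s^T H s + (M/3)||s||^3. The inner solver here is plain gradient descent for simplicity (ARC papers use more careful subproblem solvers), and the test problem and constants are illustrative:

```python
import numpy as np

def cubic_reg_step(g, H, M, iters=500, lr=0.01):
    """Approximately minimize the ARC subproblem
        m(s) = g.s + 0.5 s'Hs + (M/3)||s||^3
    by gradient descent from s = 0 (a sketch, not a production solver)."""
    s = np.zeros_like(g)
    for _ in range(iters):
        grad_m = g + H @ s + M * np.linalg.norm(s) * s
        s -= lr * grad_m
    return s

# saddle-shaped quadratic f(x) = 0.5 x'Hx with one negative eigenvalue;
# plain gradient descent stalls near the saddle at the origin
H = np.diag([1.0, -0.5])
x = np.array([1.0, 0.2])
for _ in range(40):
    g = H @ x                       # exact gradient here; inexact in the paper
    x = x + cubic_reg_step(g, H, M=1.0)
f_final = 0.5 * x @ H @ x
```

Along the positive-curvature direction the step behaves like a damped Newton step, while along the negative-curvature direction the cubic term lets the iterate move away from the saddle, driving f below its saddle value.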

Second-order methods

Fast Variance Reduction Method with Stochastic Batch Size

no code implementations ICML 2018 Xuanqing Liu, Cho-Jui Hsieh

In this paper we study a family of variance reduction methods with randomized batch size: at each step, the algorithm first randomly chooses the batch size and then selects a batch of samples to conduct a variance-reduced stochastic update.
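
The update can be sketched as an SVRG-style control variate whose mini-batch size is redrawn at every step. This toy least-squares version is illustrative, not the paper's exact scheme (the batch-size distribution, step size, and problem are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# least-squares problem: min_w (1/2n) ||Xw - y||^2
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true

def grad(w, idx):
    """Average least-squares gradient over the samples in idx."""
    Xi, yi = X[idx], y[idx]
    return Xi.T @ (Xi @ w - yi) / len(idx)

w = np.zeros(d)
lr = 0.05
for epoch in range(30):
    w_snap = w.copy()
    g_full = grad(w_snap, np.arange(n))   # full gradient at the snapshot
    for _ in range(50):
        b = int(rng.integers(1, 33))      # batch size redrawn every step
        idx = rng.choice(n, size=b, replace=False)
        # SVRG-style control variate: unbiased, with variance that shrinks
        # as w approaches the snapshot/optimum
        g = grad(w, idx) - grad(w_snap, idx) + g_full
        w -= lr * g

err = np.linalg.norm(w - w_true)
```

The correction term `grad(w, idx) - grad(w_snap, idx)` has the same randomness for both evaluations, so the estimator stays unbiased regardless of which batch size was drawn.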

Rob-GAN: Generator, Discriminator, and Adversarial Attacker

2 code implementations CVPR 2019 Xuanqing Liu, Cho-Jui Hsieh

Adversarial training is a technique that improves the robustness of the discriminator by combining the adversarial attacker and the discriminator in the training phase.

Adversarial Attack Generative Adversarial Network +1

Better Generalization by Efficient Trust Region Method

no code implementations ICLR 2018 Xuanqing Liu, Jason D. Lee, Cho-Jui Hsieh

Solving this subproblem is non-trivial: existing methods have only a sub-linear convergence rate.

Towards Robust Neural Networks via Random Self-ensemble

no code implementations ECCV 2018 Xuanqing Liu, Minhao Cheng, Huan Zhang, Cho-Jui Hsieh

In this paper, we propose a new defense algorithm called Random Self-Ensemble (RSE) by combining two important concepts: randomness and ensemble.
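
A minimal sketch of the two ingredients, assuming Gaussian noise injected before each layer and prediction by averaging softmax outputs over repeated noisy passes (the network, noise level, and ensemble size are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_forward(x, W1, W2, sigma=0.3):
    """One stochastic forward pass: Gaussian noise injected before each layer
    (the 'randomness' ingredient)."""
    h = np.maximum(0, (x + sigma * rng.normal(size=x.shape)) @ W1)
    return (h + sigma * rng.normal(size=h.shape)) @ W2

def rse_predict(x, W1, W2, n_ensemble=20):
    """Self-ensemble: average softmax outputs over repeated noisy passes
    (the 'ensemble' ingredient)."""
    probs = 0.0
    for _ in range(n_ensemble):
        z = noisy_forward(x, W1, W2)
        e = np.exp(z - z.max(axis=1, keepdims=True))
        probs = probs + e / e.sum(axis=1, keepdims=True)
    return probs / n_ensemble

d, h, k = 6, 16, 3
W1 = rng.normal(0, 0.5, (d, h))
W2 = rng.normal(0, 0.5, (h, k))
x = rng.normal(size=(5, d))
p = rse_predict(x, W1, W2)   # averaged class probabilities, one row per input
```

Because the same noise is active at both training and test time, the averaged prediction approximates an ensemble of many noisy networks without storing multiple models.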

An inexact subsampled proximal Newton-type method for large-scale machine learning

no code implementations 28 Aug 2017 Xuanqing Liu, Cho-Jui Hsieh, Jason D. Lee, Yuekai Sun

We propose a fast proximal Newton-type algorithm for minimizing regularized finite sums that returns an $\epsilon$-suboptimal point in $\tilde{\mathcal{O}}(d(n + \sqrt{\kappa d})\log(\frac{1}{\epsilon}))$ FLOPS, where $n$ is the number of samples, $d$ is the feature dimension, and $\kappa$ is the condition number.

BIG-bench Machine Learning
