Search Results for author: Sijia Liu

Found 146 papers, 59 papers with code

Min-Max Optimization without Gradients: Convergence and Applications to Black-Box Evasion and Poisoning Attacks

no code implementations ICML 2020 Sijia Liu, Songtao Lu, Xiangyi Chen, Yao Feng, Kaidi Xu, Abdullah Al-Dujaili, Mingyi Hong, Una-May O'Reilly

In this paper, we study the problem of constrained min-max optimization in a black-box setting, where the desired optimizer cannot access the gradients of the objective function but may query its values.
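As an illustration of how an optimizer can proceed from function values alone, here is a minimal sketch of a two-point random-direction gradient estimator, a standard building block in zeroth-order optimization (the test function, smoothing parameter, and sample counts are illustrative, not taken from the paper):

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, n_dirs=100, rng=None):
    """Estimate the gradient of f at x from function queries only,
    averaging two-point finite differences along random directions."""
    rng = np.random.default_rng(rng)
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.standard_normal(x.size)  # random probe direction
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / n_dirs

# Sanity check on f(x) = ||x||^2, whose true gradient is 2x.
x = np.array([1.0, -2.0, 0.5])
g_est = zo_gradient(lambda v: float(np.sum(v**2)), x, n_dirs=5000, rng=0)
```

With enough probe directions the estimate concentrates around the true gradient, which is what lets query-only attackers mimic gradient-based min-max optimization.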

Robust Mixture-of-Expert Training for Convolutional Neural Networks

1 code implementation 19 Aug 2023 Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Huan Zhang, Pin-Yu Chen, Shiyu Chang, Zhangyang Wang, Sijia Liu

Since the lack of robustness has become one of the main hurdles for CNNs, in this paper we ask: How to adversarially robustify a CNN-based MoE model?

Adversarial Robustness

Tensor-Compressed Back-Propagation-Free Training for (Physics-Informed) Neural Networks

no code implementations 18 Aug 2023 Yequan Zhao, Xinling Yu, Zhixiong Chen, Ziyue Liu, Sijia Liu, Zheng Zhang

Backward propagation (BP) is widely used to compute the gradients in neural network training.

AutoSeqRec: Autoencoder for Efficient Sequential Recommendation

1 code implementation 14 Aug 2023 Sijia Liu, Jiahao Liu, Hansu Gu, Dongsheng Li, Tun Lu, Peng Zhang, Ning Gu

Sequential recommendation demonstrates the capability to recommend items by modeling the sequential behavior of users.

Collaborative Filtering Sequential Recommendation

An Introduction to Bi-level Optimization: Foundations and Applications in Signal Processing and Machine Learning

no code implementations 1 Aug 2023 Yihua Zhang, Prashant Khanduri, Ioannis Tsaknakis, Yuguang Yao, Mingyi Hong, Sijia Liu

Overall, we hope that this article can serve to accelerate the adoption of BLO as a generic tool to model, analyze, and innovate on a wide array of emerging SP and ML applications.

Certified Robustness for Large Language Models with Self-Denoising

no code implementations 14 Jul 2023 Zhen Zhang, Guanhua Zhang, Bairu Hou, Wenqi Fan, Qing Li, Sijia Liu, Yang Zhang, Shiyu Chang

This largely falls into the study of certified robust LLMs, i.e., all predictions of the LLM are certified to be correct in a local region around the input.


Designing a Direct Feedback Loop between Humans and Convolutional Neural Networks through Local Explanations

1 code implementation 8 Jul 2023 Tong Steven Sun, Yuyang Gao, Shubham Khaladkar, Sijia Liu, Liang Zhao, Young-Ho Kim, Sungsoo Ray Hong

To mitigate the gap, we designed DeepFuse, the first interactive design that realizes the direct feedback loop between a user and CNNs in diagnosing and revising CNN's vulnerability using local explanations.

Explainable Artificial Intelligence (XAI)

Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks

1 code implementation 7 Jun 2023 Mohammed Nowaz Rabbani Chowdhury, Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen

In deep learning, mixture-of-experts (MoE) activates one or few experts (sub-networks) on a per-sample or per-token basis, resulting in significant computation reduction.

Model Sparsity Can Simplify Machine Unlearning

1 code implementation 11 Apr 2023 Jinghan Jia, Jiancheng Liu, Parikshit Ram, Yuguang Yao, Gaowen Liu, Yang Liu, Pranay Sharma, Sijia Liu

We show in both theory and practice that model sparsity can boost the multi-criteria unlearning performance of an approximate unlearner, closing the approximation gap, while continuing to be efficient.

Transfer Learning

A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion

1 code implementation 29 Mar 2023 Haomin Zhuang, Yihua Zhang, Sijia Liu

In this work, we study the problem of adversarial attack generation for Stable Diffusion and ask if an adversarial text prompt can be obtained even in the absence of end-to-end model queries.

Adversarial Robustness Adversarial Text

Fairness Improves Learning from Noisily Labeled Long-Tailed Data

no code implementations 22 Mar 2023 Jiaheng Wei, Zhaowei Zhu, Gang Niu, Tongliang Liu, Sijia Liu, Masashi Sugiyama, Yang Liu

Both long-tailed and noisily labeled data frequently appear in real-world applications and impose significant challenges for learning.


Robust Mode Connectivity-Oriented Adversarial Defense: Enhancing Neural Network Robustness Against Diversified $\ell_p$ Attacks

1 code implementation 17 Mar 2023 Ren Wang, YuXuan Li, Sijia Liu

Adversarial robustness is a key concept in measuring the ability of neural networks to defend against adversarial attacks during the inference phase.

Adversarial Defense Adversarial Robustness

SMUG: Towards robust MRI reconstruction by smoothed unrolling

1 code implementation 14 Mar 2023 Hui Li, Jinghan Jia, Shijun Liang, Yuguang Yao, Saiprasad Ravishankar, Sijia Liu

To address this problem, we propose a novel image reconstruction framework, termed SMOOTHED UNROLLING (SMUG), which advances a deep unrolling-based MRI reconstruction model using a randomized smoothing (RS)-based robust learning operation.
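The core randomized smoothing (RS) operation, predicting with the average output over Gaussian-perturbed copies of the input, can be sketched as follows (the identity "model" and noise level are illustrative only, not the paper's unrolled reconstructor):

```python
import numpy as np

def rs_smooth(model, x, sigma=0.1, n_samples=500, rng=None):
    """Randomized smoothing: average the model's outputs over
    Gaussian-perturbed copies of the input x."""
    rng = np.random.default_rng(rng)
    noise = sigma * rng.standard_normal((n_samples,) + x.shape)
    return np.mean([model(x + n) for n in noise], axis=0)

# Toy "reconstructor": the identity map, so the smoothed output
# should stay close to the clean input.
x = np.ones(4)
y = rs_smooth(lambda v: v, x, sigma=0.1, n_samples=500, rng=0)
```

Averaging over noise trades a small amount of clean accuracy for stability of the output under small input perturbations.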

Adversarial Defense Image Classification +2

Can Adversarial Examples Be Parsed to Reveal Victim Model Information?

1 code implementation 13 Mar 2023 Yuguang Yao, Jiancheng Liu, Yifan Gong, Xiaoming Liu, Yanzhi Wang, Xue Lin, Sijia Liu

We call this 'model parsing of adversarial attacks' - a task to uncover 'arcana' in terms of the concealed VM information in attacks.

Adversarial Attack

Text-Visual Prompting for Efficient 2D Temporal Video Grounding

no code implementations CVPR 2023 Yimeng Zhang, Xin Chen, Jinghan Jia, Sijia Liu, Ke Ding

In this paper, we study the problem of temporal video grounding (TVG), which aims to predict the starting/ending time points of moments described by a text sentence within a long untrimmed video.

Video Grounding Visual Prompting

Robustness-preserving Lifelong Learning via Dataset Condensation

no code implementations 7 Mar 2023 Jinghan Jia, Yihua Zhang, Dogyoon Song, Sijia Liu, Alfred Hero

Most work in this learning paradigm has focused on resolving the problem of 'catastrophic forgetting,' which refers to a notorious dilemma between improving model accuracy over new data and retaining accuracy over previous data.

Adversarial Robustness Dataset Condensation +1

What Is Missing in IRM Training and Evaluation? Challenges and Solutions

no code implementations 4 Mar 2023 Yihua Zhang, Pranay Sharma, Parikshit Ram, Mingyi Hong, Kush Varshney, Sijia Liu

We propose a new IRM variant to address this limitation based on a novel viewpoint of ensemble IRM games as consensus-constrained bi-level optimization.

Out-of-Distribution Generalization

A Theoretical Understanding of Shallow Vision Transformers: Learning, Generalization, and Sample Complexity

no code implementations 12 Feb 2023 Hongkang Li, Meng Wang, Sijia Liu, Pin-Yu Chen

Based on a data model characterizing both label-relevant and label-irrelevant tokens, this paper provides the first theoretical analysis of training a shallow ViT, i.e., one self-attention layer followed by a two-layer perceptron, for a classification task.

Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural Networks

no code implementations 6 Feb 2023 Shuai Zhang, Meng Wang, Pin-Yu Chen, Sijia Liu, Songtao Lu, Miao Liu

Due to the significant computational challenge of training large-scale graph neural networks (GNNs), various sparse learning techniques have been exploited to reduce memory and storage costs.

Sparse Learning

Certified Interpretability Robustness for Class Activation Mapping

no code implementations 26 Jan 2023 Alex Gu, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel

Interpreting machine learning models is challenging but crucial for ensuring the safety of deep networks in autonomous driving systems.

Autonomous Driving

Towards Understanding How Self-training Tolerates Data Backdoor Poisoning

no code implementations 20 Jan 2023 Soumyadeep Pal, Ren Wang, Yuguang Yao, Sijia Liu

In this paper, we explore the potential of self-training via additional unlabeled data for mitigating backdoor attacks.

Backdoor Defense Representation Learning

Adaptively Integrated Knowledge Distillation and Prediction Uncertainty for Continual Learning

no code implementations 18 Jan 2023 Kanghao Chen, Sijia Liu, Ruixuan Wang, Wei-Shi Zheng

The first one is to adaptively integrate multiple levels of old knowledge and transfer it to each block level in the new model.

Continual Learning Knowledge Distillation

DialGuide: Aligning Dialogue Model Behavior with Developer Guidelines

1 code implementation 20 Dec 2022 Prakhar Gupta, Yang Liu, Di Jin, Behnam Hedayatnia, Spandana Gella, Sijia Liu, Patrick Lange, Julia Hirschberg, Dilek Hakkani-Tur

These guidelines provide information about the context they are applicable to and what should be included in the response, allowing the models to generate responses that are more closely aligned with the developer's expectations and intent.

Response Generation

TextGrad: Advancing Robustness Evaluation in NLP by Gradient-Driven Optimization

1 code implementation 19 Dec 2022 Bairu Hou, Jinghan Jia, Yihua Zhang, Guanhua Zhang, Yang Zhang, Sijia Liu, Shiyu Chang

Robustness evaluation against adversarial examples has become increasingly important to unveil the trustworthiness of the prevailing deep models in natural language processing (NLP).

Adversarial Defense Adversarial Robustness +1

Stochastic Inexact Augmented Lagrangian Method for Nonconvex Expectation Constrained Optimization

no code implementations 19 Dec 2022 Zichong Li, Pin-Yu Chen, Sijia Liu, Songtao Lu, Yangyang Xu

In this paper, we design and analyze stochastic inexact augmented Lagrangian methods (Stoc-iALM) to solve problems involving a nonconvex composite (i.e., smooth + nonsmooth) objective and nonconvex smooth functional constraints.


CLAWSAT: Towards Both Robust and Accurate Code Models

1 code implementation 21 Nov 2022 Jinghan Jia, Shashank Srikant, Tamara Mitrovska, Chuang Gan, Shiyu Chang, Sijia Liu, Una-May O'Reilly

We integrate contrastive learning (CL) with adversarial learning to co-optimize the robustness and accuracy of code models.

Code Generation Code Summarization +2

Understanding and Improving Visual Prompting: A Label-Mapping Perspective

1 code implementation CVPR 2023 Aochuan Chen, Yuguang Yao, Pin-Yu Chen, Yihua Zhang, Sijia Liu

As highlighted below, we show that when reprogramming an ImageNet-pretrained ResNet-18 to 13 target tasks, our method outperforms baselines by a substantial margin, e.g., 7.9% and 6.7% accuracy improvements in transfer learning to the target Flowers102 and CIFAR100 datasets.

Transfer Learning Visual Prompting

On the Robustness of deep learning-based MRI Reconstruction to image transformations

no code implementations 9 Nov 2022 Jinghan Jia, Mingyi Hong, Yimeng Zhang, Mehmet Akçakaya, Sijia Liu

We find a new instability source of MRI image reconstruction, i.e., the lack of reconstruction robustness against spatial transformations of an input, e.g., rotation and cutout.

Image Classification MRI Reconstruction

Data-Model-Circuit Tri-Design for Ultra-Light Video Intelligence on Edge Devices

no code implementations 16 Oct 2022 Yimeng Zhang, Akshay Karkal Kamath, Qiucheng Wu, Zhiwen Fan, Wuyang Chen, Zhangyang Wang, Shiyu Chang, Sijia Liu, Cong Hao

In this paper, we propose a data-model-hardware tri-design framework for high-throughput, low-cost, and high-accuracy multi-object tracking (MOT) on High-Definition (HD) video stream.

Model Compression Multi-Object Tracking

Visual Prompting for Adversarial Robustness

2 code implementations 12 Oct 2022 Aochuan Chen, Peter Lorenz, Yuguang Yao, Pin-Yu Chen, Sijia Liu

In this work, we leverage visual prompting (VP) to improve adversarial robustness of a fixed, pre-trained model at testing time.
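Conceptually, VP keeps the pre-trained model frozen and learns only an input-space pattern overlaid on every test image. A minimal sketch (the border-style prompt, mask, and sizes are hypothetical):

```python
import numpy as np

def apply_visual_prompt(x, prompt, mask):
    """Overlay a learnable prompt on the input: only `prompt` would be
    optimized during training; the backbone model stays frozen."""
    return x * (1 - mask) + prompt * mask

# Border-only prompt on a 4x4 single-channel "image".
x = np.zeros((4, 4))
mask = np.ones((4, 4))
mask[1:-1, 1:-1] = 0          # learn only a 1-pixel border
prompt = np.full((4, 4), 0.5)
x_prompted = apply_visual_prompt(x, prompt, mask)
```

Because only the prompt is trained, the same mechanism can be applied at test time to a model whose weights cannot be modified.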

Adversarial Defense Adversarial Robustness +1

Advancing Model Pruning via Bi-level Optimization

1 code implementation 8 Oct 2022 Yihua Zhang, Yuguang Yao, Parikshit Ram, Pu Zhao, Tianlong Chen, Mingyi Hong, Yanzhi Wang, Sijia Liu

To reduce the computation overhead, various efficient 'one-shot' pruning methods have been developed, but these schemes are usually unable to find winning tickets as good as IMP.
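For reference, "one-shot" pruning typically means removing the smallest-magnitude weights in a single pass, as in this sketch (the thresholding rule and example tensor are illustrative, not the paper's bi-level method):

```python
import numpy as np

def one_shot_magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of smallest-magnitude weights
    in one shot; returns the pruned weights and the binary mask."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)   # number of weights to remove
    if k == 0:
        return weights.copy(), np.ones_like(weights)
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = (np.abs(weights) > threshold).astype(weights.dtype)
    return weights * mask, mask

w = np.array([[0.5, -0.1], [0.05, 2.0]])
w_pruned, mask = one_shot_magnitude_prune(w, sparsity=0.5)
```

Iterative magnitude pruning (IMP) repeats this prune-retrain cycle many times, which is exactly the computational overhead the one-shot schemes try to avoid.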

Fairness Reprogramming

1 code implementation 21 Sep 2022 Guanhua Zhang, Yihua Zhang, Yang Zhang, Wenqi Fan, Qing Li, Sijia Liu, Shiyu Chang

Specifically, FairReprogram considers the case where models can not be changed and appends to the input a set of perturbations, called the fairness trigger, which is tuned towards the fairness criteria under a min-max formulation.


Saliency Guided Adversarial Training for Learning Generalizable Features with Applications to Medical Imaging Classification System

no code implementations 9 Sep 2022 Xin Li, Yao Qiang, Chengyin Li, Sijia Liu, Dongxiao Zhu

We hypothesize that adversarial training can eliminate shortcut features whereas saliency guided training can filter out non-relevant features; both are nuisance features accounting for the performance degradation on OOD test sets.

Improving Bot Response Contradiction Detection via Utterance Rewriting

1 code implementation SIGDIAL (ACL) 2022 Di Jin, Sijia Liu, Yang Liu, Dilek Hakkani-Tur

Previous work has treated contradiction detection in bot responses as a task similar to natural language inference, e.g., detect the contradiction between a pair of bot utterances.

Natural Language Inference

Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling

no code implementations 7 Jul 2022 Hongkang Li, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong

Graph convolutional networks (GCNs) have recently achieved great empirical success in learning graph-structured data.

Node Classification

Queried Unlabeled Data Improves and Robustifies Class-Incremental Learning

no code implementations 15 Jun 2022 Tianlong Chen, Sijia Liu, Shiyu Chang, Lisa Amini, Zhangyang Wang

Inspired by the recent success of learning robust models with unlabeled data, we explore a new robustness-aware CIL setting, where the learned adversarial robustness has to resist forgetting and be transferred as new tasks come in continually.

Adversarial Robustness class-incremental learning +2

Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness

1 code implementation 15 Jun 2022 Tianlong Chen, Huan Zhang, Zhenyu Zhang, Shiyu Chang, Sijia Liu, Pin-Yu Chen, Zhangyang Wang

Certifiable robustness is a highly desirable property for adopting deep neural networks (DNNs) in safety-critical scenarios, but often demands tedious computations to establish.

Distributed Adversarial Training to Robustify Deep Neural Networks at Scale

2 code implementations 13 Jun 2022 Gaoyuan Zhang, Songtao Lu, Yihua Zhang, Xiangyi Chen, Pin-Yu Chen, Quanfu Fan, Lee Martie, Lior Horesh, Mingyi Hong, Sijia Liu

Spurred by that, we propose distributed adversarial training (DAT), a large-batch adversarial training framework implemented over multiple machines.

Distributed Optimization

Data-Efficient Double-Win Lottery Tickets from Robust Pre-training

1 code implementation 9 Jun 2022 Tianlong Chen, Zhenyu Zhang, Sijia Liu, Yang Zhang, Shiyu Chang, Zhangyang Wang

For example, on downstream CIFAR-10/100 datasets, we identify double-win matching subnetworks with the standard, fast adversarial, and adversarial pre-training from ImageNet, at 89.26%/73.79%, 89.26%/79.03%, and 91.41%/83.22% sparsity, respectively.

Transfer Learning

Zeroth-Order SciML: Non-intrusive Integration of Scientific Software with Deep Learning

no code implementations 4 Jun 2022 Ioannis Tsaknakis, Bhavya Kailkhura, Sijia Liu, Donald Loveland, James Diffenderfer, Anna Maria Hiszpanski, Mingyi Hong

Existing knowledge integration approaches are limited to using differentiable knowledge sources to be compatible with the first-order DL training paradigm.

Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free

1 code implementation CVPR 2022 Tianlong Chen, Zhenyu Zhang, Yihua Zhang, Shiyu Chang, Sijia Liu, Zhangyang Wang

Trojan attacks threaten deep neural networks (DNNs) by poisoning them to behave normally on most samples, yet to produce manipulated results for inputs attached with a particular trigger.

Network Pruning

A Word is Worth A Thousand Dollars: Adversarial Attack on Tweets Fools Stock Predictions

1 code implementation NAACL 2022 Yong Xie, Dakuo Wang, Pin-Yu Chen, JinJun Xiong, Sijia Liu, Sanmi Koyejo

More and more investors and machine learning models rely on social media (e.g., Twitter and Reddit) to gather real-time information and sentiment to predict stock price movements.

Adversarial Attack Combinatorial Optimization +1

CryoRL: Reinforcement Learning Enables Efficient Cryo-EM Data Collection

no code implementations 15 Apr 2022 Quanfu Fan, Yilai Li, Yuguang Yao, John Cohn, Sijia Liu, Seychelle M. Vos, Michael A. Cianfrocco

Single-particle cryo-electron microscopy (cryo-EM) has become one of the mainstream structural biology techniques because of its ability to determine high-resolution structures of dynamic bio-molecules.

reinforcement-learning Reinforcement Learning (RL)

Proactive Image Manipulation Detection

1 code implementation CVPR 2022 Vishal Asnani, Xi Yin, Tal Hassner, Sijia Liu, Xiaoming Liu

That is, a template-protected real image and its manipulated version are better discriminated than the original real image and its manipulated counterpart.

Image Manipulation Image Manipulation Detection

How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective

1 code implementation ICLR 2022 Yimeng Zhang, Yuguang Yao, Jinghan Jia, JinFeng Yi, Mingyi Hong, Shiyu Chang, Sijia Liu

To tackle this problem, we next propose to prepend an autoencoder (AE) to a given (black-box) model so that DS can be trained using variance-reduced ZO optimization.

Adversarial Robustness Image Classification +1

Reverse Engineering of Imperceptible Adversarial Image Perturbations

2 code implementations ICLR 2022 Yifan Gong, Yuguang Yao, Yize Li, Yimeng Zhang, Xiaoming Liu, Xue Lin, Sijia Liu

However, carefully crafted, tiny adversarial perturbations are difficult to recover by optimizing a unilateral RED objective.

Data Augmentation Image Denoising

Optimizer Amalgamation

1 code implementation ICLR 2022 Tianshu Huang, Tianlong Chen, Sijia Liu, Shiyu Chang, Lisa Amini, Zhangyang Wang

Selecting an appropriate optimizer for a given problem is of major interest for researchers and practitioners.

Holistic Adversarial Robustness of Deep Learning Models

no code implementations 15 Feb 2022 Pin-Yu Chen, Sijia Liu

Adversarial robustness studies the worst-case performance of a machine learning model to ensure safety and reliability.

Adversarial Robustness

How does unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis

no code implementations 21 Jan 2022 Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong

Self-training, a semi-supervised learning algorithm, leverages a large amount of unlabeled data to improve learning when the labeled data are limited.

To Supervise or Not: How to Effectively Learn Wireless Interference Management Models?

no code implementations 28 Dec 2021 Bingqing Song, Haoran Sun, Wenqiang Pu, Sijia Liu, Mingyi Hong

We then provide a series of theoretical results to further understand the properties of the two approaches.


Revisiting and Advancing Fast Adversarial Training Through The Lens of Bi-Level Optimization

2 code implementations 23 Dec 2021 Yihua Zhang, Guanhua Zhang, Prashant Khanduri, Mingyi Hong, Shiyu Chang, Sijia Liu

We first show that the commonly-used Fast-AT is equivalent to using a stochastic gradient algorithm to solve a linearized BLO problem involving a sign operation.
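For context, the sign operation referred to here is the one used by FGSM-style fast attacks, which keep only the sign of the input gradient; a minimal sketch with illustrative values:

```python
import numpy as np

def fgsm_perturb(x, grad_x, epsilon=0.03, lo=0.0, hi=1.0):
    """One sign-gradient (FGSM) step: perturb the input along the sign
    of the loss gradient, then clip back to the valid pixel range."""
    return np.clip(x + epsilon * np.sign(grad_x), lo, hi)

x = np.array([0.2, 0.5, 0.99])
g = np.array([-1.7, 0.0, 3.2])   # per-pixel loss gradient
x_adv = fgsm_perturb(x, g, epsilon=0.03)
```

Fast-AT trains on examples perturbed this way, taking a single sign step per batch instead of the multi-step attack used in standard adversarial training.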

Adversarial Defense

Automatic Mapping of the Best-Suited DNN Pruning Schemes for Real-Time Mobile Acceleration

no code implementations 22 Nov 2021 Yifan Gong, Geng Yuan, Zheng Zhan, Wei Niu, Zhengang Li, Pu Zhao, Yuxuan Cai, Sijia Liu, Bin Ren, Xue Lin, Xulong Tang, Yanzhi Wang

Weight pruning is an effective model compression technique to tackle the challenges of achieving real-time deep neural network (DNN) inference on mobile devices.

Model Compression

RMSMP: A Novel Deep Neural Network Quantization Framework with Row-wise Mixed Schemes and Multiple Precisions

no code implementations ICCV 2021 Sung-En Chang, Yanyu Li, Mengshu Sun, Weiwen Jiang, Sijia Liu, Yanzhi Wang, Xue Lin

Specifically, this is the first effort to assign mixed quantization schemes and multiple precisions within layers -- among rows of the DNN weight matrix, for simplified operations in hardware inference, while preserving accuracy.

Image Classification Quantization

MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge

1 code implementation NeurIPS 2021 Geng Yuan, Xiaolong Ma, Wei Niu, Zhengang Li, Zhenglun Kong, Ning Liu, Yifan Gong, Zheng Zhan, Chaoyang He, Qing Jin, Siyue Wang, Minghai Qin, Bin Ren, Yanzhi Wang, Sijia Liu, Xue Lin

Systematical evaluation on accuracy, training speed, and memory footprint are conducted, where the proposed MEST framework consistently outperforms representative SOTA works.

Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks

no code implementations12 Oct 2021 Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong

Moreover, when the algorithm for training a pruned neural network is specified as an (accelerated) stochastic gradient descent algorithm, we theoretically show that the number of samples required for achieving zero generalization error is proportional to the number of the non-pruned weights in the hidden layer.

Generating Realistic Physical Adversarial Examples by Patch Transformer Network

no code implementations 29 Sep 2021 Quanfu Fan, Kaidi Xu, Chun-Fu Chen, Sijia Liu, Gaoyuan Zhang, David Daniel Cox, Xue Lin

Physical adversarial attacks apply carefully crafted adversarial perturbations onto real objects to maliciously alter the prediction of object classifiers or detectors.

Tactics on Refining Decision Boundary for Improving Certification-based Robust Training

no code implementations 29 Sep 2021 Wang Zhang, Lam M. Nguyen, Subhro Das, Pin-Yu Chen, Sijia Liu, Alexandre Megretski, Luca Daniel, Tsui-Wei Weng

In verification-based robust training, existing methods utilize relaxation based methods to bound the worst case performance of neural networks given certain perturbation.

How unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis

no code implementations ICLR 2022 Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong

Self-training, a semi-supervised learning algorithm, leverages a large amount of unlabeled data to improve learning when the labeled data are limited.

Sign-MAML: Efficient Model-Agnostic Meta-Learning by SignSGD

1 code implementation 15 Sep 2021 Chen Fan, Parikshit Ram, Sijia Liu

The key enabling technique is to interpret MAML as a bilevel optimization (BLO) problem and leverage sign-based SGD (signSGD) as a lower-level optimizer of BLO.
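The signSGD update discards gradient magnitudes and steps by the sign alone; a rough sketch with illustrative values:

```python
import numpy as np

def signsgd_step(param, grad, lr=0.1):
    """signSGD: update by a fixed-size step in the direction of the
    gradient's sign, ignoring the gradient's magnitude entirely."""
    return param - lr * np.sign(grad)

w = np.array([1.0, -1.0])
g = np.array([0.3, -200.0])          # very different magnitudes...
w_new = signsgd_step(w, g, lr=0.1)   # ...but equal-size steps
```

Quantizing the lower-level update to its sign is what makes the per-step cost of the inner loop cheap, at the price of losing gradient-magnitude information.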

Bilevel Optimization Few-Shot Image Classification +1

Certifiably Robust Interpretation via Renyi Differential Privacy

no code implementations 4 Jul 2021 Ao Liu, Xiaoyu Chen, Sijia Liu, Lirong Xia, Chuang Gan

The advantages of our Renyi-Robust-Smooth (RDP-based interpretation method) are threefold.

Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot?

2 code implementations NeurIPS 2021 Xiaolong Ma, Geng Yuan, Xuan Shen, Tianlong Chen, Xuxi Chen, Xiaohan Chen, Ning Liu, Minghai Qin, Sijia Liu, Zhangyang Wang, Yanzhi Wang

Based on our analysis, we summarize a guideline for parameter settings in regards of specific architecture characteristics, which we hope to catalyze the research progress on the topic of lottery ticket hypothesis.

ASK: Adversarial Soft k-Nearest Neighbor Attack and Defense

1 code implementation 27 Jun 2021 Ren Wang, Tianqi Chen, Philip Yao, Sijia Liu, Indika Rajapakse, Alfred Hero

K-Nearest Neighbor (kNN)-based deep learning methods have been applied to many applications due to their simplicity and geometric interpretability.

A Compression-Compilation Framework for On-mobile Real-time BERT Applications

no code implementations 30 May 2021 Wei Niu, Zhenglun Kong, Geng Yuan, Weiwen Jiang, Jiexiong Guan, Caiwen Ding, Pu Zhao, Sijia Liu, Bin Ren, Yanzhi Wang

In this paper, we propose a compression-compilation co-design framework that can guarantee the identified model to meet both resource and real-time specifications of mobile devices.

Question Answering Text Generation

Preserving Earlier Knowledge in Continual Learning with the Help of All Previous Feature Extractors

no code implementations 28 Apr 2021 Zhuoyun Li, Changhong Zhong, Sijia Liu, Ruixuan Wang, Wei-Shi Zheng

In order to reduce the forgetting of particularly earlier learned old knowledge and improve the overall continual learning performance, we propose a simple yet effective fusion mechanism by including all the previously learned feature extractors into the intelligent model.

Continual Learning

Preserve, Promote, or Attack? GNN Explanation via Topology Perturbation

no code implementations 25 Mar 2021 Yi Sun, Abel Valente, Sijia Liu, Dakuo Wang

Prior works on formalizing explanations of a graph neural network (GNN) focus on a single use case - to preserve the prediction results through identifying important edges and nodes.

Image Classification

Adversarial Examples can be Effective Data Augmentation for Unsupervised Machine Learning

1 code implementation 2 Mar 2021 Chia-Yi Hsu, Pin-Yu Chen, Songtao Lu, Sijia Liu, Chia-Mu Yu

In this paper, we propose a framework of generating adversarial examples for unsupervised models and demonstrate novel applications to data augmentation.

BIG-bench Machine Learning Contrastive Learning +2

On Instabilities of Conventional Multi-Coil MRI Reconstruction to Small Adversarial Perturbations

no code implementations 25 Feb 2021 Chi Zhang, Jinghan Jia, Burhaneddin Yaman, Steen Moeller, Sijia Liu, Mingyi Hong, Mehmet Akçakaya

Although deep learning (DL) has received much attention in accelerated MRI, recent studies suggest small perturbations may lead to instabilities in DL-based reconstructions, leading to concern for their clinical application.

MRI Reconstruction

On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning

1 code implementation ICLR 2021 Ren Wang, Kaidi Xu, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Chuang Gan, Meng Wang

Despite the generalization power of the meta-model, it remains elusive that how adversarial robustness can be maintained by MAML in few-shot learning.

Adversarial Attack Adversarial Robustness +3

Lottery Ticket Preserves Weight Correlation: Is It Desirable or Not?

no code implementations 19 Feb 2021 Ning Liu, Geng Yuan, Zhengping Che, Xuan Shen, Xiaolong Ma, Qing Jin, Jian Ren, Jian Tang, Sijia Liu, Yanzhi Wang

In deep model compression, the recent finding "Lottery Ticket Hypothesis" (LTH) (Frankle & Carbin, 2018) pointed out that there could exist a winning ticket (i.e., a properly pruned sub-network together with the original weight initialization) that can achieve performance competitive with the original dense network.

Model Compression

Fast Training of Provably Robust Neural Networks by SingleProp

no code implementations 1 Feb 2021 Akhilan Boopathy, Tsui-Wei Weng, Sijia Liu, Pin-Yu Chen, Gaoyuan Zhang, Luca Daniel

Recent works have developed several methods of defending neural networks against adversarial attacks with certified guarantees.

Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks

no code implementations NeurIPS 2021 Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong

Moreover, as the algorithm for training a sparse neural network is specified as (accelerated) stochastic gradient descent algorithm, we theoretically show that the number of samples required for achieving zero generalization error is proportional to the number of the non-pruned model weights in the hidden layer.

Robust Overfitting may be mitigated by properly learned smoothening

no code implementations ICLR 2021 Tianlong Chen, Zhenyu Zhang, Sijia Liu, Shiyu Chang, Zhangyang Wang

A recent study (Rice et al., 2020) revealed overfitting to be a dominant phenomenon in adversarially robust training of deep networks, and that appropriate early-stopping of adversarial training (AT) could match the performance gains of most recent algorithmic improvements.

Knowledge Distillation

Self-Progressing Robust Training

1 code implementation 22 Dec 2020 Minhao Cheng, Pin-Yu Chen, Sijia Liu, Shiyu Chang, Cho-Jui Hsieh, Payel Das

Enhancing model robustness under new and even adversarial environments is a crucial milestone toward building trustworthy machine learning systems.

Adversarial Robustness

Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework

no code implementations 21 Dec 2020 Pranay Sharma, Kaidi Xu, Sijia Liu, Pin-Yu Chen, Xue Lin, Pramod K. Varshney

In this work, we focus on the study of stochastic zeroth-order (ZO) optimization which does not require first-order gradient information and uses only function evaluations.

The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models

1 code implementation CVPR 2021 Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Michael Carbin, Zhangyang Wang

We extend the scope of LTH and question whether matching subnetworks that enjoy the same downstream transfer performance still exist in pre-trained computer vision models.

Training Stronger Baselines for Learning to Optimize

1 code implementation NeurIPS 2020 Tianlong Chen, Weiyi Zhang, Jingyang Zhou, Shiyu Chang, Sijia Liu, Lisa Amini, Zhangyang Wang

Learning to optimize (L2O) has gained increasing attention since classical optimizers require laborious problem-specific design and hyperparameter tuning.

Imitation Learning Rolling Shutter Correction

Higher-Order Certification for Randomized Smoothing

no code implementations NeurIPS 2020 Jeet Mohapatra, Ching-Yun Ko, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel

We also provide a framework that generalizes the calculation for certification using higher-order information.

TimeAutoML: Autonomous Representation Learning for Multivariate Irregularly Sampled Time Series

no code implementations 4 Oct 2020 Yang Jiao, Kai Yang, Shaoyu Dou, Pan Luo, Sijia Liu, Dongjin Song

To this end, we propose an autonomous representation learning approach for multivariate time series (TimeAutoML) with irregular sampling rates and variable lengths.

Anomaly Detection Clustering +4

Learning to Generate Image Source-Agnostic Universal Adversarial Perturbations

no code implementations 29 Sep 2020 Pu Zhao, Parikshit Ram, Songtao Lu, Yuguang Yao, Djallel Bouneffouf, Xue Lin, Sijia Liu

The resulting scheme for meta-learning a UAP generator (i) has better performance (50% higher ASR) than baselines such as Projected Gradient Descent, (ii) has better performance (37% faster) than the vanilla L2O and MAML frameworks (when applicable), and (iii) is able to simultaneously handle UAP generation for different victim models and image data sources.

Adversarial Attack Bilevel Optimization +1

Real-Time Execution of Large-scale Language Models on Mobile

no code implementations 15 Sep 2020 Wei Niu, Zhenglun Kong, Geng Yuan, Weiwen Jiang, Jiexiong Guan, Caiwen Ding, Pu Zhao, Sijia Liu, Bin Ren, Yanzhi Wang

Our framework can guarantee the identified model to meet both resource and real-time specifications of mobile devices, thus achieving real-time execution of large transformer-based models like BERT variants.


Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases

1 code implementation ECCV 2020 Ren Wang, Gaoyuan Zhang, Sijia Liu, Pin-Yu Chen, JinJun Xiong, Meng Wang

When the training data are maliciously tampered, the predictions of the acquired deep neural network (DNN) can be manipulated by an adversary known as the Trojan attack (or poisoning backdoor attack).

Backdoor Attack

RT3D: Achieving Real-Time Execution of 3D Convolutional Neural Networks on Mobile Devices

no code implementations20 Jul 2020 Wei Niu, Mengshu Sun, Zhengang Li, Jou-An Chen, Jiexiong Guan, Xipeng Shen, Yanzhi Wang, Sijia Liu, Xue Lin, Bin Ren

The vanilla sparsity removes whole kernel groups, while KGS sparsity is a more fine-grained structured sparsity that enjoys higher flexibility while exploiting full on-device parallelism.

Code Generation Model Compression

Proper Network Interpretability Helps Adversarial Robustness in Classification

1 code implementation ICML 2020 Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, Luca Daniel

Recent works have empirically shown that there exist adversarial examples that can be hidden from neural network interpretability (namely, making network interpretation maps visually similar), or interpretability is itself susceptible to adversarial attacks.

Adversarial Robustness Classification +3

Can 3D Adversarial Logos Cloak Humans?

1 code implementation25 Jun 2020 Yi Wang, Jingyang Zhou, Tianlong Chen, Sijia Liu, Shiyu Chang, Chandrajit Bajaj, Zhangyang Wang

Contrary to the traditional adversarial patch, this new form of attack is mapped into the 3D object world and back-propagates to the 2D image domain through differentiable rendering.

Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case

no code implementations ICML 2020 Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong

In this paper, we provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.

Binary Classification General Classification +1

Solving Constrained CASH Problems with ADMM

no code implementations17 Jun 2020 Parikshit Ram, Sijia Liu, Deepak Vijaykeerthi, Dakuo Wang, Djallel Bouneffouf, Greg Bramble, Horst Samulowitz, Alexander G. Gray

The CASH problem has been widely studied in the context of automated configurations of machine learning (ML) pipelines and various solvers and toolkits are available.

BIG-bench Machine Learning Fairness

A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning

no code implementations11 Jun 2020 Sijia Liu, Pin-Yu Chen, Bhavya Kailkhura, Gaoyuan Zhang, Alfred Hero, Pramod K. Varshney

Zeroth-order (ZO) optimization is a subset of gradient-free optimization that emerges in many signal processing and machine learning applications.

BIG-bench Machine Learning Management
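The core primitive behind most zeroth-order methods surveyed here is a gradient estimate built purely from function queries. A minimal two-point random-direction sketch (function name, smoothing parameter `mu`, and query budget are illustrative choices, not the paper's exact scheme):

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, n_queries=20):
    """Two-point zeroth-order gradient estimate of f at x.

    Averages directional finite differences along random Gaussian
    directions, using only function evaluations (no autodiff).
    """
    d = x.size
    grad = np.zeros(d)
    for _ in range(n_queries):
        u = np.random.randn(d)                    # random probe direction
        grad += (f(x + mu * u) - f(x)) / mu * u   # directional difference
    return grad / n_queries

# Illustrative use: minimize a quadratic without ever touching its gradient.
np.random.seed(0)
f = lambda x: np.sum((x - 1.0) ** 2)
x = np.zeros(5)
for _ in range(200):
    x = x - 0.05 * zo_gradient(f, x)
print(np.round(x, 1))  # → [1. 1. 1. 1. 1.]
```

The estimate is unbiased for the smoothed objective as `mu` shrinks, which is why such queries can stand in for true gradients in black-box settings.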

Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning

1 code implementation CVPR 2020 Tianlong Chen, Sijia Liu, Shiyu Chang, Yu Cheng, Lisa Amini, Zhangyang Wang

We conduct extensive experiments to demonstrate that the proposed framework achieves large performance margins (e.g., 3.83% on robust accuracy and 1.3% on standard accuracy on the CIFAR-10 dataset), compared with the conventional end-to-end adversarial training baseline.

Adversarial Robustness

Hidden Cost of Randomized Smoothing

no code implementations2 Mar 2020 Jeet Mohapatra, Ching-Yun Ko, Tsui-Wei Weng, Sijia Liu, Pin-Yu Chen, Luca Daniel

The fragility of modern machine learning models has drawn a considerable amount of attention from both academia and the public.

Defending against Backdoor Attack on Deep Neural Networks

no code implementations26 Feb 2020 Kaidi Xu, Sijia Liu, Pin-Yu Chen, Pu Zhao, Xue Lin

Although deep neural networks (DNNs) have achieved a great success in various computer vision tasks, it is recently found that they are vulnerable to adversarial attacks.

Backdoor Attack Data Poisoning

Towards an Efficient and General Framework of Robust Training for Graph Neural Networks

no code implementations25 Feb 2020 Kaidi Xu, Sijia Liu, Pin-Yu Chen, Mengshu Sun, Caiwen Ding, Bhavya Kailkhura, Xue Lin

To overcome these limitations, we propose a general framework which leverages the greedy search algorithms and zeroth-order methods to obtain robust GNNs in a generic and an efficient manner.

SS-Auto: A Single-Shot, Automatic Structured Weight Pruning Framework of DNNs with Ultra-High Efficiency

no code implementations23 Jan 2020 Zhengang Li, Yifan Gong, Xiaolong Ma, Sijia Liu, Mengshu Sun, Zheng Zhan, Zhenglun Kong, Geng Yuan, Yanzhi Wang

Structured weight pruning is a representative model compression technique of DNNs for hardware efficiency and inference accelerations.

Model Compression

An Image Enhancing Pattern-based Sparsity for Real-time Inference on Mobile Devices

no code implementations ECCV 2020 Xiaolong Ma, Wei Niu, Tianyun Zhang, Sijia Liu, Sheng Lin, Hongjia Li, Xiang Chen, Jian Tang, Kaisheng Ma, Bin Ren, Yanzhi Wang

Weight pruning has been widely acknowledged as a straightforward and effective method to eliminate redundancy in Deep Neural Networks (DNN), thereby achieving acceleration on various platforms.

Code Generation Compiler Optimization

Towards Verifying Robustness of Neural Networks Against Semantic Perturbations

1 code implementation19 Dec 2019 Jeet Mohapatra, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel

Verifying robustness of neural networks given a specified threat model is a fundamental yet challenging task.

Image Classification

Clinical Concept Extraction: a Methodology Review

no code implementations24 Oct 2019 Sunyang Fu, David Chen, Huan He, Sijia Liu, Sungrim Moon, Kevin J Peterson, Feichen Shen, Li-Wei Wang, Yanshan Wang, Andrew Wen, Yiqing Zhao, Sunghwan Sohn, Hongfang Liu

Background Concept extraction, a subdomain of natural language processing (NLP) with a focus on extracting concepts of interest, has been adopted to computationally extract clinical information from text for a wide range of applications ranging from clinical decision support to care quality improvement.

Clinical Concept Extraction Decision Making

Adversarial T-shirt! Evading Person Detectors in A Physical World

1 code implementation ECCV 2020 Kaidi Xu, Gaoyuan Zhang, Sijia Liu, Quanfu Fan, Mengshu Sun, Hongge Chen, Pin-Yu Chen, Yanzhi Wang, Xue Lin

To the best of our knowledge, this is the first work that models the effect of deformation for designing physical adversarial examples with respect to non-rigid objects such as T-shirts.

Is There a Trade-Off Between Fairness and Accuracy? A Perspective Using Mismatched Hypothesis Testing

no code implementations ICML 2020 Sanghamitra Dutta, Dennis Wei, Hazar Yueksel, Pin-Yu Chen, Sijia Liu, Kush R. Varshney

Moreover, the same classifier yields the lack of a trade-off with respect to ideal distributions while yielding a trade-off when accuracy is measured with respect to the given (possibly biased) dataset.

Fairness Two-sample testing

ZO-AdaMM: Zeroth-Order Adaptive Momentum Method for Black-Box Optimization

1 code implementation NeurIPS 2019 Xiangyi Chen, Sijia Liu, Kaidi Xu, Xingguo Li, Xue Lin, Mingyi Hong, David Cox

In this paper, we propose a zeroth-order AdaMM (ZO-AdaMM) algorithm, that generalizes AdaMM to the gradient-free regime.

Min-Max Optimization without Gradients: Convergence and Applications to Adversarial ML

1 code implementation30 Sep 2019 Sijia Liu, Songtao Lu, Xiangyi Chen, Yao Feng, Kaidi Xu, Abdullah Al-Dujaili, Mingyi Hong, Una-May O'Reilly

In this paper, we study the problem of constrained robust (min-max) optimization in a black-box setting, where the desired optimizer cannot access the gradients of the objective function but may query its values.

Towards A Unified Min-Max Framework for Adversarial Exploration and Robustness

no code implementations25 Sep 2019 Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, Makan Fardad, Bo Li

The worst-case training principle that minimizes the maximal adversarial loss, also known as adversarial training (AT), has shown to be a state-of-the-art approach for enhancing adversarial robustness against norm-ball bounded input perturbations.

Adversarial Attack Adversarial Robustness

Efficient Training of Robust and Verifiable Neural Networks

no code implementations25 Sep 2019 Akhilan Boopathy, Lily Weng, Sijia Liu, Pin-Yu Chen, Luca Daniel

We propose that many common certified defenses can be viewed under a unified framework of regularization.

Visual Interpretability Alone Helps Adversarial Robustness

no code implementations25 Sep 2019 Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Pin-Yu Chen, Shiyu Chang, Luca Daniel

Recent works have empirically shown that there exist adversarial examples that can be hidden from neural network interpretability, and interpretability is itself susceptible to adversarial attacks.

Adversarial Robustness

SPROUT: Self-Progressing Robust Training

no code implementations25 Sep 2019 Minhao Cheng, Pin-Yu Chen, Sijia Liu, Shiyu Chang, Cho-Jui Hsieh, Payel Das

Enhancing model robustness under new and even adversarial environments is a crucial milestone toward building trustworthy and reliable machine learning systems.

Adversarial Robustness

Sign-OPT: A Query-Efficient Hard-label Adversarial Attack

1 code implementation ICLR 2020 Minhao Cheng, Simranjit Singh, Patrick Chen, Pin-Yu Chen, Sijia Liu, Cho-Jui Hsieh

We study the most practical problem setup for evaluating adversarial robustness of a machine learning system with limited access: the hard-label black-box attack setting for generating adversarial examples, where limited model queries are allowed and only the decision is provided to a queried data input.

Adversarial Attack Adversarial Robustness +1

On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method

1 code implementation ICCV 2019 Pu Zhao, Sijia Liu, Pin-Yu Chen, Nghia Hoang, Kaidi Xu, Bhavya Kailkhura, Xue Lin

Robust machine learning is currently one of the most prominent topics which could potentially help shape a future of advanced AI platforms that perform well not only in average cases but also in worst cases or adverse situations.

Adversarial Attack Bayesian Optimization +1

Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective

1 code implementation10 Jun 2019 Kaidi Xu, Hongge Chen, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Mingyi Hong, Xue Lin

Graph neural networks (GNNs) which apply the deep neural networks to graph data have achieved significant performance for the task of semi-supervised node classification.

Adversarial Robustness Classification +2

Adversarial Attack Generation Empowered by Min-Max Optimization

1 code implementation NeurIPS 2021 Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, Makan Fardad, Bo Li

In this paper, we show how a general framework of min-max optimization over multiple domains can be leveraged to advance the design of different types of adversarial attacks.

Adversarial Attack Adversarial Robustness

signSGD via Zeroth-Order Oracle

no code implementations ICLR 2019 Sijia Liu, Pin-Yu Chen, Xiangyi Chen, Mingyi Hong

Our study shows that ZO signSGD requires $\sqrt{d}$ times more iterations than signSGD, leading to a convergence rate of $O(\sqrt{d}/\sqrt{T})$ under mild conditions, where $d$ is the number of optimization variables, and $T$ is the number of iterations.

Image Classification Stochastic Optimization
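ZO signSGD replaces the true gradient in signSGD with a query-based estimate and steps along its sign, which is where the extra $\sqrt{d}$ iteration factor comes from. A minimal sketch (step size, query budget, and test function are assumptions for illustration):

```python
import numpy as np

def zo_sign_sgd(f, x0, lr=0.01, mu=1e-3, n_queries=10, n_iters=300, seed=0):
    """Zeroth-order signSGD: descend along the sign of a gradient estimate.

    Each iteration estimates the gradient of f from function values along
    random directions, then moves every coordinate by +/- lr.
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    d = x.size
    for _ in range(n_iters):
        grad = np.zeros(d)
        for _ in range(n_queries):
            u = rng.standard_normal(d)
            grad += (f(x + mu * u) - f(x)) / mu * u
        x -= lr * np.sign(grad / n_queries)  # sign step, as in signSGD
    return x

# Illustrative use on a simple quadratic with minimizer at 0.5.
f = lambda x: np.sum((x - 0.5) ** 2)
x_star = zo_sign_sgd(f, np.zeros(8))
print(f(x_star) < 0.2)  # converges near the minimizer: True
```

Because every coordinate moves by exactly the learning rate, the iterate oscillates in a small band around the optimum rather than converging exactly, matching signSGD's known behavior.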

An ADMM Based Framework for AutoML Pipeline Configuration

no code implementations1 May 2019 Sijia Liu, Parikshit Ram, Deepak Vijaykeerthy, Djallel Bouneffouf, Gregory Bramble, Horst Samulowitz, Dakuo Wang, Andrew Conn, Alexander Gray

We study the AutoML problem of automatically configuring machine learning pipelines by jointly selecting algorithms and their appropriate hyper-parameters for all steps in supervised learning pipelines.

AutoML Binary Classification

Interpreting Adversarial Examples by Activation Promotion and Suppression

no code implementations3 Apr 2019 Kaidi Xu, Sijia Liu, Gaoyuan Zhang, Mengshu Sun, Pu Zhao, Quanfu Fan, Chuang Gan, Xue Lin

It is widely known that convolutional neural networks (CNNs) are vulnerable to adversarial examples: images with imperceptible perturbations crafted to fool classifiers.

Adversarial Robustness

Adversarial Robustness vs Model Compression, or Both?

1 code implementation29 Mar 2019 Shaokai Ye, Kaidi Xu, Sijia Liu, Jan-Henrik Lambrechts, huan zhang, Aojun Zhou, Kaisheng Ma, Yanzhi Wang, Xue Lin

Furthermore, this work studies two hypotheses about weight pruning in the conventional setting and finds that weight pruning is essential for reducing the network model size in the adversarial setting; training a small model from scratch, even with inherited initialization from the large model, cannot achieve both adversarial robustness and high standard accuracy.

Adversarial Robustness Model Compression +1

Progressive DNN Compression: A Key to Achieve Ultra-High Weight Pruning and Quantization Rates using ADMM

2 code implementations23 Mar 2019 Shaokai Ye, Xiaoyu Feng, Tianyun Zhang, Xiaolong Ma, Sheng Lin, Zhengang Li, Kaidi Xu, Wujie Wen, Sijia Liu, Jian Tang, Makan Fardad, Xue Lin, Yongpan Liu, Yanzhi Wang

A recent work developed a systematic framework for DNN weight pruning using the advanced optimization technique ADMM (Alternating Direction Method of Multipliers), achieving one of the state-of-the-art weight pruning results.

Model Compression Quantization

CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks

2 code implementations29 Nov 2018 Akhilan Boopathy, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel

This motivates us to propose a general and efficient framework, CNN-Cert, that is capable of certifying robustness on general convolutional neural networks.

Is Ordered Weighted $\ell_1$ Regularized Regression Robust to Adversarial Perturbation? A Case Study on OSCAR

no code implementations24 Sep 2018 Pin-Yu Chen, Bhanukiran Vinzamuri, Sijia Liu

Many state-of-the-art machine learning models such as deep neural networks have recently shown to be vulnerable to adversarial perturbations, especially in classification tasks.

BIG-bench Machine Learning Clustering +1

On the Convergence of A Class of Adam-Type Algorithms for Non-Convex Optimization

no code implementations ICLR 2019 Xiangyi Chen, Sijia Liu, Ruoyu Sun, Mingyi Hong

We prove that under our derived conditions, these methods can achieve the convergence rate of order $O(\log{T}/\sqrt{T})$ for nonconvex stochastic optimization.

Open-Ended Question Answering Stochastic Optimization

Structured Adversarial Attack: Towards General Implementation and Better Interpretability

1 code implementation ICLR 2019 Kaidi Xu, Sijia Liu, Pu Zhao, Pin-Yu Chen, huan zhang, Quanfu Fan, Deniz Erdogmus, Yanzhi Wang, Xue Lin

When generating adversarial examples to attack deep neural networks (DNNs), the Lp norm of the added perturbation is usually used to measure the similarity between the original image and the adversarial example.

Adversarial Attack
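The Lp similarity measures mentioned above are straightforward to compute from the perturbation itself; a small illustrative helper (the function name and example arrays are mine, not from the paper):

```python
import numpy as np

def perturbation_norms(x, x_adv):
    """Common Lp measures of an adversarial perturbation delta = x_adv - x."""
    delta = (x_adv - x).ravel()
    return {
        "L0": int(np.count_nonzero(delta)),   # number of entries changed
        "L1": float(np.abs(delta).sum()),     # total absolute change
        "L2": float(np.linalg.norm(delta)),   # Euclidean size of the change
        "Linf": float(np.abs(delta).max()),   # largest single-entry change
    }

# Illustrative use: a 2x2 "image" with two perturbed pixels.
x = np.zeros((2, 2))
x_adv = np.array([[0.0, 0.3], [0.0, -0.4]])
norms = perturbation_norms(x, x_adv)
print(norms)
```

Each norm induces a different attack geometry: L0 favors changing few pixels a lot, L infinity favors changing many pixels slightly.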

Latent heterogeneous multilayer community detection

no code implementations16 Jun 2018 Hafiz Tiomoko Ali, Sijia Liu, Yasin Yilmaz, Romain Couillet, Indika Rajapakse, Alfred Hero

We propose a method for simultaneously detecting shared and unshared communities in heterogeneous multilayer weighted and undirected networks.

Community Detection

Fast Incremental von Neumann Graph Entropy Computation: Theory, Algorithm, and Applications

1 code implementation30 May 2018 Pin-Yu Chen, Lingfei Wu, Sijia Liu, Indika Rajapakse

The von Neumann graph entropy (VNGE) facilitates measurement of information divergence and distance between graphs in a graph sequence.

Anomaly Detection Graph Similarity
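The exact quantity that the paper's fast incremental algorithm approximates treats the trace-normalized graph Laplacian as a density matrix and takes its spectral entropy. A direct (non-incremental) sketch for a small weighted undirected graph:

```python
import numpy as np

def von_neumann_graph_entropy(adj):
    """Exact von Neumann graph entropy of a weighted undirected graph.

    The combinatorial Laplacian, scaled to unit trace, plays the role of
    a density matrix; the entropy is -sum(lam * ln(lam)) over its
    nonzero eigenvalues.
    """
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                      # combinatorial Laplacian L = D - A
    rho = lap / np.trace(lap)            # unit-trace "density matrix"
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]   # convention: 0 * ln 0 = 0
    return float(-np.sum(eigvals * np.log(eigvals)))

# Illustrative use: a 4-node cycle graph.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(round(von_neumann_graph_entropy(A), 3))  # → 1.04
```

This exact computation costs a full eigendecomposition per graph, which is precisely the bottleneck the paper's incremental approximation is designed to avoid on graph sequences.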

AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks

1 code implementation30 May 2018 Chun-Chen Tu, Pai-Shun Ting, Pin-Yu Chen, Sijia Liu, huan zhang, Jin-Feng Yi, Cho-Jui Hsieh, Shin-Ming Cheng

Recent studies have shown that adversarial examples in state-of-the-art image classifiers trained by deep neural networks (DNN) can be easily generated when the target model is transparent to an attacker, known as the white-box setting.

Adversarial Robustness

Zeroth-Order Stochastic Variance Reduction for Nonconvex Optimization

1 code implementation NeurIPS 2018 Sijia Liu, Bhavya Kailkhura, Pin-Yu Chen, Pai-Shun Ting, Shiyu Chang, Lisa Amini

As application demands for zeroth-order (gradient-free) optimization accelerate, the need for variance reduced and faster converging approaches is also intensifying.

Material Classification Stochastic Optimization

An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks

no code implementations9 Apr 2018 Pu Zhao, Sijia Liu, Yanzhi Wang, Xue Lin

In the literature, the added distortions are usually measured by the L0, L1, L2, and L infinity norms, giving rise to L0, L1, L2, and L infinity attacks, respectively.

Adversarial Attack

A Comparison of Word Embeddings for the Biomedical Natural Language Processing

2 code implementations1 Feb 2018 Yanshan Wang, Sijia Liu, Naveed Afzal, Majid Rastegar-Mojarad, Li-Wei Wang, Feichen Shen, Paul Kingsbury, Hongfang Liu

First, the word embeddings trained on clinical notes and biomedical publications can capture the semantics of medical terms better, find more relevant similar medical terms, and are closer to human experts' judgments, compared to those trained on Wikipedia and news.

Information Retrieval

A New Data-Driven Sparse-Learning Approach to Study Chemical Reaction Networks

no code implementations18 Dec 2017 Farshad Harirchi, Doohyun Kim, Omar A. Khalil, Sijia Liu, Paolo Elvati, Angela Violi, Alfred O. Hero

In this paper, we introduce a novel approach for the identification of the influential reactions in chemical reaction networks for combustion applications, using a data-driven sparse-learning technique.

Sparse Learning

A Data-Driven Sparse-Learning Approach to Model Reduction in Chemical Reaction Networks

no code implementations12 Dec 2017 Farshad Harirchi, Omar A. Khalil, Sijia Liu, Paolo Elvati, Angela Violi, Alfred O. Hero

In this paper, we propose an optimization-based sparse learning approach to identify the set of most influential reactions in a chemical reaction network.

Sparse Learning

Semiblind subgraph reconstruction in Gaussian graphical models

no code implementations15 Nov 2017 Tianpei Xie, Sijia Liu, Alfred O. Hero III

Consider a social network where only a few nodes (agents) have meaningful interactions in the sense that the conditional dependency graph over node attribute variables (behaviors) is sparse.

Zeroth-Order Online Alternating Direction Method of Multipliers: Convergence Analysis and Applications

no code implementations21 Oct 2017 Sijia Liu, Jie Chen, Pin-Yu Chen, Alfred O. Hero

In this paper, we design and analyze a new zeroth-order online algorithm, namely, the zeroth-order online alternating direction method of multipliers (ZOO-ADMM), which enjoys dual advantages of being gradient-free operation and employing the ADMM to accommodate complex structured regularizers.

A Memristor-Based Optimization Framework for AI Applications

no code implementations18 Oct 2017 Sijia Liu, Yanzhi Wang, Makan Fardad, Pramod K. Varshney

In addition to ADMM, implementation of a customized power iteration (PI) method for eigenvalue/eigenvector computation using memristor crossbars is discussed.

BIG-bench Machine Learning

Bias-Variance Tradeoff of Graph Laplacian Regularizer

no code implementations2 Jun 2017 Pin-Yu Chen, Sijia Liu

This paper presents a bias-variance tradeoff of graph Laplacian regularizer, which is widely used in graph signal processing and semi-supervised learning tasks.
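The regularizer in question is the quadratic form x^T L x, which penalizes signals that vary sharply across edges. A minimal sketch (function name and toy graph are illustrative):

```python
import numpy as np

def laplacian_regularizer(signal, adj):
    """Graph Laplacian regularizer x^T L x = (1/2) * sum_ij w_ij (x_i - x_j)^2.

    Small values indicate a signal that is smooth over the graph's edges.
    """
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                      # L = D - A
    return float(signal @ lap @ signal)

# Illustrative use: two nodes joined by a unit-weight edge.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(laplacian_regularizer(np.array([1.0, 3.0]), A))  # (1 - 3)^2 = 4.0
```

The bias-variance tradeoff the paper analyzes arises from how strongly this smoothness penalty is weighted against the data-fit term.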

Accelerated Distributed Dual Averaging over Evolving Networks of Growing Connectivity

no code implementations18 Apr 2017 Sijia Liu, Pin-Yu Chen, Alfred O. Hero

Our analysis reveals the connection between network topology design and the convergence rate of DDA, and provides quantitative evaluation of DDA acceleration for distributed optimization that is absent in the existing analysis.

Distributed Optimization Scheduling

Learning Sparse Graphs Under Smoothness Prior

no code implementations12 Sep 2016 Sundeep Prabhakar Chepuri, Sijia Liu, Geert Leus, Alfred O. Hero III

Given the noisy data, we show that the joint sparse graph learning and denoising problem can be simplified to designing only the sparse edge selection vector, which can be solved using convex optimization.

Denoising Graph Learning
