Search Results for author: Pin-Yu Chen

Found 211 papers, 86 papers with code

Diagnostic Spatio-temporal Transformer with Faithful Encoding

no code implementations26 May 2023 Jokin Labaien, Tsuyoshi Idé, Pin-Yu Chen, Ekhi Zugasti, Xabier De Carlos

This paper addresses the task of anomaly diagnosis when the underlying data generation process has a complex spatio-temporal (ST) dependency.

Time Series Classification

Which Features are Learnt by Contrastive Learning? On the Role of Simplicity Bias in Class Collapse and Feature Suppression

no code implementations25 May 2023 Yihao Xue, Siddharth Joshi, Eric Gan, Pin-Yu Chen, Baharan Mirzasoleiman

However, supervised CL is prone to collapsing representations of subclasses within a class by not capturing all their features, and unsupervised CL may suppress harder class-relevant features by focusing on learning easy class-irrelevant features; both significantly compromise representation quality.

Contrastive Learning Representation Learning

Virus2Vec: Viral Sequence Classification Using Machine Learning

no code implementations24 Apr 2023 Sarwan Ali, Babatunde Bello, Prakash Chourasia, Ria Thazhe Punathil, Pin-Yu Chen, Imdad Ullah Khan, Murray Patterson

Understanding the host-specificity of different families of viruses sheds light on the origin of, e.g., SARS-CoV-2, rabies, and other such zoonotic pathogens in humans.

Classification Specificity

GREAT Score: Global Robustness Evaluation of Adversarial Perturbation using Generative Models

no code implementations19 Apr 2023 Zaitang Li, Pin-Yu Chen, Tsung-Yi Ho

Formally, GREAT Score carries the physical meaning of a global statistic capturing a mean certified attack-proof perturbation level over all samples drawn from a generative model.

Adversarial Robustness
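The global statistic described in the snippet can be sketched as a Monte Carlo mean of per-sample certified radii over generator outputs. In this sketch, `generator` and `certified_radius` are hypothetical stand-ins for a generative model and a per-sample certification routine; they are not the paper's implementation.

```python
import random
import statistics

def great_score(generator, certified_radius, n_samples=1000, seed=0):
    """Estimate a global robustness statistic as the mean certified
    attack-proof perturbation radius over samples drawn from a
    generative model (a sketch of the idea, not the paper's code)."""
    rng = random.Random(seed)
    radii = []
    for _ in range(n_samples):
        z = [rng.gauss(0.0, 1.0) for _ in range(8)]  # latent code
        x = generator(z)                             # synthetic sample
        radii.append(certified_radius(x))
    return statistics.fmean(radii)
```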

Overload: Latency Attacks on Object Detection for Edge Devices

no code implementations11 Apr 2023 Erh-Chung Chen, Pin-Yu Chen, I-Hsin Chung, Che-Rung Lee

Our method is based on a newly formulated optimization problem and a novel technique, called spatial attention, to increase the inference time of object detection.

Object Detection

Exploring the Benefits of Visual Prompting in Differential Privacy

no code implementations22 Mar 2023 Yizhe Li, Yu-Lin Tsai, Xuebin Ren, Chia-Mu Yu, Pin-Yu Chen

Visual Prompting (VP) is an emerging and powerful technique that allows sample-efficient adaptation to downstream tasks by engineering a well-trained frozen source model.

Image Classification Transfer Learning +1

Convex Bounds on the Softmax Function with Applications to Robustness Verification

1 code implementation3 Mar 2023 Dennis Wei, Haoze Wu, Min Wu, Pin-Yu Chen, Clark Barrett, Eitan Farchi

The softmax function is a ubiquitous component at the output of neural networks and increasingly in intermediate layers as well.
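To illustrate the verification setting, here is the simple monotonicity-based interval bound on softmax outputs given elementwise bounds on the logits. This baseline is looser than the convex bounds the paper develops; it is shown only as a reference point.

```python
import math

def softmax(x):
    m = max(x)
    e = [math.exp(v - m) for v in x]
    s = sum(e)
    return [v / s for v in e]

def softmax_interval_bounds(lower, upper):
    """Elementwise bounds on softmax when each logit x_j lies in
    [lower[j], upper[j]].  Uses monotonicity: softmax_i is maximized
    at x_i = upper[i], x_j = lower[j] (j != i), and minimized at the
    opposite corner.  Looser than the paper's convex bounds."""
    n = len(lower)
    lb, ub = [], []
    for i in range(n):
        denom_hi = 1.0 + sum(math.exp(lower[j] - upper[i]) for j in range(n) if j != i)
        denom_lo = 1.0 + sum(math.exp(upper[j] - lower[i]) for j in range(n) if j != i)
        lb.append(1.0 / denom_lo)
        ub.append(1.0 / denom_hi)
    return lb, ub
```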

MultiRobustBench: Benchmarking Robustness Against Multiple Attacks

no code implementations21 Feb 2023 Sihui Dai, Saeed Mahloujifar, Chong Xiang, Vikash Sehwag, Pin-Yu Chen, Prateek Mittal

Using our framework, we present the first leaderboard, MultiRobustBench, for benchmarking multiattack evaluation which captures performance across attack types and attack strengths.

Benchmarking

A Theoretical Understanding of Shallow Vision Transformers: Learning, Generalization, and Sample Complexity

no code implementations12 Feb 2023 Hongkang Li, Meng Wang, Sijia Liu, Pin-Yu Chen

Based on a data model characterizing both label-relevant and label-irrelevant tokens, this paper provides the first theoretical analysis of training a shallow ViT, i.e., one self-attention layer followed by a two-layer perceptron, for a classification task.

Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural Networks

no code implementations6 Feb 2023 Shuai Zhang, Meng Wang, Pin-Yu Chen, Sijia Liu, Songtao Lu, Miao Liu

Due to the significant computational challenge of training large-scale graph neural networks (GNNs), various sparse learning techniques have been exploited to reduce memory and storage costs.

Sparse Learning

Certified Interpretability Robustness for Class Activation Mapping

no code implementations26 Jan 2023 Alex Gu, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel

Interpreting machine learning models is challenging but crucial for ensuring the safety of deep networks in autonomous driving systems.

Autonomous Driving

AI Maintenance: A Robustness Perspective

no code implementations8 Jan 2023 Pin-Yu Chen, Payel Das

With the advancements in machine learning (ML) methods and compute resources, artificial intelligence (AI) empowered systems are becoming a prevailing technology.

Reprogramming Pretrained Language Models for Protein Sequence Representation Learning

no code implementations5 Jan 2023 Ria Vinod, Pin-Yu Chen, Payel Das

To this end, we reprogram an off-the-shelf pre-trained English language transformer and benchmark it on a set of protein physicochemical prediction tasks (secondary structure, stability, homology) as well as on a biomedically relevant set of protein function prediction tasks (antimicrobial, toxicity, antibody affinity).

Dictionary Learning Language Modelling +2

Stochastic Inexact Augmented Lagrangian Method for Nonconvex Expectation Constrained Optimization

no code implementations19 Dec 2022 Zichong Li, Pin-Yu Chen, Sijia Liu, Songtao Lu, Yangyang Xu

In this paper, we design and analyze stochastic inexact augmented Lagrangian methods (Stoc-iALM) to solve problems involving a nonconvex composite (i.e., smooth + nonsmooth) objective and nonconvex smooth functional constraints.

Fairness
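The inner-minimization / multiplier-update structure of an augmented Lagrangian method can be sketched on a deterministic toy problem. The stochastic, nonconvex machinery of the paper is omitted here; the problem, step sizes, and iteration counts are all illustrative assumptions.

```python
def augmented_lagrangian_sketch(rho=10.0, outer=25, inner=300, lr=0.05):
    """Toy inexact augmented Lagrangian method: minimize f(x) = x^2
    subject to c(x) = 1 - x <= 0 (solution x* = 1, multiplier 2).
    Inner loop: gradient descent on the augmented Lagrangian;
    outer loop: multiplier update for the inequality constraint."""
    x, lam = 0.0, 0.0
    for _ in range(outer):
        for _ in range(inner):
            # gradient of x^2 + (rho/2) * max(0, lam/rho + (1 - x))^2
            slack = max(0.0, lam / rho + (1.0 - x))
            grad = 2.0 * x - rho * slack
            x -= lr * grad
        lam = max(0.0, lam + rho * (1.0 - x))  # multiplier update
    return x, lam
```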

Better May Not Be Fairer: Can Data Augmentation Mitigate Subgroup Degradation?

no code implementations16 Dec 2022 Ming-Chang Chiu, Pin-Yu Chen, Xuezhe Ma

Lastly, aside from less dependence on spurious correlations and better generalization on in-distribution test sets, we also show superior out-of-distribution results on CIFAR10.1 and competitive performances on CIFAR10-C and CIFAR100-C.

Data Augmentation Image Classification

On Human Visual Contrast Sensitivity and Machine Vision Robustness: A Comparative Study

no code implementations16 Dec 2022 Ming-Chang Chiu, Yingfei Wang, Derrick Eui Gyu Kim, Pin-Yu Chen, Xuezhe Ma

It is well established in neuroscience that color vision plays an essential part in the human visual perception system.

Data Augmentation

How to Backdoor Diffusion Models?

1 code implementation CVPR 2023 Sheng-Yen Chou, Pin-Yu Chen, Tsung-Yi Ho

To gain a better understanding of the limitations and potential risks, this paper presents the first study on the robustness of diffusion models against backdoor attacks.

Backdoor Attack Denoising +1

NCTV: Neural Clamping Toolkit and Visualization for Neural Network Calibration

no code implementations29 Nov 2022 Lei Hsiung, Yung-Chen Tang, Pin-Yu Chen, Tsung-Yi Ho

With the advancement of deep learning technology, neural networks have demonstrated their excellent ability to provide accurate predictions in many tasks.

Understanding and Improving Visual Prompting: A Label-Mapping Perspective

1 code implementation CVPR 2023 Aochuan Chen, Yuguang Yao, Pin-Yu Chen, Yihua Zhang, Sijia Liu

As highlighted below, we show that when reprogramming an ImageNet-pretrained ResNet-18 to 13 target tasks, our method outperforms baselines by a substantial margin, e.g., 7.9% and 6.7% accuracy improvements in transfer learning to the target Flowers102 and CIFAR100 datasets.

Transfer Learning Visual Prompting
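The input-side half of visual prompting can be sketched as padding a small target-domain image into a larger frame whose border comes from a (normally trainable) prompt. The fixed-border layout and shapes below are illustrative assumptions, not the exact parameterization of the paper.

```python
def apply_visual_prompt(image, prompt, pad):
    """Place an HxW image (nested lists) into the center of a
    (H+2*pad) x (W+2*pad) prompt frame.  In visual prompting the
    border pixels are optimized while the frozen source model and
    the image itself stay fixed."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in prompt]             # start from the prompt
    for i in range(h):
        for j in range(w):
            out[pad + i][pad + j] = image[i][j]  # center: original pixels
    return out
```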

Low-Resource Music Genre Classification with Cross-Modal Neural Model Reprogramming

1 code implementation2 Nov 2022 Yun-Ning Hung, Chao-Han Huck Yang, Pin-Yu Chen, Alexander Lerch

In this work, we introduce a novel method for leveraging pre-trained models for low-resource (music) classification based on the concept of Neural Model Reprogramming (NMR).

Classification Genre classification +3

Certified Robustness of Quantum Classifiers against Adversarial Examples through Quantum Noise

no code implementations2 Nov 2022 Jhih-Cing Huang, Yu-Lin Tsai, Chao-Han Huck Yang, Cheng-Fang Su, Chia-Mu Yu, Pin-Yu Chen, Sy-Yen Kuo

Recently, quantum classifiers have been found to be vulnerable to adversarial attacks, in which quantum classifiers are deceived by imperceptible noises, leading to misclassification.

Inference and Denoise: Causal Inference-based Neural Speech Enhancement

1 code implementation2 Nov 2022 Tsun-An Hsieh, Chao-Han Huck Yang, Pin-Yu Chen, Sabato Marco Siniscalchi, Yu Tsao

This study addresses the speech enhancement (SE) task within the causal inference paradigm by modeling the noise presence as an intervention.

Causal Inference Speech Enhancement

An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization

1 code implementation27 Oct 2022 Elvin Lo, Pin-Yu Chen

Molecule optimization is an important problem in chemical discovery and has been approached using many techniques, including generative modeling, reinforcement learning, and genetic algorithms, among others.

Visual Prompting for Adversarial Robustness

2 code implementations12 Oct 2022 Aochuan Chen, Peter Lorenz, Yuguang Yao, Pin-Yu Chen, Sijia Liu

In this work, we leverage visual prompting (VP) to improve adversarial robustness of a fixed, pre-trained model at testing time.

Adversarial Defense Adversarial Robustness +1

Rethinking Normalization Methods in Federated Learning

no code implementations7 Oct 2022 Zhixu Du, Jingwei Sun, Ang Li, Pin-Yu Chen, Jianyi Zhang, Hai "Helen" Li, Yiran Chen

We also show that layer normalization is a better choice in FL which can mitigate the external covariate shift and improve the performance of the global model.

Federated Learning
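The property that makes layer normalization attractive in this setting is that its statistics are computed per sample, so no cross-client batch statistics need to be shared. A minimal sketch over a single feature vector:

```python
import math

def layer_norm(x, eps=1e-5):
    """Layer normalization of one feature vector: subtract the
    per-sample mean and divide by the per-sample standard deviation.
    Unlike batch normalization, nothing depends on other samples in
    the batch, which avoids the external covariate shift the paper
    identifies in federated learning."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]
```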

SynBench: Task-Agnostic Benchmarking of Pretrained Representations using Synthetic Data

no code implementations6 Oct 2022 Ching-Yun Ko, Pin-Yu Chen, Jeet Mohapatra, Payel Das, Luca Daniel

Given a pretrained model, the representations of data synthesized from the Gaussian mixture are used to compare with our reference to infer the quality.

Benchmarking Representation Learning

Reprogramming Large Pretrained Language Models for Antibody Sequence Infilling

no code implementations5 Oct 2022 Igor Melnyk, Vijil Chenthamarakshan, Pin-Yu Chen, Payel Das, Amit Dhurandhar, Inkit Padhi, Devleena Das

We introduce Reprogramming for Protein Sequence Infilling, a framework in which pretrained natural language models are repurposed for protein sequence infilling via reprogramming, to infill protein sequence templates as a method of novel protein generation.

Specificity Text Infilling

Neural Clamping: Joint Input Perturbation and Temperature Scaling for Neural Network Calibration

no code implementations23 Sep 2022 Yung-Chen Tang, Pin-Yu Chen, Tsung-Yi Ho

Neural network calibration is an essential task in deep learning to ensure consistency between the confidence of model prediction and the true correctness likelihood.
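Neural Clamping augments temperature scaling, the classical post-hoc calibration step, with a learnable input perturbation. The temperature-scaling half is easy to sketch; the value T = 1.5 below is illustrative, and the input-perturbation component is omitted.

```python
import math

def calibrated_probs(logits, temperature=1.5):
    """Temperature scaling: divide logits by T before the softmax.
    T > 1 softens overconfident predictions, aligning confidence
    with the true correctness likelihood."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    e = [math.exp(z - m) for z in scaled]
    s = sum(e)
    return [v / s for v in e]
```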

Uncovering the Connection Between Differential Privacy and Certified Robustness of Federated Learning against Poisoning Attacks

no code implementations8 Sep 2022 Chulin Xie, Yunhui Long, Pin-Yu Chen, Bo Li

We then provide two robustness certification criteria: certified prediction and certified attack cost for DPFL on both levels.

Federated Learning

Be Your Own Neighborhood: Detecting Adversarial Example by the Neighborhood Relations Built on Self-Supervised Learning

no code implementations31 Aug 2022 Zhiyuan He, Yijun Yang, Pin-Yu Chen, Qiang Xu, Tsung-Yi Ho

Empowered by the robust relation net built on SSL, we found that BEYOND outperforms baselines in terms of both detection ability and speed.

Self-Supervised Learning

Active Sampling of Multiple Sources for Sequential Estimation

no code implementations10 Aug 2022 Arpan Mukherjee, Ali Tajer, Pin-Yu Chen, Payel Das

Additionally, each process $i\in\{1, \dots, K\}$ has a private parameter $\alpha_i$.

Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM

no code implementations20 Jul 2022 Chulin Xie, Pin-Yu Chen, Ce Zhang, Bo Li

Moreover, we show that a byproduct of our framework is that the weights of learned linear heads reflect the importance of local clients.

Denoising Federated Learning +1

Benchmarking Machine Learning Robustness in Covid-19 Genome Sequence Classification

1 code implementation18 Jul 2022 Sarwan Ali, Bikram Sahoo, Alexander Zelikovskiy, Pin-Yu Chen, Murray Patterson

The rapid spread of the COVID-19 pandemic has resulted in an unprecedented amount of sequence data of the SARS-CoV-2 genome -- millions of sequences and counting.

Benchmarking BIG-bench Machine Learning +1

CARBEN: Composite Adversarial Robustness Benchmark

1 code implementation16 Jul 2022 Lei Hsiung, Yun-Yun Tsai, Pin-Yu Chen, Tsung-Yi Ho

Prior literature on adversarial attack methods has mainly focused on attacking with and defending against a single threat model, e.g., perturbations bounded in an Lp ball.

Adversarial Attack Adversarial Robustness

Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling

no code implementations7 Jul 2022 Hongkang Li, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong

Graph convolutional networks (GCNs) have recently achieved great empirical success in learning graph-structured data.

Node Classification

On Certifying and Improving Generalization to Unseen Domains

1 code implementation24 Jun 2022 Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm

This highlights that the performance of DG methods on a few benchmark datasets may not be representative of their performance on unseen domains in the wild.

Domain Generalization

Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness

1 code implementation15 Jun 2022 Tianlong Chen, Huan Zhang, Zhenyu Zhang, Shiyu Chang, Sijia Liu, Pin-Yu Chen, Zhangyang Wang

Certifiable robustness is a highly desirable property for adopting deep neural networks (DNNs) in safety-critical scenarios, but often demands tedious computations to establish.

Distributed Adversarial Training to Robustify Deep Neural Networks at Scale

1 code implementation13 Jun 2022 Gaoyuan Zhang, Songtao Lu, Yihua Zhang, Xiangyi Chen, Pin-Yu Chen, Quanfu Fan, Lee Martie, Lior Horesh, Mingyi Hong, Sijia Liu

Spurred by that, we propose distributed adversarial training (DAT), a large-batch adversarial training framework implemented over multiple machines.

Distributed Optimization

Theoretical Error Performance Analysis for Variational Quantum Circuit Based Functional Regression

1 code implementation8 Jun 2022 Jun Qi, Chao-Han Huck Yang, Pin-Yu Chen, Min-Hsiu Hsieh

In this work, we first put forth an end-to-end quantum neural network, TTN-VQC, which consists of a quantum tensor network based on a tensor-train network (TTN) for dimensionality reduction and a VQC for functional regression.

Dimensionality Reduction regression

Sharp-MAML: Sharpness-Aware Model-Agnostic Meta Learning

1 code implementation8 Jun 2022 Momin Abbas, Quan Xiao, Lisha Chen, Pin-Yu Chen, Tianyi Chen

Model-agnostic meta learning (MAML) is currently one of the dominating approaches for few-shot meta-learning.

Meta-Learning

Learning Geometrically Disentangled Representations of Protein Folding Simulations

no code implementations20 May 2022 N. Joseph Tatro, Payel Das, Pin-Yu Chen, Vijil Chenthamarakshan, Rongjie Lai

Massive molecular simulations of drug-target proteins have been used as a tool to understand disease mechanism and develop therapeutics.

Protein Folding

A Word is Worth A Thousand Dollars: Adversarial Attack on Tweets Fools Stock Predictions

1 code implementation NAACL 2022 Yong Xie, Dakuo Wang, Pin-Yu Chen, JinJun Xiong, Sijia Liu, Sanmi Koyejo

More and more investors and machine learning models rely on social media (e.g., Twitter and Reddit) to gather real-time information and sentiment to predict stock price movements.

Adversarial Attack Combinatorial Optimization +1

Evaluating the Adversarial Robustness for Fourier Neural Operators

no code implementations8 Apr 2022 Abolaji D. Adesoji, Pin-Yu Chen

In recent years, Machine-Learning (ML)-driven approaches have been widely used in scientific discovery domains.

Adversarial Robustness Super-Resolution

Treatment Learning Transformer for Noisy Image Classification

no code implementations29 Mar 2022 Chao-Han Huck Yang, I-Te Danny Hung, Yi-Chieh Liu, Pin-Yu Chen

In this work, we incorporate this binary information of "existence of noise" as treatment into image classification tasks to improve prediction accuracy by jointly estimating their treatment effects.

Benchmarking Classification +3

Towards Creativity Characterization of Generative Models via Group-based Subset Scanning

no code implementations1 Mar 2022 Celia Cintas, Payel Das, Brian Quanz, Girmaw Abebe Tadesse, Skyler Speakman, Pin-Yu Chen

We propose group-based subset scanning to identify, quantify, and characterize creative processes by detecting a subset of anomalous node-activations in the hidden layers of the generative models.

Model Reprogramming: Resource-Efficient Cross-Domain Machine Learning

1 code implementation22 Feb 2022 Pin-Yu Chen

In data-rich domains such as vision, language, and speech, deep learning prevails to deliver high-performance task-specific models and can even learn general task-agnostic representations for efficient finetuning to downstream tasks.

BIG-bench Machine Learning Transfer Learning

When BERT Meets Quantum Temporal Convolution Learning for Text Classification in Heterogeneous Computing

no code implementations17 Feb 2022 Chao-Han Huck Yang, Jun Qi, Samuel Yen-Chi Chen, Yu Tsao, Pin-Yu Chen

Our experiments on intent classification show that our proposed BERT-QTC model attains competitive experimental results in the Snips and ATIS spoken language datasets.

Federated Learning intent-classification +4

Holistic Adversarial Robustness of Deep Learning Models

no code implementations15 Feb 2022 Pin-Yu Chen, Sijia Liu

Adversarial robustness studies the worst-case performance of a machine learning model to ensure safety and reliability.

Adversarial Robustness

Towards Compositional Adversarial Robustness: Generalizing Adversarial Training to Composite Semantic Perturbations

1 code implementation CVPR 2023 Lei Hsiung, Yun-Yun Tsai, Pin-Yu Chen, Tsung-Yi Ho

We then propose generalized adversarial training (GAT) to extend model robustness from $\ell_{p}$-ball to composite semantic perturbations, such as the combination of Hue, Saturation, Brightness, Contrast, and Rotation.

Adversarial Robustness Scheduling
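Two of the semantic perturbations that GAT composes can be sketched on a flat list of pixel intensities: a brightness shift followed by a contrast scaling about the mean. Hue, saturation, rotation, and the attack-order scheduling of the paper are omitted; the parameter values are illustrative.

```python
def composite_perturbation(pixels, brightness=0.1, contrast=1.2):
    """Apply a brightness shift, then a contrast scaling about the
    mean, to intensities in [0, 1], clipping back to the valid range.
    A minimal sketch of composing semantic perturbations, not the
    paper's attack."""
    shifted = [p + brightness for p in pixels]
    mean = sum(shifted) / len(shifted)
    scaled = [mean + contrast * (p - mean) for p in shifted]
    return [min(1.0, max(0.0, p)) for p in scaled]  # clip to [0, 1]
```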

Auto-Transfer: Learning to Route Transferrable Representations

1 code implementation2 Feb 2022 Keerthiram Murugesan, Vijay Sadashivaiah, Ronny Luss, Karthikeyan Shanmugam, Pin-Yu Chen, Amit Dhurandhar

Knowledge transfer between heterogeneous source and target networks and tasks has received a lot of attention in recent times as large amounts of quality labeled data can be difficult to obtain in many applications.

Transfer Learning

How does unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis

no code implementations21 Jan 2022 Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong

Self-training, a semi-supervised learning algorithm, leverages a large amount of unlabeled data to improve learning when the labeled data are limited.

Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics

no code implementations11 Jan 2022 Chunheng Jiang, Tejaswini Pedapati, Pin-Yu Chen, Yizhou Sun, Jianxi Gao

To this end, we construct a network mapping $\phi$, converting a neural network $G_A$ to a directed line graph $G_B$ that is defined on those edges in $G_A$.

Model Selection
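The mapping from a network $G_A$ to a directed line graph $G_B$ is the standard line-graph construction: each edge of $G_A$ becomes a node of $G_B$, with an edge between consecutive pairs. What the paper layers on top of this mapping (edge dynamics for model selection) is not reproduced here.

```python
def line_graph(edges):
    """Directed line graph of a graph given as an edge list: each
    edge (u, v) becomes a node, and there is an edge from (u, v) to
    (v, w) whenever the head of one edge is the tail of the next."""
    nodes = list(edges)
    new_edges = [(e1, e2) for e1 in edges for e2 in edges if e1[1] == e2[0]]
    return nodes, new_edges
```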

Best Arm Identification in Contaminated Stochastic Bandits

no code implementations NeurIPS 2021 Arpan Mukherjee, Ali Tajer, Pin-Yu Chen, Payel Das

Owing to the adversarial contamination of the rewards, each arm's mean is only partially identifiable.

Certified Adversarial Defenses Meet Out-of-Distribution Corruptions: Benchmarking Robustness and Simple Baselines

no code implementations1 Dec 2021 Jiachen Sun, Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Dan Hendrycks, Jihun Hamm, Z. Morley Mao

To alleviate this issue, we propose a novel data augmentation scheme, FourierMix, that produces augmentations to improve the spectral coverage of the training data.

Adversarial Robustness Benchmarking +1

Formalizing Generalization and Adversarial Robustness of Neural Networks to Weight Perturbations

no code implementations NeurIPS 2021 Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen

Studying the sensitivity of weight perturbation in neural networks and its impacts on model performance, including generalization and robustness, is an active research topic due to its implications on a wide range of machine learning tasks such as model compression, generalization gap assessment, and adversarial attacks.

Adversarial Robustness Model Compression

Pessimistic Model Selection for Offline Deep Reinforcement Learning

no code implementations29 Nov 2021 Chao-Han Huck Yang, Zhengling Qi, Yifan Cui, Pin-Yu Chen

Deep Reinforcement Learning (DRL) has demonstrated great potential in solving sequential decision-making problems in many applications.

Decision Making Model Selection +2

Make an Omelette with Breaking Eggs: Zero-Shot Learning for Novel Attribute Synthesis

no code implementations28 Nov 2021 Yu-Hsuan Li, Tzu-Yin Chao, Ching-Chun Huang, Pin-Yu Chen, Wei-Chen Chiu

Basically, given only a small set of detectors that are learned to recognize some manually annotated attributes (i.e., the seen attributes), we aim to synthesize the detectors of novel attributes in a zero-shot learning manner.

Classification Zero-Shot Learning

Meta Adversarial Perturbations

no code implementations AAAI Workshop AdvML 2022 Chia-Hung Yuan, Pin-Yu Chen, Chia-Mu Yu

A plethora of attack methods have been proposed to generate adversarial examples, among which the iterative methods have been demonstrated the ability to find a strong attack.

CAFE: Catastrophic Data Leakage in Vertical Federated Learning

1 code implementation26 Oct 2021 Xiao Jin, Pin-Yu Chen, Chia-Yi Hsu, Chia-Mu Yu, Tianyi Chen

We name our proposed method as catastrophic data leakage in vertical federated learning (CAFE).

Federated Learning

How and When Adversarial Robustness Transfers in Knowledge Distillation?

no code implementations22 Oct 2021 Rulin Shao, JinFeng Yi, Pin-Yu Chen, Cho-Jui Hsieh

Our comprehensive analysis shows several novel insights that (1) With KDIGA, students can preserve or even exceed the adversarial robustness of the teacher model, even when their models have fundamentally different architectures; (2) KDIGA enables robustness to transfer to pre-trained students, such as KD from an adversarially trained ResNet to a pre-trained ViT, without loss of clean accuracy; and (3) Our derived local linearity bounds for characterizing adversarial robustness in KD are consistent with the empirical results.

Adversarial Robustness Knowledge Distillation +1

Robust Event Classification Using Imperfect Real-world PMU Data

no code implementations19 Oct 2021 Yunchuan Liu, Lei Yang, Amir Ghasemkhani, Hanif Livani, Virgilio A. Centeno, Pin-Yu Chen, Junshan Zhang

Specifically, the data preprocessing step addresses the data quality issues of PMU measurements (e.g., bad data and missing data); in the fine-grained event data extraction step, a model-free event detection method is developed to accurately localize the events from the inaccurate event timestamps in the event logs; and the feature engineering step constructs the event features based on the patterns of different event types, in order to improve the performance and the interpretability of the event classifiers.

Classification Event Detection +1

Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks

no code implementations12 Oct 2021 Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong

Moreover, when the algorithm for training a pruned neural network is specified as an (accelerated) stochastic gradient descent algorithm, we theoretically show that the number of samples required for achieving zero generalization error is proportional to the number of the non-pruned weights in the hidden layer.

Neural Model Reprogramming with Similarity Based Mapping for Low-Resource Spoken Command Classification

1 code implementation8 Oct 2021 Hao Yen, Pin-Jui Ku, Chao-Han Huck Yang, Hu Hu, Sabato Marco Siniscalchi, Pin-Yu Chen, Yu Tsao

In this study, we propose a novel adversarial reprogramming (AR) approach for low-resource spoken command recognition (SCR), and build an AR-SCR system.

Spoken Command Recognition Transfer Learning

QTN-VQC: An End-to-End Learning framework for Quantum Neural Networks

no code implementations6 Oct 2021 Jun Qi, Chao-Han Huck Yang, Pin-Yu Chen

The advent of noisy intermediate-scale quantum (NISQ) computers raises a crucial challenge to design quantum neural networks for fully quantum learning tasks.

How unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis

no code implementations ICLR 2022 Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong

Self-training, a semi-supervised learning algorithm, leverages a large amount of unlabeled data to improve learning when the labeled data are limited.

Tactics on Refining Decision Boundary for Improving Certification-based Robust Training

no code implementations29 Sep 2021 Wang Zhang, Lam M. Nguyen, Subhro Das, Pin-Yu Chen, Sijia Liu, Alexandre Megretski, Luca Daniel, Tsui-Wei Weng

In verification-based robust training, existing methods utilize relaxation based methods to bound the worst case performance of neural networks given certain perturbation.

Less is More: Dimension Reduction Finds On-Manifold Adversarial Examples in Hard-Label Attacks

no code implementations29 Sep 2021 Washington Garcia, Pin-Yu Chen, Somesh Jha, Hamilton Scott Clouse, Kevin R. B. Butler

It was recently shown in the gradient-level setting that regular adversarial examples leave the data manifold, while their on-manifold counterparts are in fact generalization errors.

Dimensionality Reduction Image Classification

Auto-Transfer: Learning to Route Transferable Representations

no code implementations ICLR 2022 Keerthiram Murugesan, Vijay Sadashivaiah, Ronny Luss, Karthikeyan Shanmugam, Pin-Yu Chen, Amit Dhurandhar

Knowledge transfer between heterogeneous source and target networks and tasks has received a lot of attention in recent times as large amounts of quality labelled data can be difficult to obtain in many applications.

Transfer Learning

Benchmarking Machine Learning Robustness in Covid-19 Spike Sequence Classification

no code implementations29 Sep 2021 Sarwan Ali, Bikram Sahoo, Pin-Yu Chen, Murray Patterson

The rapid spread of the COVID-19 pandemic has resulted in an unprecedented amount of sequence data of the SARS-CoV-2 viral genome -- millions of sequences and counting.

Benchmarking BIG-bench Machine Learning +1

Certified Robustness for Free in Differentially Private Federated Learning

no code implementations29 Sep 2021 Chulin Xie, Yunhui Long, Pin-Yu Chen, Krishnaram Kenthapadi, Bo Li

Federated learning (FL) provides an efficient training paradigm to jointly train a global model leveraging data from distributed users.

Federated Learning

Real-World Adversarial Examples involving Makeup Application

no code implementations4 Sep 2021 Chang-Sheng Lin, Chia-Yi Hsu, Pin-Yu Chen, Chia-Mu Yu

The Cycle-GAN is used to generate adversarial makeup, and the architecture of the victimized classifier is VGG 16.

Adversarial Attack Face Recognition +1

Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning

1 code implementation NeurIPS 2021 Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm

Unsupervised domain adaptation (UDA) enables cross-domain learning without target domain labels by transferring knowledge from a labeled source domain whose distribution differs from that of the target.

Data Poisoning Domain Generalization +1

MAML is a Noisy Contrastive Learner in Classification

1 code implementation ICLR 2022 Chia-Hsiang Kao, Wei-Chen Chiu, Pin-Yu Chen

Model-agnostic meta-learning (MAML) is one of the most popular and widely adopted meta-learning algorithms, achieving remarkable success in various learning problems.

Classification Few-Shot Learning

Fold2Seq: A Joint Sequence(1D)-Fold(3D) Embedding-based Generative Model for Protein Design

1 code implementation24 Jun 2021 Yue Cao, Payel Das, Vijil Chenthamarakshan, Pin-Yu Chen, Igor Melnyk, Yang Shen

Designing novel protein sequences for a desired 3D topological fold is a fundamental yet non-trivial task in protein engineering.

Generalizing Adversarial Training to Composite Semantic Perturbations

no code implementations ICML Workshop AML 2021 Yun-Yun Tsai, Lei Hsiung, Pin-Yu Chen, Tsung-Yi Ho

We then propose generalized adversarial training (GAT) to extend model robustness from $\ell_{p}$ norm to composite semantic perturbations, such as Hue, Saturation, Brightness, Contrast, and Rotation.

Scheduling

CRFL: Certifiably Robust Federated Learning against Backdoor Attacks

1 code implementation15 Jun 2021 Chulin Xie, Minghao Chen, Pin-Yu Chen, Bo Li

Our method exploits clipping and smoothing on model parameters to control the global model smoothness, which yields a sample-wise robustness certification on backdoors with limited magnitude.

Federated Learning
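The two server-side operations the snippet names, clipping and smoothing of model parameters, can be sketched as an L2 projection followed by Gaussian noise. The hyperparameter values below are illustrative assumptions, not the paper's.

```python
import math
import random

def clip_and_perturb(params, clip_norm=1.0, sigma=0.01, seed=0):
    """Project a parameter vector onto an L2 ball of radius
    `clip_norm`, then add Gaussian smoothing noise -- the mechanism
    that controls global-model smoothness for the sample-wise
    robustness certification."""
    norm = math.sqrt(sum(p * p for p in params))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [p * scale for p in params]
    rng = random.Random(seed)
    return [p + rng.gauss(0.0, sigma) for p in clipped]
```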

Predicting Deep Neural Network Generalization with Perturbation Response Curves

no code implementations NeurIPS 2021 Yair Schiff, Brian Quanz, Payel Das, Pin-Yu Chen

However, despite these successes, the recent Predicting Generalization in Deep Learning (PGDL) NeurIPS 2020 competition suggests that there is a need for more robust and efficient measures of network generalization.

Simple Transparent Adversarial Examples

no code implementations20 May 2021 Jaydeep Borkar, Pin-Yu Chen

We propose two new aspects of adversarial image generation methods and evaluate them on the robustness of optical character recognition and object detection APIs deployed in real-world settings, such as sightengine.com, picpurify.com, Google Cloud Vision API, and Microsoft Azure's Computer Vision API.

Image Generation Object Detection +2

Vision Transformers are Robust Learners

1 code implementation17 May 2021 Sayak Paul, Pin-Yu Chen

Transformers, composed of multiple self-attention layers, hold strong promises toward a generic learning primitive applicable to different data modalities, including the recent breakthroughs in computer vision achieving state-of-the-art (SOTA) standard accuracy.

Anomaly Detection Image Classification +1

High-Robustness, Low-Transferability Fingerprinting of Neural Networks

no code implementations14 May 2021 Siyue Wang, Xiao Wang, Pin-Yu Chen, Pu Zhao, Xue Lin

This paper proposes Characteristic Examples for effectively fingerprinting deep neural networks, featuring high-robustness to the base model against model pruning as well as low-transferability to unassociated models.

Vocal Bursts Intensity Prediction

Gi and Pal Scores: Deep Neural Network Generalization Statistics

no code implementations8 Apr 2021 Yair Schiff, Brian Quanz, Payel Das, Pin-Yu Chen

The field of Deep Learning is rich with empirical evidence of human-like performance on a variety of regression, classification, and control tasks.

regression

Towards creativity characterization of generative models via group-based subset scanning

no code implementations1 Apr 2021 Celia Cintas, Payel Das, Brian Quanz, Skyler Speakman, Victor Akinwande, Pin-Yu Chen

We propose group-based subset scanning to quantify, detect, and characterize creative processes by detecting a subset of anomalous node-activations in the hidden layers of generative models.

On the Adversarial Robustness of Vision Transformers

1 code implementation29 Mar 2021 Rulin Shao, Zhouxing Shi, JinFeng Yi, Pin-Yu Chen, Cho-Jui Hsieh

Following the success in advancing natural language processing and understanding, transformers are expected to bring revolutionary changes to computer vision.

Adversarial Robustness

Don't Forget to Sign the Gradients!

1 code implementation5 Mar 2021 Omid Aramoon, Pin-Yu Chen, Gang Qu

Engineering a top-notch deep learning model is an expensive procedure that involves collecting data, hiring human resources with expertise in machine learning, and providing high computational resources.

Image Classification

Hard-label Manifolds: Unexpected Advantages of Query Efficiency for Finding On-manifold Adversarial Examples

no code implementations4 Mar 2021 Washington Garcia, Pin-Yu Chen, Somesh Jha, Scott Clouse, Kevin R. B. Butler

It was recently shown in the gradient-level setting that regular adversarial examples leave the data manifold, while their on-manifold counterparts are in fact generalization errors.

Dimensionality Reduction Image Classification

Formalizing Generalization and Robustness of Neural Networks to Weight Perturbations

no code implementations3 Mar 2021 Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen

Studying the sensitivity of weight perturbation in neural networks and its impacts on model performance, including generalization and robustness, is an active research topic due to its implications on a wide range of machine learning tasks such as model compression, generalization gap assessment, and adversarial attacks.

Model Compression

Adversarial Examples can be Effective Data Augmentation for Unsupervised Machine Learning

1 code implementation2 Mar 2021 Chia-Yi Hsu, Pin-Yu Chen, Songtao Lu, Sijia Liu, Chia-Mu Yu

In this paper, we propose a framework of generating adversarial examples for unsupervised models and demonstrate novel applications to data augmentation.

BIG-bench Machine Learning Contrastive Learning +2

Domain Adaptation for Learning Generator from Paired Few-Shot Data

no code implementations25 Feb 2021 Chun-Chih Teng, Pin-Yu Chen, Wei-Chen Chiu

We propose a Paired Few-shot GAN (PFS-GAN) model for learning generators with sufficient source data and a few target data.

Domain Adaptation Few-Shot Learning

Non-Singular Adversarial Robustness of Neural Networks

no code implementations23 Feb 2021 Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen

In this paper, we formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.

Adversarial Robustness

On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning

1 code implementation ICLR 2021 Ren Wang, Kaidi Xu, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Chuang Gan, Meng Wang

Despite the generalization power of the meta-model, it remains elusive how adversarial robustness can be maintained by MAML in few-shot learning.

Adversarial Attack Adversarial Robustness +3

Training a Resilient Q-Network against Observational Interference

1 code implementation18 Feb 2021 Chao-Han Huck Yang, I-Te Danny Hung, Yi Ouyang, Pin-Yu Chen

Deep reinforcement learning (DRL) has demonstrated impressive performance in various gaming simulators and real-world applications.

Causal Inference

Meta Federated Learning

no code implementations10 Feb 2021 Omid Aramoon, Pin-Yu Chen, Gang Qu, Yuan Tian

Due to its distributed methodology alongside its privacy-preserving features, Federated Learning (FL) is vulnerable to training time adversarial attacks.

Federated Learning Privacy Preserving

Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning

no code implementations1 Feb 2021 Syed Zawad, Ahsan Ali, Pin-Yu Chen, Ali Anwar, Yi Zhou, Nathalie Baracaldo, Yuan Tian, Feng Yan

Data heterogeneity has been identified as one of the key features in federated learning but is often overlooked through the lens of robustness to adversarial attacks.

Federated Learning

Fast Training of Provably Robust Neural Networks by SingleProp

no code implementations1 Feb 2021 Akhilan Boopathy, Tsui-Wei Weng, Sijia Liu, Pin-Yu Chen, Gaoyuan Zhang, Luca Daniel

Recent works have developed several methods of defending neural networks against adversarial attacks with certified guarantees.

Fake it Till You Make it: Self-Supervised Semantic Shifts for Monolingual Word Embedding Tasks

no code implementations30 Jan 2021 Maurício Gruppi, Sibel Adali, Pin-Yu Chen

The goal of lexical semantic change (LSC) detection is to characterize and quantify language variations with respect to word meaning, and to measure how distinct two language sources (that is, people or language models) are.

Adversarial Sample Enhanced Domain Adaptation: A Case Study on Predictive Modeling with Electronic Health Records

no code implementations13 Jan 2021 Yiqin Yu, Pin-Yu Chen, Yuan Zhou, Jing Mei

With the successful adoption of machine learning on electronic health records (EHRs), numerous computational models have been deployed to address a variety of clinical problems.

Data Augmentation Domain Adaptation

Robust Text CAPTCHAs Using Adversarial Examples

no code implementations7 Jan 2021 Rulin Shao, Zhouxing Shi, JinFeng Yi, Pin-Yu Chen, Cho-Jui Hsieh

At the second stage, we design and apply a highly transferable adversarial attack for text CAPTCHAs to better obstruct CAPTCHA solvers.

Adversarial Attack Optical Character Recognition (OCR)

ProGAE: A Geometric Autoencoder-based Generative Model for Disentangling Protein Dynamics

no code implementations1 Jan 2021 Norman Joseph Tatro, Payel Das, Pin-Yu Chen, Vijil Chenthamarakshan, Rongjie Lai

Empowered by the disentangled latent space learning, the extrinsic latent embedding is successfully used for classification or property prediction of different drugs bound to a specific protein.

CAFE: Catastrophic Data Leakage in Federated Learning

no code implementations1 Jan 2021 Xiao Jin, Ruijie Du, Pin-Yu Chen, Tianyi Chen

In this paper, we revisit this defense premise and propose an advanced data leakage attack to efficiently recover batch data from the shared aggregated gradients.

Federated Learning

Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks

no code implementations NeurIPS 2021 Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong

Moreover, when the algorithm for training a sparse neural network is specified as (accelerated) stochastic gradient descent, we theoretically show that the number of samples required for achieving zero generalization error is proportional to the number of the non-pruned model weights in the hidden layer.

Self-Progressing Robust Training

1 code implementation22 Dec 2020 Minhao Cheng, Pin-Yu Chen, Sijia Liu, Shiyu Chang, Cho-Jui Hsieh, Payel Das

Enhancing model robustness under new and even adversarial environments is a crucial milestone toward building trustworthy machine learning systems.

Adversarial Robustness

Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework

no code implementations21 Dec 2020 Pranay Sharma, Kaidi Xu, Sijia Liu, Pin-Yu Chen, Xue Lin, Pramod K. Varshney

In this work, we focus on the study of stochastic zeroth-order (ZO) optimization which does not require first-order gradient information and uses only function evaluations.

Reprogramming Language Models for Molecular Representation Learning

no code implementations7 Dec 2020 Ria Vinod, Pin-Yu Chen, Payel Das

Recent advancements in transfer learning have made it a promising approach for domain adaptation via transfer of learned representations.

Dictionary Learning Domain Adaptation +2

How Robust are Randomized Smoothing based Defenses to Data Poisoning?

1 code implementation CVPR 2021 Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm

Moreover, our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods such as Gaussian data augmentation (Cohen et al., 2019), MACER (Zhai et al., 2020), and SmoothAdv (Salman et al., 2019) that achieve high certified adversarial robustness.

Adversarial Robustness Bilevel Optimization +2
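The defense targeted by this attack, randomized smoothing, replaces a base classifier with its majority vote under Gaussian input noise. A minimal sketch of that smoothed prediction step, assuming a toy one-line base classifier; the `sigma`, sample count, and classifier are illustrative choices, not the paper's setup:

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, seed=0):
    """Majority-vote prediction of a Gaussian-smoothed classifier.

    base_classifier maps a 1-D input array to an integer class label.
    """
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        label = base_classifier(noisy)
        votes[label] = votes.get(label, 0) + 1
    # The most frequent label under noise is the smoothed prediction.
    return max(votes, key=votes.get)

# Toy base classifier (assumed for illustration): class 1 iff the mean coordinate is positive.
toy = lambda v: int(v.mean() > 0)
label = smoothed_predict(toy, np.array([0.5, 0.5]))
```

The data-poisoning attack studied in the paper works by corrupting the training set so that even this noise-averaged vote becomes unreliable.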

SChME at SemEval-2020 Task 1: A Model Ensemble for Detecting Lexical Semantic Change

1 code implementation SEMEVAL 2020 Maurício Gruppi, Sibel Adali, Pin-Yu Chen

Our results show evidence that the number of landmarks used for alignment has a direct impact on the predictive performance of the model.

Change Detection Word Embeddings

Optimizing Molecules using Efficient Queries from Property Evaluations

1 code implementation3 Nov 2020 Samuel Hoffman, Vijil Chenthamarakshan, Kahini Wadhawan, Pin-Yu Chen, Payel Das

Machine learning based methods have shown potential for optimizing existing molecules with more desirable properties, a critical step towards accelerating new chemical discovery.

Decentralizing Feature Extraction with Quantum Convolutional Neural Network for Automatic Speech Recognition

2 code implementations26 Oct 2020 Chao-Han Huck Yang, Jun Qi, Samuel Yen-Chi Chen, Pin-Yu Chen, Sabato Marco Siniscalchi, Xiaoli Ma, Chin-Hui Lee

Testing on the Google Speech Commands Dataset, the proposed QCNN encoder attains a competitive accuracy of 95.12% in a decentralized model, which is better than the previous architectures using centralized RNN models with convolutional features.

 Ranked #1 on Keyword Spotting on Google Speech Commands (10-keyword Speech Commands dataset metric)

Automatic Speech Recognition Automatic Speech Recognition (ASR) +3

Higher-Order Certification for Randomized Smoothing

no code implementations NeurIPS 2020 Jeet Mohapatra, Ching-Yun Ko, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel

We also provide a framework that generalizes the calculation for certification using higher-order information.

Optimizing Mode Connectivity via Neuron Alignment

1 code implementation NeurIPS 2020 N. Joseph Tatro, Pin-Yu Chen, Payel Das, Igor Melnyk, Prasanna Sattigeri, Rongjie Lai

Yet, current curve finding algorithms do not consider the influence of symmetry in the loss surface created by model weight permutations.

Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases

1 code implementation ECCV 2020 Ren Wang, Gaoyuan Zhang, Sijia Liu, Pin-Yu Chen, JinJun Xiong, Meng Wang

When the training data are maliciously tampered, the predictions of the acquired deep neural network (DNN) can be manipulated by an adversary known as the Trojan attack (or poisoning backdoor attack).

Backdoor Attack

Proper Network Interpretability Helps Adversarial Robustness in Classification

1 code implementation ICML 2020 Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, Luca Daniel

Recent works have empirically shown that there exist adversarial examples that can be hidden from neural network interpretability (namely, making network interpretation maps visually similar), or interpretability is itself susceptible to adversarial attacks.

Adversarial Robustness Classification +3

Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case

no code implementations ICML 2020 Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong

In this paper, we provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.

Binary Classification General Classification +1

A Dynamical Systems Approach for Convergence of the Bayesian EM Algorithm

no code implementations23 Jun 2020 Orlando Romero, Subhro Das, Pin-Yu Chen, Sérgio Pequito

Among the recent advances in systems and control (S&C)-based analysis of optimization algorithms, not enough work has been dedicated specifically to machine learning (ML) algorithms and their applications.

A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning

no code implementations11 Jun 2020 Sijia Liu, Pin-Yu Chen, Bhavya Kailkhura, Gaoyuan Zhang, Alfred Hero, Pramod K. Varshney

Zeroth-order (ZO) optimization is a subset of gradient-free optimization that emerges in many signal processing and machine learning applications.

BIG-bench Machine Learning Management

DBA: Distributed Backdoor Attacks against Federated Learning

2 code implementations ICLR 2020 Chulin Xie, Keli Huang, Pin-Yu Chen, Bo Li

Compared to standard centralized backdoors, we show that DBA is substantially more persistent and stealthy against FL on diverse datasets such as finance and image data.

Backdoor Attack Feature Importance +1

Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness

3 code implementations ICLR 2020 Pu Zhao, Pin-Yu Chen, Payel Das, Karthikeyan Natesan Ramamurthy, Xue Lin

In this work, we propose to employ mode connectivity in loss landscapes to study the adversarial robustness of deep neural networks, and provide novel methods for improving this robustness.

Adversarial Robustness

Characterizing Speech Adversarial Examples Using Self-Attention U-Net Enhancement

no code implementations31 Mar 2020 Chao-Han Huck Yang, Jun Qi, Pin-Yu Chen, Xiaoli Ma, Chin-Hui Lee

Recent studies have highlighted adversarial examples as ubiquitous threats to the deep neural network (DNN) based speech recognition systems.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +3

Hidden Cost of Randomized Smoothing

no code implementations2 Mar 2020 Jeet Mohapatra, Ching-Yun Ko, Tsui-Wei Weng, Sijia Liu, Pin-Yu Chen, Luca Daniel

The fragility of modern machine learning models has drawn a considerable amount of attention from both academia and the public.

Defending against Backdoor Attack on Deep Neural Networks

no code implementations26 Feb 2020 Kaidi Xu, Sijia Liu, Pin-Yu Chen, Pu Zhao, Xue Lin

Although deep neural networks (DNNs) have achieved a great success in various computer vision tasks, it is recently found that they are vulnerable to adversarial attacks.

Backdoor Attack Data Poisoning

Towards an Efficient and General Framework of Robust Training for Graph Neural Networks

no code implementations25 Feb 2020 Kaidi Xu, Sijia Liu, Pin-Yu Chen, Mengshu Sun, Caiwen Ding, Bhavya Kailkhura, Xue Lin

To overcome these limitations, we propose a general framework which leverages the greedy search algorithms and zeroth-order methods to obtain robust GNNs in a generic and an efficient manner.

Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning

no code implementations20 Feb 2020 Chao-Han Huck Yang, Jun Qi, Pin-Yu Chen, Yi Ouyang, I-Te Danny Hung, Chin-Hui Lee, Xiaoli Ma

Recent deep neural network-based techniques, especially those equipped with the ability of self-adaptation at the system level such as deep reinforcement learning (DRL), are shown to possess many advantages for optimizing robot learning systems (e.g., autonomous navigation and continuous robot arm control).

Autonomous Navigation reinforcement-learning +1

AdvMS: A Multi-source Multi-cost Defense Against Adversarial Attacks

no code implementations19 Feb 2020 Xiao Wang, Siyue Wang, Pin-Yu Chen, Xue Lin, Peter Chin

Designing effective defense against adversarial attacks is a crucial topic as deep neural networks have been proliferated rapidly in many security-critical domains such as malware detection and self-driving cars.

Malware Detection Self-Driving Cars

Towards Query-Efficient Black-Box Adversary with Zeroth-Order Natural Gradient Descent

1 code implementation18 Feb 2020 Pu Zhao, Pin-Yu Chen, Siyue Wang, Xue Lin

Despite the great achievements of the modern deep neural networks (DNNs), the vulnerability/robustness of state-of-the-art DNNs raises security concerns in many application domains requiring high reliability.

Adversarial Attack Image Classification

Block Switching: A Stochastic Approach for Deep Learning Security

no code implementations18 Feb 2020 Xiao Wang, Siyue Wang, Pin-Yu Chen, Xue Lin, Peter Chin

Recent studies of adversarial attacks have revealed the vulnerability of modern deep learning models.

CAT: Customized Adversarial Training for Improved Robustness

no code implementations17 Feb 2020 Minhao Cheng, Qi Lei, Pin-Yu Chen, Inderjit Dhillon, Cho-Jui Hsieh

Adversarial training has become one of the most effective methods for improving robustness of neural networks.

Reinforcement-Learning based Portfolio Management with Augmented Asset Movement Prediction States

1 code implementation9 Feb 2020 Yunan Ye, Hengzhi Pei, Boxin Wang, Pin-Yu Chen, Yada Zhu, Jun Xiao, Bo Li

Our framework aims to address two unique challenges in financial PM: (1) data heterogeneity -- the collected information for each asset is usually diverse, noisy and imbalanced (e.g., news articles); and (2) environment uncertainty -- the financial market is versatile and non-stationary.

Management reinforcement-learning +1

Towards Verifying Robustness of Neural Networks Against Semantic Perturbations

1 code implementation19 Dec 2019 Jeet Mohapatra, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel

Verifying robustness of neural networks given a specified threat model is a fundamental yet challenging task.

Image Classification

Adversarial T-shirt! Evading Person Detectors in A Physical World

1 code implementation ECCV 2020 Kaidi Xu, Gaoyuan Zhang, Sijia Liu, Quanfu Fan, Mengshu Sun, Hongge Chen, Pin-Yu Chen, Yanzhi Wang, Xue Lin

To the best of our knowledge, this is the first work that models the effect of deformation for designing physical adversarial examples with respect to non-rigid objects such as T-shirts.

Is There a Trade-Off Between Fairness and Accuracy? A Perspective Using Mismatched Hypothesis Testing

no code implementations ICML 2020 Sanghamitra Dutta, Dennis Wei, Hazar Yueksel, Pin-Yu Chen, Sijia Liu, Kush R. Varshney

Moreover, the same classifier yields the lack of a trade-off with respect to ideal distributions while yielding a trade-off when accuracy is measured with respect to the given (possibly biased) dataset.

Fairness Two-sample testing

Efficient Training of Robust and Verifiable Neural Networks

no code implementations25 Sep 2019 Akhilan Boopathy, Lily Weng, Sijia Liu, Pin-Yu Chen, Luca Daniel

We propose that many common certified defenses can be viewed under a unified framework of regularization.

Visual Interpretability Alone Helps Adversarial Robustness

no code implementations25 Sep 2019 Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Pin-Yu Chen, Shiyu Chang, Luca Daniel

Recent works have empirically shown that there exist adversarial examples that can be hidden from neural network interpretability, and interpretability is itself susceptible to adversarial attacks.

Adversarial Robustness

SPROUT: Self-Progressing Robust Training

no code implementations25 Sep 2019 Minhao Cheng, Pin-Yu Chen, Sijia Liu, Shiyu Chang, Cho-Jui Hsieh, Payel Das

Enhancing model robustness under new and even adversarial environments is a crucial milestone toward building trustworthy and reliable machine learning systems.

Adversarial Robustness

Optimizing Loss Landscape Connectivity via Neuron Alignment

no code implementations25 Sep 2019 N. Joseph Tatro, Pin-Yu Chen, Payel Das, Igor Melnyk, Prasanna Sattigeri, Rongjie Lai

Empirically, this initialization is critical for efficiently learning a simple, planar, low-loss curve between networks that successfully generalizes.

Towards A Unified Min-Max Framework for Adversarial Exploration and Robustness

no code implementations25 Sep 2019 Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, Makan Fardad, Bo Li

The worst-case training principle that minimizes the maximal adversarial loss, also known as adversarial training (AT), has shown to be a state-of-the-art approach for enhancing adversarial robustness against norm-ball bounded input perturbations.

Adversarial Attack Adversarial Robustness

Sign-OPT: A Query-Efficient Hard-label Adversarial Attack

1 code implementation ICLR 2020 Minhao Cheng, Simranjit Singh, Patrick Chen, Pin-Yu Chen, Sijia Liu, Cho-Jui Hsieh

We study the most practical problem setup for evaluating adversarial robustness of a machine learning system with limited access: the hard-label black-box attack setting for generating adversarial examples, where limited model queries are allowed and only the decision is provided to a queried data input.

Adversarial Attack Adversarial Robustness +1
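In the hard-label setting described above, the attacker observes only the predicted label, so Sign-OPT-style methods recast the attack as estimating the distance from the input to the decision boundary along a search direction, using nothing but label queries. A hedged sketch of that distance query; the toy model and the `hi`/`tol` parameters are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def boundary_distance(decide, x, y, theta, hi=10.0, tol=1e-3):
    """Distance from x to the decision boundary along direction theta,
    estimated with hard-label queries decide(input) -> class label only."""
    theta = theta / np.linalg.norm(theta)
    if decide(x + hi * theta) == y:
        return np.inf                     # boundary not reached within hi
    lo = 0.0
    while hi - lo > tol:                  # binary search for the label flip
        mid = 0.5 * (lo + hi)
        if decide(x + mid * theta) == y:
            lo = mid
        else:
            hi = mid
    return hi

# Toy hard-label model (assumed): class 0 inside the unit ball, class 1 outside.
decide = lambda v: int(np.linalg.norm(v) > 1.0)
dist = boundary_distance(decide, np.zeros(2), 0, np.array([1.0, 0.0]))
```

Minimizing this distance over directions, with only its sign-based gradient information, is what makes the approach query-efficient.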

Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses

1 code implementation20 Aug 2019 Xiao Wang, Siyue Wang, Pin-Yu Chen, Yanzhi Wang, Brian Kulis, Xue Lin, Peter Chin

However, one critical drawback of current defenses is that the robustness enhancement is at the cost of noticeable performance degradation on legitimate data, e.g., large drop in test accuracy.

Adversarial Robustness

On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method

1 code implementation ICCV 2019 Pu Zhao, Sijia Liu, Pin-Yu Chen, Nghia Hoang, Kaidi Xu, Bhavya Kailkhura, Xue Lin

Robust machine learning is currently one of the most prominent topics which could potentially help shaping a future of advanced AI platforms that not only perform well in average cases but also in worst cases or adverse situations.

Adversarial Attack Bayesian Optimization +1

Variational Quantum Circuits for Deep Reinforcement Learning

1 code implementation30 Jun 2019 Samuel Yen-Chi Chen, Chao-Han Huck Yang, Jun Qi, Pin-Yu Chen, Xiaoli Ma, Hsi-Sheng Goan

To the best of our knowledge, this work is the first proof-of-principle demonstration of variational quantum circuits to approximate the deep $Q$-value function for decision-making and policy-selection reinforcement learning with experience replay and target network.

BIG-bench Machine Learning Decision Making +3

Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective

1 code implementation10 Jun 2019 Kaidi Xu, Hongge Chen, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Mingyi Hong, Xue Lin

Graph neural networks (GNNs), which apply deep neural networks to graph data, have achieved significant performance for the task of semi-supervised node classification.

Adversarial Robustness Classification +2

Adversarial Attack Generation Empowered by Min-Max Optimization

1 code implementation NeurIPS 2021 Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, Makan Fardad, Bo Li

In this paper, we show how a general framework of min-max optimization over multiple domains can be leveraged to advance the design of different types of adversarial attacks.

Adversarial Attack Adversarial Robustness

Model Agnostic Contrastive Explanations for Structured Data

no code implementations31 May 2019 Amit Dhurandhar, Tejaswini Pedapati, Avinash Balakrishnan, Pin-Yu Chen, Karthikeyan Shanmugam, Ruchir Puri

Recently, a method [7] was proposed to generate contrastive explanations for differentiable models such as deep neural networks, where one has complete access to the model.

Leveraging Latent Features for Local Explanations

2 code implementations29 May 2019 Ronny Luss, Pin-Yu Chen, Amit Dhurandhar, Prasanna Sattigeri, Yunfeng Zhang, Karthikeyan Shanmugam, Chun-Chen Tu

As the application of deep neural networks proliferates in numerous areas such as medical imaging, video surveillance, and self-driving cars, the need for explaining the decisions of these models has become a hot research topic, both at the global and local level.

General Classification Self-Driving Cars

Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach

no code implementations ICLR 2019 Minhao Cheng, Thong Le, Pin-Yu Chen, Huan Zhang, JinFeng Yi, Cho-Jui Hsieh

We study the problem of attacking machine learning models in the hard-label black-box setting, where no model information is revealed except that the attacker can make queries to probe the corresponding hard-label decisions.

BIG-bench Machine Learning

signSGD via Zeroth-Order Oracle

no code implementations ICLR 2019 Sijia Liu, Pin-Yu Chen, Xiangyi Chen, Mingyi Hong

Our study shows that ZO signSGD requires $\sqrt{d}$ times more iterations than signSGD, leading to a convergence rate of $O(\sqrt{d}/\sqrt{T})$ under mild conditions, where $d$ is the number of optimization variables, and $T$ is the number of iterations.

Image Classification Stochastic Optimization
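The idea can be sketched concretely: estimate the gradient from function values only, via a two-point difference along a random direction, then step with the elementwise sign of the estimate. This is a minimal illustration rather than the paper's analyzed algorithm; the step size, smoothing radius, and toy objective are assumptions:

```python
import numpy as np

def zo_signsgd(f, x0, lr=0.05, mu=1e-3, steps=200, seed=0):
    """Zeroth-order signSGD: two-point gradient estimate, sign-based update."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    for _ in range(steps):
        u = rng.standard_normal(x.size)
        # Two-point estimator: directional derivative times the direction.
        g = (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
        x -= lr * np.sign(g)              # only the sign of the estimate is used
    return x

# Minimize a simple quadratic; iterates drift toward the optimum at the origin
# and then hover at a step-size-dependent radius around it.
f = lambda v: float(np.sum(v ** 2))
x_final = zo_signsgd(f, np.array([3.0, -2.0]))
```

The $\sqrt{d}$ iteration overhead quoted in the abstract reflects the variance of exactly this kind of random-direction estimate as the dimension $d$ grows.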

When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks

1 code implementation9 Feb 2019 Chao-Han Huck Yang, Yi-Chieh Liu, Pin-Yu Chen, Xiaoli Ma, Yi-Chang James Tsai

To study the intervention effects on pixel-level features for causal reasoning, we introduce pixel-wise masking and adversarial perturbation.

Causal Inference Visual Reasoning

Toward A Neuro-inspired Creative Decoder

no code implementations6 Feb 2019 Payel Das, Brian Quanz, Pin-Yu Chen, Jae-wook Ahn, Dhruv Shah

Creativity, a process that generates novel and meaningful ideas, involves increased association between task-positive (control) and task-negative (default) networks in the human brain.

PROVEN: Certifying Robustness of Neural Networks with a Probabilistic Approach

no code implementations18 Dec 2018 Tsui-Wei Weng, Pin-Yu Chen, Lam M. Nguyen, Mark S. Squillante, Ivan Oseledets, Luca Daniel

With deep neural networks providing state-of-the-art machine learning models for numerous machine learning tasks, quantifying the robustness of these models has become an important area of research.

BIG-bench Machine Learning

CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks

2 code implementations29 Nov 2018 Akhilan Boopathy, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel

This motivates us to propose a general and efficient framework, CNN-Cert, that is capable of certifying robustness on general convolutional neural networks.

Controllability, Multiplexing, and Transfer Learning in Networks using Evolutionary Learning

1 code implementation14 Nov 2018 Rise Ooi, Chao-Han Huck Yang, Pin-Yu Chen, Víctor Eguíluz, Narsis Kiani, Hector Zenil, David Gomez-Cabrero, Jesper Tegnér

Next, (2) the learned networks are technically controllable as only a small number of driver nodes are required to move the system to a new state.

Transfer Learning

Efficient Neural Network Robustness Certification with General Activation Functions

13 code implementations NeurIPS 2018 Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, Luca Daniel

Finding minimum distortion of adversarial examples and thus certifying robustness in neural network classifiers for given data points is known to be a challenging problem.

Efficient Neural Network
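A much simpler relative of such certification frameworks is interval bound propagation, which pushes elementwise input bounds through each layer to obtain a guaranteed output range. The sketch below shows only that simpler idea, not CNN-Cert or CROWN themselves; the tiny ReLU network is an assumed example:

```python
import numpy as np

def ibp_layer(W, b, lo, hi):
    """Propagate elementwise bounds [lo, hi] through x -> W @ x + b."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def certify(W1, b1, W2, b2, x0, eps):
    """Guaranteed output range of W2 @ relu(W1 @ x + b1) + b2 over the
    L-infinity ball of radius eps around x0."""
    lo, hi = ibp_layer(W1, b1, x0 - eps, x0 + eps)
    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)   # ReLU is monotone
    return ibp_layer(W2, b2, lo, hi)

# Certify a 2-input, 1-output toy network on a small input ball.
W1, b1 = np.eye(2), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)
lo, hi = certify(W1, b1, W2, b2, np.array([1.0, 1.0]), 0.1)
```

Methods like CROWN tighten these bounds considerably by using linear rather than constant relaxations of the activation functions.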

Word Mover's Embedding: From Word2Vec to Document Embedding

1 code implementation EMNLP 2018 Lingfei Wu, Ian E. H. Yen, Kun Xu, Fangli Xu, Avinash Balakrishnan, Pin-Yu Chen, Pradeep Ravikumar, Michael J. Witbrock

While the celebrated Word2Vec technique yields semantically rich representations for individual words, there has been relatively less success in extending it to generate unsupervised sentence or document embeddings.

Document Embedding General Classification +5

On Extensions of CLEVER: A Neural Network Robustness Evaluation Algorithm

1 code implementation19 Oct 2018 Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Aurelie Lozano, Cho-Jui Hsieh, Luca Daniel

We apply extreme value theory on the new formal robustness guarantee and the estimated robustness is called second-order CLEVER score.

Characterizing Audio Adversarial Examples Using Temporal Dependency

no code implementations ICLR 2019 Zhuolin Yang, Bo Li, Pin-Yu Chen, Dawn Song

In particular, our results reveal the importance of using the temporal dependency in audio data to gain discriminative power against adversarial examples.

Adversarial Defense Automatic Speech Recognition +2

On The Utility of Conditional Generation Based Mutual Information for Characterizing Adversarial Subspaces

no code implementations24 Sep 2018 Chia-Yi Hsu, Pei-Hsuan Lu, Pin-Yu Chen, Chia-Mu Yu

Recent studies have found that deep learning systems are vulnerable to adversarial examples; e.g., visually unrecognizable adversarial images can easily be crafted to result in misclassification.