Search Results for author: Pin-Yu Chen

Found 149 papers, 63 papers with code

Best Arm Identification in Contaminated Stochastic Bandits

no code implementations NeurIPS 2021 Arpan Mukherjee, Ali Tajer, Pin-Yu Chen, Payel Das

Owing to the adversarial contamination of the rewards, each arm's mean is only partially identifiable.

Certified Adversarial Defenses Meet Out-of-Distribution Corruptions: Benchmarking Robustness and Simple Baselines

no code implementations 1 Dec 2021 Jiachen Sun, Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Dan Hendrycks, Jihun Hamm, Z. Morley Mao

To alleviate this issue, we propose a novel data augmentation scheme, FourierMix, that produces augmentations to improve the spectral coverage of the training data.

Adversarial Robustness Data Augmentation

Formalizing Generalization and Adversarial Robustness of Neural Networks to Weight Perturbations

no code implementations NeurIPS 2021 Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen

Studying the sensitivity of weight perturbation in neural networks and its impact on model performance, including generalization and robustness, is an active research topic due to its implications for a wide range of machine learning tasks such as model compression, generalization gap assessment, and adversarial attacks.

Adversarial Robustness Model Compression

Pessimistic Model Selection for Offline Deep Reinforcement Learning

no code implementations 29 Nov 2021 Chao-Han Huck Yang, Zhengling Qi, Yifan Cui, Pin-Yu Chen

Deep Reinforcement Learning (DRL) has demonstrated great potential in solving sequential decision-making problems in many applications.

Decision Making Model Selection

Make an Omelette with Breaking Eggs: Zero-Shot Learning for Novel Attribute Synthesis

no code implementations 28 Nov 2021 Yu Hsuan Li, Tzu-Yin Chao, Ching-Chun Huang, Pin-Yu Chen, Wei-Chen Chiu

Most of the existing algorithms for zero-shot classification problems typically rely on the attribute-based semantic relations among categories to realize the classification of novel categories without observing any of their instances.

Classification Zero-Shot Learning

Meta Adversarial Perturbations

no code implementations 19 Nov 2021 Chia-Hung Yuan, Pin-Yu Chen, Chia-Mu Yu

A plethora of attack methods have been proposed to generate adversarial examples, among which iterative methods have demonstrated the ability to find strong attacks.

CAFE: Catastrophic Data Leakage in Vertical Federated Learning

1 code implementation 26 Oct 2021 Xiao Jin, Pin-Yu Chen, Chia-Yi Hsu, Chia-Mu Yu, Tianyi Chen

We name our proposed method as catastrophic data leakage in vertical federated learning (CAFE).

Federated Learning

How and When Adversarial Robustness Transfers in Knowledge Distillation?

no code implementations 22 Oct 2021 Rulin Shao, JinFeng Yi, Pin-Yu Chen, Cho-Jui Hsieh

Our comprehensive analysis yields several novel insights: (1) with KDIGA, students can preserve or even exceed the adversarial robustness of the teacher model, even when their models have fundamentally different architectures; (2) KDIGA enables robustness to transfer to pre-trained students, such as KD from an adversarially trained ResNet to a pre-trained ViT, without loss of clean accuracy; and (3) our derived local linearity bounds for characterizing adversarial robustness in KD are consistent with the empirical results.

Adversarial Robustness Knowledge Distillation +1

Robust Event Classification Using Imperfect Real-world PMU Data

no code implementations 19 Oct 2021 Yunchuan Liu, Lei Yang, Amir Ghasemkhani, Hanif Livani, Virgilio A. Centeno, Pin-Yu Chen, Junshan Zhang

Specifically, the data preprocessing step addresses the data quality issues of PMU measurements (e.g., bad data and missing data); in the fine-grained event data extraction step, a model-free event detection method is developed to accurately localize the events from the inaccurate event timestamps in the event logs; and the feature engineering step constructs the event features based on the patterns of different event types, in order to improve the performance and the interpretability of the event classifiers.

Classification Event Detection +1

Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks

no code implementations 12 Oct 2021 Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong

Moreover, when the algorithm for training a pruned neural network is specified as an (accelerated) stochastic gradient descent algorithm, we theoretically show that the number of samples required for achieving zero generalization error is proportional to the number of non-pruned weights in the hidden layer.

A Study of Low-Resource Speech Commands Recognition based on Adversarial Reprogramming

1 code implementation 8 Oct 2021 Hao Yen, Pin-Jui Ku, Chao-Han Huck Yang, Hu Hu, Sabato Marco Siniscalchi, Pin-Yu Chen, Yu Tsao

In this study, we propose a novel adversarial reprogramming (AR) approach for low-resource spoken command recognition (SCR), and build an AR-SCR system.

Transfer Learning

QTN-VQC: An End-to-End Learning framework for Quantum Neural Networks

no code implementations 6 Oct 2021 Jun Qi, Chao-Han Huck Yang, Pin-Yu Chen

The advent of noisy intermediate-scale quantum (NISQ) computers raises a crucial challenge to design quantum neural networks for fully quantum learning tasks.

Real-World Adversarial Examples involving Makeup Application

no code implementations 4 Sep 2021 Chang-Sheng Lin, Chia-Yi Hsu, Pin-Yu Chen, Chia-Mu Yu

CycleGAN is used to generate adversarial makeup, and the victimized classifier is a VGG-16.

Adversarial Attack Face Recognition +1

Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning

1 code implementation NeurIPS 2021 Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm

Unsupervised domain adaptation (UDA) enables cross-domain learning without target domain labels by transferring knowledge from a labeled source domain whose distribution differs from that of the target.

Data Poisoning Domain Generalization +1

MAML is a Noisy Contrastive Learner

no code implementations 29 Jun 2021 Chia-Hsiang Kao, Wei-Chen Chiu, Pin-Yu Chen

Model-agnostic meta-learning (MAML) is one of the most popular and widely-adopted meta-learning algorithms nowadays, which achieves remarkable success in various learning problems.

Meta-Learning

Fold2Seq: A Joint Sequence(1D)-Fold(3D) Embedding-based Generative Model for Protein Design

1 code implementation 24 Jun 2021 Yue Cao, Payel Das, Vijil Chenthamarakshan, Pin-Yu Chen, Igor Melnyk, Yang Shen

Designing novel protein sequences for a desired 3D topological fold is a fundamental yet non-trivial task in protein engineering.

Generalizing Adversarial Training to Composite Semantic Perturbations

no code implementations ICML Workshop AML 2021 Yun-Yun Tsai, Lei Hsiung, Pin-Yu Chen, Tsung-Yi Ho

We then propose generalized adversarial training (GAT) to extend model robustness from $\ell_{p}$ norm to composite semantic perturbations, such as Hue, Saturation, Brightness, Contrast, and Rotation.
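The composite semantic perturbations named above can be chained in sequence. As a minimal illustrative sketch (not the paper's implementation; the function names, parameters, and the choice of just two perturbations are hypothetical):

```python
import numpy as np

def adjust_brightness(img, delta):
    # Shift all pixel intensities, keeping values in [0, 1].
    return np.clip(img + delta, 0.0, 1.0)

def adjust_contrast(img, factor):
    # Scale intensities around the image mean.
    mean = img.mean()
    return np.clip((img - mean) * factor + mean, 0.0, 1.0)

def composite_perturb(img, delta=0.1, factor=1.2):
    # Apply two semantic perturbations in sequence; a composite
    # attack would search over (delta, factor) jointly.
    return adjust_contrast(adjust_brightness(img, delta), factor)

img = np.full((4, 4), 0.5)    # a flat gray test image
out = composite_perturb(img)  # every pixel becomes 0.6
```

A full composite attack would add the remaining perturbation types (hue, saturation, rotation) and optimize the perturbation parameters against the model's loss.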

CRFL: Certifiably Robust Federated Learning against Backdoor Attacks

1 code implementation 15 Jun 2021 Chulin Xie, Minghao Chen, Pin-Yu Chen, Bo Li

Our method exploits clipping and smoothing on model parameters to control the global model smoothness, which yields a sample-wise robustness certification on backdoors with limited magnitude.

Federated Learning
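The clip-then-smooth step the excerpt describes can be sketched as follows; this is a simplified illustration of the idea, not CRFL's actual code, and the function name and constants are hypothetical:

```python
import numpy as np

def clip_and_smooth(params, clip_norm=1.0, sigma=0.01, seed=0):
    # Project the flattened global model parameters onto an L2 ball
    # of radius clip_norm, then add Gaussian noise to smooth the model.
    rng = np.random.default_rng(seed)
    norm = np.linalg.norm(params)
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return params * scale + rng.normal(0.0, sigma, size=params.shape)

w = np.array([3.0, 4.0])       # L2 norm 5.0 exceeds the clip bound
w_smooth = clip_and_smooth(w)  # norm becomes ~1.0 plus small noise
```

Bounding the parameter norm and adding noise is what makes the sample-wise robustness certification against limited-magnitude backdoors possible.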

Predicting Deep Neural Network Generalization with Perturbation Response Curves

no code implementations NeurIPS 2021 Yair Schiff, Brian Quanz, Payel Das, Pin-Yu Chen

However, despite these successes, the recent Predicting Generalization in Deep Learning (PGDL) NeurIPS 2020 competition suggests that there is a need for more robust and efficient measures of network generalization.

Simple Transparent Adversarial Examples

no code implementations 20 May 2021 Jaydeep Borkar, Pin-Yu Chen

We propose two new aspects of adversarial image generation methods and evaluate them on the robustness of Google Cloud Vision API's optical character recognition service and object detection APIs deployed in real-world settings such as sightengine.com, picpurify.com, Google Cloud Vision API, and Microsoft Azure's Computer Vision API.

Image Generation Object Detection +1

Vision Transformers are Robust Learners

1 code implementation 17 May 2021 Sayak Paul, Pin-Yu Chen

Transformers, composed of multiple self-attention layers, hold strong promise as a generic learning primitive applicable to different data modalities, including the recent breakthroughs in computer vision achieving state-of-the-art (SOTA) standard accuracy with better parameter efficiency.

Anomaly Detection Image Classification +1

High-Robustness, Low-Transferability Fingerprinting of Neural Networks

no code implementations 14 May 2021 Siyue Wang, Xiao Wang, Pin-Yu Chen, Pu Zhao, Xue Lin

This paper proposes Characteristic Examples for effectively fingerprinting deep neural networks, featuring high-robustness to the base model against model pruning as well as low-transferability to unassociated models.

Gi and Pal Scores: Deep Neural Network Generalization Statistics

no code implementations 8 Apr 2021 Yair Schiff, Brian Quanz, Payel Das, Pin-Yu Chen

The field of Deep Learning is rich with empirical evidence of human-like performance on a variety of regression, classification, and control tasks.

Towards creativity characterization of generative models via group-based subset scanning

no code implementations 1 Apr 2021 Celia Cintas, Payel Das, Brian Quanz, Skyler Speakman, Victor Akinwande, Pin-Yu Chen

We propose group-based subset scanning to quantify, detect, and characterize creative processes by detecting a subset of anomalous node-activations in the hidden layers of generative models.

On the Adversarial Robustness of Vision Transformers

1 code implementation 29 Mar 2021 Rulin Shao, Zhouxing Shi, JinFeng Yi, Pin-Yu Chen, Cho-Jui Hsieh

This work provides the first and comprehensive study on the robustness of vision transformers (ViTs) against adversarial perturbations.

Adversarial Robustness

Don't Forget to Sign the Gradients!

1 code implementation 5 Mar 2021 Omid Aramoon, Pin-Yu Chen, Gang Qu

Engineering a top-notch deep learning model is an expensive procedure that involves collecting data, hiring human resources with expertise in machine learning, and providing high computational resources.

Image Classification

Hard-label Manifolds: Unexpected Advantages of Query Efficiency for Finding On-manifold Adversarial Examples

no code implementations 4 Mar 2021 Washington Garcia, Pin-Yu Chen, Somesh Jha, Scott Clouse, Kevin R. B. Butler

It was recently shown in the gradient-level setting that regular adversarial examples leave the data manifold, while their on-manifold counterparts are in fact generalization errors.

Dimensionality Reduction Image Classification

Formalizing Generalization and Robustness of Neural Networks to Weight Perturbations

no code implementations 3 Mar 2021 Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen

Studying the sensitivity of weight perturbation in neural networks and its impact on model performance, including generalization and robustness, is an active research topic due to its implications for a wide range of machine learning tasks such as model compression, generalization gap assessment, and adversarial attacks.

Model Compression

Adversarial Examples for Unsupervised Machine Learning Models

1 code implementation 2 Mar 2021 Chia-Yi Hsu, Pin-Yu Chen, Songtao Lu, Sijia Liu, Chia-Mu Yu

In this paper, we propose a framework of generating adversarial examples for unsupervised models and demonstrate novel applications to data augmentation.

Contrastive Learning Data Augmentation +1

Domain Adaptation for Learning Generator from Paired Few-Shot Data

no code implementations 25 Feb 2021 Chun-Chih Teng, Pin-Yu Chen, Wei-Chen Chiu

We propose a Paired Few-shot GAN (PFS-GAN) model for learning generators with sufficient source data and a few target data.

Domain Adaptation Few-Shot Learning

Non-Singular Adversarial Robustness of Neural Networks

no code implementations 23 Feb 2021 Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen

In this paper, we formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.

Adversarial Robustness

On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning

1 code implementation ICLR 2021 Ren Wang, Kaidi Xu, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Chuang Gan, Meng Wang

Despite the generalization power of the meta-model, it remains elusive how adversarial robustness can be maintained by MAML in few-shot learning.

Adversarial Attack Adversarial Robustness +4

Causal Inference Q-Network: Toward Resilient Reinforcement Learning

no code implementations 18 Feb 2021 Chao-Han Huck Yang, I-Te Danny Hung, Yi Ouyang, Pin-Yu Chen

Deep reinforcement learning (DRL) has demonstrated impressive performance in various gaming simulators and real-world applications.

Causal Inference

Meta Federated Learning

no code implementations 10 Feb 2021 Omid Aramoon, Pin-Yu Chen, Gang Qu, Yuan Tian

Due to its distributed methodology alongside its privacy-preserving features, Federated Learning (FL) is vulnerable to training time adversarial attacks.

Federated Learning

Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning

no code implementations 1 Feb 2021 Syed Zawad, Ahsan Ali, Pin-Yu Chen, Ali Anwar, Yi Zhou, Nathalie Baracaldo, Yuan Tian, Feng Yan

Data heterogeneity has been identified as one of the key features of federated learning, but it is often overlooked through the lens of robustness to adversarial attacks.

Federated Learning

Fast Training of Provably Robust Neural Networks by SingleProp

no code implementations 1 Feb 2021 Akhilan Boopathy, Tsui-Wei Weng, Sijia Liu, Pin-Yu Chen, Gaoyuan Zhang, Luca Daniel

Recent works have developed several methods of defending neural networks against adversarial attacks with certified guarantees.

Fake it Till You Make it: Self-Supervised Semantic Shifts for Monolingual Word Embedding Tasks

no code implementations 30 Jan 2021 Maurício Gruppi, Sibel Adali, Pin-Yu Chen

The goal of LSC is to characterize and quantify language variations with respect to word meaning, and to measure how distinct two language sources (that is, people or language models) are.

Adversarial Sample Enhanced Domain Adaptation: A Case Study on Predictive Modeling with Electronic Health Records

no code implementations 13 Jan 2021 Yiqin Yu, Pin-Yu Chen, Yuan Zhou, Jing Mei

With the successful adoption of machine learning on electronic health records (EHRs), numerous computational models have been deployed to address a variety of clinical problems.

Data Augmentation Domain Adaptation

Robust Text CAPTCHAs Using Adversarial Examples

no code implementations 7 Jan 2021 Rulin Shao, Zhouxing Shi, JinFeng Yi, Pin-Yu Chen, Cho-Jui Hsieh

At the second stage, we design and apply a highly transferable adversarial attack for text CAPTCHAs to better obstruct CAPTCHA solvers.

Adversarial Attack Optical Character Recognition

Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks

no code implementations NeurIPS 2021 Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong

Moreover, when the algorithm for training a sparse neural network is specified as an (accelerated) stochastic gradient descent algorithm, we theoretically show that the number of samples required for achieving zero generalization error is proportional to the number of non-pruned model weights in the hidden layer.

ProGAE: A Geometric Autoencoder-based Generative Model for Disentangling Protein Dynamics

no code implementations 1 Jan 2021 Norman Joseph Tatro, Payel Das, Pin-Yu Chen, Vijil Chenthamarakshan, Rongjie Lai

Empowered by the disentangled latent space learning, the extrinsic latent embedding is successfully used for classification or property prediction of different drugs bound to a specific protein.

Distributed Adversarial Training to Robustify Deep Neural Networks at Scale

no code implementations 1 Jan 2021 Gaoyuan Zhang, Songtao Lu, Sijia Liu, Xiangyi Chen, Pin-Yu Chen, Lee Martie, Lior Horesh, Mingyi Hong

Current deep neural networks are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.

Quantization

CAFE: Catastrophic Data Leakage in Federated Learning

no code implementations 1 Jan 2021 Xiao Jin, Ruijie Du, Pin-Yu Chen, Tianyi Chen

In this paper, we revisit this defense premise and propose an advanced data leakage attack to efficiently recover batch data from the shared aggregated gradients.

Federated Learning

Self-Progressing Robust Training

1 code implementation 22 Dec 2020 Minhao Cheng, Pin-Yu Chen, Sijia Liu, Shiyu Chang, Cho-Jui Hsieh, Payel Das

Enhancing model robustness under new and even adversarial environments is a crucial milestone toward building trustworthy machine learning systems.

Adversarial Robustness

Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework

no code implementations 21 Dec 2020 Pranay Sharma, Kaidi Xu, Sijia Liu, Pin-Yu Chen, Xue Lin, Pramod K. Varshney

In this work, we focus on the study of stochastic zeroth-order (ZO) optimization which does not require first-order gradient information and uses only function evaluations.

Reprogramming Language Models for Molecular Representation Learning

no code implementations 7 Dec 2020 Ria Vinod, Pin-Yu Chen, Payel Das

Recent advancements in transfer learning have made it a promising approach for domain adaptation via transfer of learned representations.

Dictionary Learning Domain Adaptation +2

How Robust are Randomized Smoothing based Defenses to Data Poisoning?

1 code implementation CVPR 2021 Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm

Moreover, our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods such as Gaussian data augmentation (Cohen et al., 2019), MACER (Zhai et al., 2020), and SmoothAdv (Salman et al., 2019) that achieve high certified adversarial robustness.

Adversarial Robustness Bilevel Optimization +2

SChME at SemEval-2020 Task 1: A Model Ensemble for Detecting Lexical Semantic Change

1 code implementation SEMEVAL 2020 Maurício Gruppi, Sibel Adali, Pin-Yu Chen

Our results show evidence that the number of landmarks used for alignment has a direct impact on the predictive performance of the model.

Word Embeddings

Optimizing Molecules using Efficient Queries from Property Evaluations

1 code implementation 3 Nov 2020 Samuel Hoffman, Vijil Chenthamarakshan, Kahini Wadhawan, Pin-Yu Chen, Payel Das

Machine learning based methods have shown potential for optimizing existing molecules with more desirable properties, a critical step towards accelerating new chemical discovery.

Decentralizing Feature Extraction with Quantum Convolutional Neural Network for Automatic Speech Recognition

2 code implementations 26 Oct 2020 Chao-Han Huck Yang, Jun Qi, Samuel Yen-Chi Chen, Pin-Yu Chen, Sabato Marco Siniscalchi, Xiaoli Ma, Chin-Hui Lee

Testing on the Google Speech Commands Dataset, the proposed QCNN encoder attains a competitive accuracy of 95.12% in a decentralized model, which is better than the previous architectures using centralized RNN models with convolutional features.

 Ranked #1 on Keyword Spotting on Google Speech Commands (10-keyword Speech Commands dataset metric)

Automatic Speech Recognition Federated Learning +2

Higher-Order Certification for Randomized Smoothing

no code implementations NeurIPS 2020 Jeet Mohapatra, Ching-Yun Ko, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel

We also provide a framework that generalizes the calculation for certification using higher-order information.

Optimizing Mode Connectivity via Neuron Alignment

1 code implementation NeurIPS 2020 N. Joseph Tatro, Pin-Yu Chen, Payel Das, Igor Melnyk, Prasanna Sattigeri, Rongjie Lai

Yet, current curve finding algorithms do not consider the influence of symmetry in the loss surface created by model weight permutations.

Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases

1 code implementation ECCV 2020 Ren Wang, Gaoyuan Zhang, Sijia Liu, Pin-Yu Chen, JinJun Xiong, Meng Wang

When the training data are maliciously tampered, the predictions of the acquired deep neural network (DNN) can be manipulated by an adversary known as the Trojan attack (or poisoning backdoor attack).

Proper Network Interpretability Helps Adversarial Robustness in Classification

1 code implementation ICML 2020 Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, Luca Daniel

Recent works have empirically shown that there exist adversarial examples that can be hidden from neural network interpretability (namely, making network interpretation maps visually similar), or interpretability is itself susceptible to adversarial attacks.

Adversarial Robustness Classification +2

Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case

no code implementations ICML 2020 Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong

In this paper, we provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.

General Classification

A Dynamical Systems Approach for Convergence of the Bayesian EM Algorithm

no code implementations 23 Jun 2020 Orlando Romero, Subhro Das, Pin-Yu Chen, Sérgio Pequito

Among the recent advances in systems and control (S&C)-based analysis of optimization algorithms, not enough work has been specifically dedicated to machine learning (ML) algorithms and their applications.

A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning

no code implementations 11 Jun 2020 Sijia Liu, Pin-Yu Chen, Bhavya Kailkhura, Gaoyuan Zhang, Alfred Hero, Pramod K. Varshney

Zeroth-order (ZO) optimization is a subset of gradient-free optimization that emerges in many signal processing and machine learning applications.
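As a concrete illustration of estimating gradients from function evaluations alone, here is a minimal two-point random-direction estimator, a standard ZO construction rather than code from this paper; the function name and sample count are illustrative choices:

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, n_samples=500, seed=0):
    # Two-point finite-difference estimate of grad f(x), averaged over
    # random Gaussian directions; only evaluations of f are used.
    rng = np.random.default_rng(seed)
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.size)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / n_samples

# Sanity check on f(x) = ||x||^2, whose true gradient is 2x.
f = lambda x: float(np.dot(x, x))
x = np.array([1.0, -2.0])
g = zo_gradient(f, x)  # approximately [2.0, -4.0]
```

Estimators of this form let black-box attacks and other gradient-free applications reuse first-order algorithms, at the cost of extra function queries per step.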

DBA: Distributed Backdoor Attacks against Federated Learning

2 code implementations ICLR 2020 Chulin Xie, Keli Huang, Pin-Yu Chen, Bo Li

Compared to standard centralized backdoors, we show that DBA is substantially more persistent and stealthy against FL on diverse datasets such as finance and image data.

Feature Importance Federated Learning

Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness

2 code implementations ICLR 2020 Pu Zhao, Pin-Yu Chen, Payel Das, Karthikeyan Natesan Ramamurthy, Xue Lin

In this work, we propose to employ mode connectivity in loss landscapes to study the adversarial robustness of deep neural networks, and provide novel methods for improving this robustness.

Adversarial Robustness

Characterizing Speech Adversarial Examples Using Self-Attention U-Net Enhancement

no code implementations 31 Mar 2020 Chao-Han Huck Yang, Jun Qi, Pin-Yu Chen, Xiaoli Ma, Chin-Hui Lee

Recent studies have highlighted adversarial examples as ubiquitous threats to the deep neural network (DNN) based speech recognition systems.

Automatic Speech Recognition Data Augmentation +3

Hidden Cost of Randomized Smoothing

no code implementations 2 Mar 2020 Jeet Mohapatra, Ching-Yun Ko, Tsui-Wei Weng, Sijia Liu, Pin-Yu Chen, Luca Daniel

The fragility of modern machine learning models has drawn a considerable amount of attention from both academia and the public.

Defending against Backdoor Attack on Deep Neural Networks

no code implementations 26 Feb 2020 Kaidi Xu, Sijia Liu, Pin-Yu Chen, Pu Zhao, Xue Lin

Although deep neural networks (DNNs) have achieved a great success in various computer vision tasks, it is recently found that they are vulnerable to adversarial attacks.

Data Poisoning

Towards an Efficient and General Framework of Robust Training for Graph Neural Networks

no code implementations 25 Feb 2020 Kaidi Xu, Sijia Liu, Pin-Yu Chen, Mengshu Sun, Caiwen Ding, Bhavya Kailkhura, Xue Lin

To overcome these limitations, we propose a general framework which leverages greedy search algorithms and zeroth-order methods to obtain robust GNNs in a generic and efficient manner.

Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning

no code implementations 20 Feb 2020 Chao-Han Huck Yang, Jun Qi, Pin-Yu Chen, Yi Ouyang, I-Te Danny Hung, Chin-Hui Lee, Xiaoli Ma

Recent deep neural network-based techniques, especially those equipped with the ability of self-adaptation at the system level such as deep reinforcement learning (DRL), are shown to possess many advantages for optimizing robot learning systems (e.g., autonomous navigation and continuous robot arm control).

Autonomous Navigation

AdvMS: A Multi-source Multi-cost Defense Against Adversarial Attacks

no code implementations 19 Feb 2020 Xiao Wang, Siyue Wang, Pin-Yu Chen, Xue Lin, Peter Chin

Designing effective defense against adversarial attacks is a crucial topic as deep neural networks have been proliferated rapidly in many security-critical domains such as malware detection and self-driving cars.

Malware Detection Self-Driving Cars

Block Switching: A Stochastic Approach for Deep Learning Security

no code implementations 18 Feb 2020 Xiao Wang, Siyue Wang, Pin-Yu Chen, Xue Lin, Peter Chin

Recent study of adversarial attacks has revealed the vulnerability of modern deep learning models.

Towards Query-Efficient Black-Box Adversary with Zeroth-Order Natural Gradient Descent

1 code implementation 18 Feb 2020 Pu Zhao, Pin-Yu Chen, Siyue Wang, Xue Lin

Despite the great achievements of the modern deep neural networks (DNNs), the vulnerability/robustness of state-of-the-art DNNs raises security concerns in many application domains requiring high reliability.

Adversarial Attack Image Classification

CAT: Customized Adversarial Training for Improved Robustness

no code implementations 17 Feb 2020 Minhao Cheng, Qi Lei, Pin-Yu Chen, Inderjit Dhillon, Cho-Jui Hsieh

Adversarial training has become one of the most effective methods for improving robustness of neural networks.

Reinforcement-Learning based Portfolio Management with Augmented Asset Movement Prediction States

1 code implementation 9 Feb 2020 Yunan Ye, Hengzhi Pei, Boxin Wang, Pin-Yu Chen, Yada Zhu, Jun Xiao, Bo Li

Our framework aims to address two unique challenges in financial PM: (1) data heterogeneity -- the collected information for each asset is usually diverse, noisy and imbalanced (e.g., news articles); and (2) environment uncertainty -- the financial market is versatile and non-stationary.

Towards Verifying Robustness of Neural Networks Against Semantic Perturbations

1 code implementation 19 Dec 2019 Jeet Mohapatra, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel

Verifying robustness of neural networks given a specified threat model is a fundamental yet challenging task.

Image Classification

Adversarial T-shirt! Evading Person Detectors in A Physical World

1 code implementation ECCV 2020 Kaidi Xu, Gaoyuan Zhang, Sijia Liu, Quanfu Fan, Mengshu Sun, Hongge Chen, Pin-Yu Chen, Yanzhi Wang, Xue Lin

To the best of our knowledge, this is the first work that models the effect of deformation for designing physical adversarial examples with respect to non-rigid objects such as T-shirts.

Is There a Trade-Off Between Fairness and Accuracy? A Perspective Using Mismatched Hypothesis Testing

no code implementations ICML 2020 Sanghamitra Dutta, Dennis Wei, Hazar Yueksel, Pin-Yu Chen, Sijia Liu, Kush R. Varshney

Moreover, the same classifier exhibits no trade-off with respect to ideal distributions while exhibiting a trade-off when accuracy is measured with respect to the given (possibly biased) dataset.

Fairness Two-sample testing

Efficient Training of Robust and Verifiable Neural Networks

no code implementations 25 Sep 2019 Akhilan Boopathy, Lily Weng, Sijia Liu, Pin-Yu Chen, Luca Daniel

We propose that many common certified defenses can be viewed under a unified framework of regularization.

Towards A Unified Min-Max Framework for Adversarial Exploration and Robustness

no code implementations 25 Sep 2019 Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, Makan Fardad, Bo Li

The worst-case training principle that minimizes the maximal adversarial loss, also known as adversarial training (AT), has been shown to be a state-of-the-art approach for enhancing adversarial robustness against norm-ball bounded input perturbations.

Adversarial Attack Adversarial Robustness

Visual Interpretability Alone Helps Adversarial Robustness

no code implementations 25 Sep 2019 Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Pin-Yu Chen, Shiyu Chang, Luca Daniel

Recent works have empirically shown that there exist adversarial examples that can be hidden from neural network interpretability, and interpretability is itself susceptible to adversarial attacks.

Adversarial Robustness

SPROUT: Self-Progressing Robust Training

no code implementations 25 Sep 2019 Minhao Cheng, Pin-Yu Chen, Sijia Liu, Shiyu Chang, Cho-Jui Hsieh, Payel Das

Enhancing model robustness under new and even adversarial environments is a crucial milestone toward building trustworthy and reliable machine learning systems.

Adversarial Robustness

Optimizing Loss Landscape Connectivity via Neuron Alignment

no code implementations 25 Sep 2019 N. Joseph Tatro, Pin-Yu Chen, Payel Das, Igor Melnyk, Prasanna Sattigeri, Rongjie Lai

Empirically, this initialization is critical for efficiently learning a simple, planar, low-loss curve between networks that successfully generalizes.

Sign-OPT: A Query-Efficient Hard-label Adversarial Attack

1 code implementation ICLR 2020 Minhao Cheng, Simranjit Singh, Patrick Chen, Pin-Yu Chen, Sijia Liu, Cho-Jui Hsieh

We study the most practical problem setup for evaluating adversarial robustness of a machine learning system with limited access: the hard-label black-box attack setting for generating adversarial examples, where limited model queries are allowed and only the decision is provided to a queried data input.

Adversarial Attack Adversarial Robustness +1

Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses

1 code implementation 20 Aug 2019 Xiao Wang, Siyue Wang, Pin-Yu Chen, Yanzhi Wang, Brian Kulis, Xue Lin, Peter Chin

However, one critical drawback of current defenses is that the robustness enhancement comes at the cost of noticeable performance degradation on legitimate data, e.g., a large drop in test accuracy.

Adversarial Robustness

Reinforcement Learning based Interconnection Routing for Adaptive Traffic Optimization

2 code implementations 13 Aug 2019 Sheng-Chun Kao, Chao-Han Huck Yang, Pin-Yu Chen, Xiaoli Ma, Tushar Krishna

In this work, we demonstrate the promise of applying reinforcement learning (RL) to optimize NoC runtime performance.

On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method

1 code implementation ICCV 2019 Pu Zhao, Sijia Liu, Pin-Yu Chen, Nghia Hoang, Kaidi Xu, Bhavya Kailkhura, Xue Lin

Robust machine learning is currently one of the most prominent topics which could potentially help shape a future of advanced AI platforms that not only perform well in average cases but also in worst cases or adverse situations.

Adversarial Attack Image Classification

Variational Quantum Circuits for Deep Reinforcement Learning

1 code implementation 30 Jun 2019 Samuel Yen-Chi Chen, Chao-Han Huck Yang, Jun Qi, Pin-Yu Chen, Xiaoli Ma, Hsi-Sheng Goan

To the best of our knowledge, this work is the first proof-of-principle demonstration of variational quantum circuits to approximate the deep $Q$-value function for decision-making and policy-selection reinforcement learning with experience replay and target network.

Decision Making Quantum Machine Learning

Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective

1 code implementation 10 Jun 2019 Kaidi Xu, Hongge Chen, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Mingyi Hong, Xue Lin

Graph neural networks (GNNs) which apply the deep neural networks to graph data have achieved significant performance for the task of semi-supervised node classification.

Adversarial Robustness Classification +2

Adversarial Attack Generation Empowered by Min-Max Optimization

1 code implementation NeurIPS 2021 Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, Makan Fardad, Bo Li

In this paper, we show how a general framework of min-max optimization over multiple domains can be leveraged to advance the design of different types of adversarial attacks.

Adversarial Attack Adversarial Robustness

Model Agnostic Contrastive Explanations for Structured Data

no code implementations 31 May 2019 Amit Dhurandhar, Tejaswini Pedapati, Avinash Balakrishnan, Pin-Yu Chen, Karthikeyan Shanmugam, Ruchir Puri

Recently, a method [7] was proposed to generate contrastive explanations for differentiable models such as deep neural networks, where one has complete access to the model.

Leveraging Latent Features for Local Explanations

3 code implementations 29 May 2019 Ronny Luss, Pin-Yu Chen, Amit Dhurandhar, Prasanna Sattigeri, Yunfeng Zhang, Karthikeyan Shanmugam, Chun-Chen Tu

As the application of deep neural networks proliferates in numerous areas such as medical imaging, video surveillance, and self driving cars, the need for explaining the decisions of these models has become a hot research topic, both at the global and local level.

General Classification Self-Driving Cars

Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach

no code implementations ICLR 2019 Minhao Cheng, Thong Le, Pin-Yu Chen, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh

We study the problem of attacking machine learning models in the hard-label black-box setting, where no model information is revealed except that the attacker can make queries to probe the corresponding hard-label decisions.
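The core of this optimization-based formulation can be illustrated with a small sketch: the objective g(θ), i.e., the distance from the input to the decision boundary along direction θ, is evaluated using hard-label queries alone via binary search. The `predict` function below is a hypothetical stand-in for the black-box model, and the toy classifier is an illustrative assumption, not the paper's setup.

```python
import numpy as np

def boundary_distance(predict, x, y_true, theta, max_radius=10.0, tol=1e-3):
    """g(theta): distance from x to the decision boundary along unit
    direction theta, evaluated with hard-label queries only."""
    theta = theta / np.linalg.norm(theta)
    lo, hi = 0.0, max_radius
    if predict(x + hi * theta) == y_true:   # boundary lies beyond search radius
        return np.inf
    while hi - lo > tol:                    # binary search for the crossing
        mid = (lo + hi) / 2
        if predict(x + mid * theta) == y_true:
            lo = mid
        else:
            hi = mid
    return hi

# Toy hard-label model: the label is whether the first coordinate is positive.
predict = lambda z: int(z[0] > 0)
x = np.array([2.0, 0.0])                                      # predicted label: 1
d = boundary_distance(predict, x, 1, np.array([-1.0, 0.0]))   # ~2.0
```

In the paper's formulation, minimizing g(θ) over directions θ with zeroth-order methods yields a minimum-distortion adversarial example; the sketch shows only the inner boundary-distance evaluation.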

signSGD via Zeroth-Order Oracle

no code implementations ICLR 2019 Sijia Liu, Pin-Yu Chen, Xiangyi Chen, Mingyi Hong

Our study shows that ZO signSGD requires $\sqrt{d}$ times more iterations than signSGD, leading to a convergence rate of $O(\sqrt{d}/\sqrt{T})$ under mild conditions, where $d$ is the number of optimization variables and $T$ is the number of iterations.

Image Classification Stochastic Optimization
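As a rough illustration of ZO signSGD, the sketch below estimates the gradient with random-direction finite differences and then updates with only the sign of the estimate. The step size, sample counts, and toy objective are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def zo_grad_estimate(f, x, mu=1e-4, q=20, rng=None):
    """Average of q random-direction forward-difference gradient estimates,
    using only function-value (zeroth-order) queries."""
    rng = np.random.default_rng(0) if rng is None else rng
    g = np.zeros_like(x)
    for _ in range(q):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x)) / mu * u
    return g / q

def zo_signsgd(f, x0, lr=0.05, steps=200, rng=None):
    """signSGD driven by a zeroth-order oracle: move by the sign of the
    estimated gradient, discarding its magnitude."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = x0.astype(float).copy()
    for _ in range(steps):
        x -= lr * np.sign(zo_grad_estimate(f, x, rng=rng))
    return x

f = lambda z: float(np.sum(z ** 2))        # minimum at the origin
x = zo_signsgd(f, np.array([3.0, -2.0]))   # ends near the origin
```

Because the update is ±lr per coordinate, the iterate oscillates in a small neighborhood of the minimizer rather than converging exactly, which matches the dimension-dependent rate quoted above.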

When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks

1 code implementation 9 Feb 2019 Chao-Han Huck Yang, Yi-Chieh Liu, Pin-Yu Chen, Xiaoli Ma, Yi-Chang James Tsai

To study the intervention effects on pixel-level features for causal reasoning, we introduce pixel-wise masking and adversarial perturbation.

Causal Inference Visual Reasoning

Toward A Neuro-inspired Creative Decoder

no code implementations 6 Feb 2019 Payel Das, Brian Quanz, Pin-Yu Chen, Jae-wook Ahn, Dhruv Shah

Creativity, a process that generates novel and meaningful ideas, involves increased association between task-positive (control) and task-negative (default) networks in the human brain.

PROVEN: Certifying Robustness of Neural Networks with a Probabilistic Approach

no code implementations 18 Dec 2018 Tsui-Wei Weng, Pin-Yu Chen, Lam M. Nguyen, Mark S. Squillante, Ivan Oseledets, Luca Daniel

With deep neural networks providing state-of-the-art machine learning models for numerous machine learning tasks, quantifying the robustness of these models has become an important area of research.

CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks

2 code implementations 29 Nov 2018 Akhilan Boopathy, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel

This motivates us to propose a general and efficient framework, CNN-Cert, that is capable of certifying robustness on general convolutional neural networks.

Controllability, Multiplexing, and Transfer Learning in Networks using Evolutionary Learning

1 code implementation 14 Nov 2018 Rise Ooi, Chao-Han Huck Yang, Pin-Yu Chen, Víctor Eguíluz, Narsis Kiani, Hector Zenil, David Gomez-Cabrero, Jesper Tegnér

Next, (2) the learned networks are technically controllable as only a small number of driver nodes are required to move the system to a new state.

Transfer Learning

Efficient Neural Network Robustness Certification with General Activation Functions

13 code implementations NeurIPS 2018 Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, Luca Daniel

Finding minimum distortion of adversarial examples and thus certifying robustness in neural network classifiers for given data points is known to be a challenging problem.

Word Mover's Embedding: From Word2Vec to Document Embedding

1 code implementation EMNLP 2018 Lingfei Wu, Ian E. H. Yen, Kun Xu, Fangli Xu, Avinash Balakrishnan, Pin-Yu Chen, Pradeep Ravikumar, Michael J. Witbrock

While the celebrated Word2Vec technique yields semantically rich representations for individual words, there has been relatively less success in extending it to generate unsupervised sentence or document embeddings.

Classification Document Embedding +4

On Extensions of CLEVER: A Neural Network Robustness Evaluation Algorithm

1 code implementation 19 Oct 2018 Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Aurelie Lozano, Cho-Jui Hsieh, Luca Daniel

We apply extreme value theory on the new formal robustness guarantee and the estimated robustness is called second-order CLEVER score.

Characterizing Audio Adversarial Examples Using Temporal Dependency

no code implementations ICLR 2019 Zhuolin Yang, Bo Li, Pin-Yu Chen, Dawn Song

In particular, our results reveal the importance of using the temporal dependency in audio data to gain discriminative power against adversarial examples.

Adversarial Defense automatic-speech-recognition +1

Is Ordered Weighted $\ell_1$ Regularized Regression Robust to Adversarial Perturbation? A Case Study on OSCAR

no code implementations 24 Sep 2018 Pin-Yu Chen, Bhanukiran Vinzamuri, Sijia Liu

Many state-of-the-art machine learning models such as deep neural networks have recently been shown to be vulnerable to adversarial perturbations, especially in classification tasks.

On The Utility of Conditional Generation Based Mutual Information for Characterizing Adversarial Subspaces

no code implementations 24 Sep 2018 Chia-Yi Hsu, Pei-Hsuan Lu, Pin-Yu Chen, Chia-Mu Yu

Recent studies have found that deep learning systems are vulnerable to adversarial examples; e.g., visually unrecognizable adversarial images can easily be crafted to result in misclassification.

Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models

2 code implementations ECCV 2018 Dong Su, Huan Zhang, Hongge Chen, Jin-Feng Yi, Pin-Yu Chen, Yupeng Gao

The prediction accuracy has been the long-lasting and sole standard for comparing the performance of different image classification models, including the ImageNet competition.

General Classification Image Classification

Structured Adversarial Attack: Towards General Implementation and Better Interpretability

1 code implementation ICLR 2019 Kaidi Xu, Sijia Liu, Pu Zhao, Pin-Yu Chen, Huan Zhang, Quanfu Fan, Deniz Erdogmus, Yanzhi Wang, Xue Lin

When generating adversarial examples to attack deep neural networks (DNNs), the Lp norm of the added perturbation is usually used to measure the similarity between the original image and the adversarial example.

Adversarial Attack

Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach

1 code implementation 12 Jul 2018 Minhao Cheng, Thong Le, Pin-Yu Chen, Jin-Feng Yi, Huan Zhang, Cho-Jui Hsieh

We study the problem of attacking a machine learning model in the hard-label black-box setting, where no model information is revealed except that the attacker can make queries to probe the corresponding hard-label decisions.

Fast Incremental von Neumann Graph Entropy Computation: Theory, Algorithm, and Applications

1 code implementation 30 May 2018 Pin-Yu Chen, Lingfei Wu, Sijia Liu, Indika Rajapakse

The von Neumann graph entropy (VNGE) facilitates measurement of information divergence and distance between graphs in a graph sequence.

Anomaly Detection Graph Similarity
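The quantity being approximated can be stated exactly in a few lines of NumPy. Note the paper's contribution is a fast incremental approximation; the sketch below computes the definition directly via a full eigendecomposition.

```python
import numpy as np

def von_neumann_graph_entropy(adj):
    """H(G) = -sum_i lam_i * ln(lam_i), where lam_i are the eigenvalues of
    the density matrix rho = L / trace(L) and L is the combinatorial
    graph Laplacian."""
    lap = np.diag(adj.sum(axis=1)) - adj
    lam = np.linalg.eigvalsh(lap / np.trace(lap))   # eigenvalues sum to 1
    lam = lam[lam > 1e-12]                          # convention: 0 * ln 0 = 0
    return float(-np.sum(lam * np.log(lam)))

# Triangle graph: Laplacian eigenvalues {0, 3, 3}, so rho has {0, 1/2, 1/2}.
A = np.ones((3, 3)) - np.eye(3)
H = von_neumann_graph_entropy(A)   # = ln 2
```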

AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks

1 code implementation 30 May 2018 Chun-Chen Tu, Pai-Shun Ting, Pin-Yu Chen, Sijia Liu, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh, Shin-Ming Cheng

Recent studies have shown that adversarial examples in state-of-the-art image classifiers trained by deep neural networks (DNN) can be easily generated when the target model is transparent to an attacker, known as the white-box setting.

Adversarial Robustness

Zeroth-Order Stochastic Variance Reduction for Nonconvex Optimization

1 code implementation NeurIPS 2018 Sijia Liu, Bhavya Kailkhura, Pin-Yu Chen, Pai-Shun Ting, Shiyu Chang, Lisa Amini

As application demands for zeroth-order (gradient-free) optimization accelerate, the need for variance reduced and faster converging approaches is also intensifying.

Material Classification Stochastic Optimization

Neural-Brane: Neural Bayesian Personalized Ranking for Attributed Network Embedding

1 code implementation 23 Apr 2018 Vachik S. Dave, Baichuan Zhang, Pin-Yu Chen, Mohammad Al Hasan

For a given network, Neural-Brane extracts a latent feature representation of its vertices using a designed neural network model that unifies network topological information and nodal attributes; in addition, it utilizes a Bayesian personalized ranking objective, which exploits the proximity ordering between a similar node-pair and a dissimilar node-pair.

Community Detection General Classification +3

On the Limitation of MagNet Defense against $L_1$-based Adversarial Examples

1 code implementation 14 Apr 2018 Pei-Hsuan Lu, Pin-Yu Chen, Kang-Cheng Chen, Chia-Mu Yu

In recent years, defending against adversarial perturbations to natural examples in order to build robust machine learning models trained by deep neural networks (DNNs) has become an emerging research field at the intersection of deep learning and security.

On the Supermodularity of Active Graph-based Semi-supervised Learning with Stieltjes Matrix Regularization

no code implementations 9 Apr 2018 Pin-Yu Chen, Dennis Wei

Active graph-based semi-supervised learning (AG-SSL) aims to select a small set of labeled examples and utilize their graph-based relation to other unlabeled examples to aid in machine learning tasks.

Community Detection General Classification

Bypassing Feature Squeezing by Increasing Adversary Strength

no code implementations 27 Mar 2018 Yash Sharma, Pin-Yu Chen

Feature Squeezing is a recently proposed defense method which reduces the search space available to an adversary by coalescing samples that correspond to many different feature vectors in the original space into a single sample.

On the Limitation of Local Intrinsic Dimensionality for Characterizing the Subspaces of Adversarial Examples

1 code implementation 26 Mar 2018 Pei-Hsuan Lu, Pin-Yu Chen, Chia-Mu Yu

Understanding and characterizing the subspaces of adversarial examples aid in studying the robustness of deep neural networks (DNNs) to adversarial perturbations.

Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples

1 code implementation 3 Mar 2018 Minhao Cheng, Jin-Feng Yi, Pin-Yu Chen, Huan Zhang, Cho-Jui Hsieh

In this paper, we study the much more challenging problem of crafting adversarial examples for sequence-to-sequence (seq2seq) models, whose inputs are discrete text strings and outputs have an almost infinite number of possibilities.

Image Classification Machine Translation +2

Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach

1 code implementation ICLR 2018 Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jin-Feng Yi, Dong Su, Yupeng Gao, Cho-Jui Hsieh, Luca Daniel

Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness.
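A simplified sketch of the idea behind CLEVER: a robustness lower bound is the classification margin divided by an estimate of the local (cross-)Lipschitz constant. CLEVER proper fits a reverse Weibull distribution (extreme value theory) to batches of sampled gradient norms; the sketch below just uses the sample maximum, and the toy linear model is an illustrative assumption, not the paper's experimental setup.

```python
import numpy as np

def clever_sketch(g, grad_g, x0, radius=1.0, n_samples=100, rng=None):
    """Robustness lower bound = margin / local Lipschitz estimate.
    Here the Lipschitz constant is estimated by the maximum gradient norm
    over samples drawn uniformly from a ball around x0."""
    rng = np.random.default_rng(0) if rng is None else rng
    lip = 0.0
    for _ in range(n_samples):
        u = rng.standard_normal(x0.shape)
        u *= radius * rng.random() ** (1 / x0.size) / np.linalg.norm(u)
        lip = max(lip, np.linalg.norm(grad_g(x0 + u)))
    return g(x0) / lip

# Linear margin g(x) = w.x + b has constant gradient w, so the score equals
# the exact L2 distance from x0 to the decision boundary g(x) = 0.
w, b = np.array([3.0, 4.0]), -5.0
g = lambda x: float(w @ x + b)
grad_g = lambda x: w
score = clever_sketch(g, grad_g, np.array([3.0, 4.0]))   # 20 / 5 = 4.0
```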

Incremental Eigenpair Computation for Graph Laplacian Matrices: Theory and Applications

no code implementations 13 Dec 2017 Pin-Yu Chen, Baichuan Zhang, Mohammad Al Hasan

The smallest eigenvalues and the associated eigenvectors (i.e., eigenpairs) of a graph Laplacian matrix have been widely used in spectral clustering and community detection.

Community Detection

Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning

2 code implementations ACL 2018 Hongge Chen, Huan Zhang, Pin-Yu Chen, Jin-Feng Yi, Cho-Jui Hsieh

Our extensive experiments show that our algorithm can successfully craft visually-similar adversarial examples with randomly targeted captions or keywords, and the adversarial examples can be made highly transferable to other image captioning systems.

Image Captioning

Attacking the Madry Defense Model with $L_1$-based Adversarial Examples

no code implementations 30 Oct 2017 Yash Sharma, Pin-Yu Chen

The Madry Lab recently hosted a competition designed to test the robustness of their adversarially trained MNIST model.

Zeroth-Order Online Alternating Direction Method of Multipliers: Convergence Analysis and Applications

no code implementations 21 Oct 2017 Sijia Liu, Jie Chen, Pin-Yu Chen, Alfred O. Hero

In this paper, we design and analyze a new zeroth-order online algorithm, namely the zeroth-order online alternating direction method of multipliers (ZOO-ADMM), which enjoys the dual advantages of gradient-free operation and of employing ADMM to accommodate complex structured regularizers.

Revisiting Spectral Graph Clustering with Generative Community Models

no code implementations 14 Sep 2017 Pin-Yu Chen, Lingfei Wu

The presented method, SGC-GEN, not only considers the detection error caused by the corresponding model mismatch to a given graph, but also yields a theoretical guarantee on community detectability by analyzing Spectral Graph Clustering (SGC) under GENerative community models (GCMs).

Community Detection Graph Clustering +1

EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples

1 code implementation 13 Sep 2017 Pin-Yu Chen, Yash Sharma, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh

Recent studies have highlighted the vulnerability of deep neural networks (DNNs) to adversarial examples - a visually indistinguishable adversarial image can easily be crafted to cause a well-trained model to misclassify.
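EAD augments the attack loss with an elastic-net (L1 plus L2) penalty on the perturbation and solves it with an iterative shrinkage-thresholding (ISTA) scheme. The sketch below shows only the L1 shrinkage step that pulls a candidate adversarial image back toward the original; the surrounding FISTA loop, attack loss, and box constraints on pixel values are omitted.

```python
import numpy as np

def shrink(z, x0, beta):
    """Elementwise L1 shrinkage toward the original image x0: perturbation
    entries with magnitude below beta are zeroed, promoting sparsity."""
    diff = z - x0
    return np.where(diff > beta, z - beta,
           np.where(diff < -beta, z + beta, x0))

x0 = np.zeros(4)                          # original (flattened) image
z = np.array([0.5, 0.05, -0.3, -0.02])    # candidate adversarial image
d = shrink(z, x0, beta=0.1)               # -> [0.4, 0.0, -0.2, 0.0]
```

The shrinkage operator is what makes the resulting perturbations L1-sparse, which is the distinguishing property of EAD attacks compared with pure L2 attacks.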

Principled Multilayer Network Embedding

1 code implementation 11 Sep 2017 Weiyi Liu, Pin-Yu Chen, Sailung Yeung, Toyotaro Suzumura, Lingli Chen

Multilayer network analysis has become a vital tool for understanding different relationships and their interactions in a complex system, where each layer in a multilayer network depicts the topological structure of a group of nodes corresponding to a particular relationship.

Social and Information Networks Physics and Society

Learning Graph Topological Features via GAN

no code implementations 11 Sep 2017 Weiyi Liu, Hal Cooper, Min Hwan Oh, Sailung Yeung, Pin-Yu Chen, Toyotaro Suzumura, Lingli Chen

Inspired by the generation power of generative adversarial networks (GANs) in image domains, we introduce a novel hierarchical architecture for learning characteristic topological features from a single arbitrary input graph via GANs.

ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models

5 code implementations 14 Aug 2017 Pin-Yu Chen, Huan Zhang, Yash Sharma, Jin-Feng Yi, Cho-Jui Hsieh

However, different from leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples.

Adversarial Attack Adversarial Defense +3
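The core primitive of ZOO can be sketched in a few lines: a coordinate-wise symmetric-difference estimate of the gradient using only function-value (confidence-score) queries. The toy objective stands in for the attack loss; the paper additionally batches coordinates and uses dimension-reduction tricks for query efficiency.

```python
import numpy as np

def zoo_gradient(f, x, h=1e-4, coords=None):
    """Symmetric-difference estimate of df/dx_i from scalar queries only:
    two model evaluations per estimated coordinate."""
    coords = range(x.size) if coords is None else coords
    g = np.zeros_like(x)
    for i in coords:
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

f = lambda z: float(z[0] ** 2 + 3 * z[1])   # true gradient at (1, 2): (2, 3)
g = zoo_gradient(f, np.array([1.0, 2.0]))   # -> approximately [2.0, 3.0]
```

In the attack, these estimated gradients replace the true gradients inside a standard white-box optimization loop, which is why no substitute model is needed.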

Multilayer Spectral Graph Clustering via Convex Layer Aggregation: Theory and Algorithms

no code implementations 8 Aug 2017 Pin-Yu Chen, Alfred O. Hero

Multilayer graphs are commonly used for representing different relations between entities and handling heterogeneous data processing tasks.

Graph Clustering Spectral Graph Clustering

Bias-Variance Tradeoff of Graph Laplacian Regularizer

no code implementations 2 Jun 2017 Pin-Yu Chen, Sijia Liu

This paper presents a bias-variance tradeoff of graph Laplacian regularizer, which is widely used in graph signal processing and semi-supervised learning tasks.
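Concretely, the regularizer is the quadratic form x^T L x, which penalizes signal differences across graph edges; a standard use is closed-form graph signal denoising. The triangle-graph example below is an illustration of the regularizer in use, not an example from the paper.

```python
import numpy as np

def laplacian_denoise(y, lap, gamma):
    """Solve min_x ||x - y||^2 + gamma * x^T L x, whose closed form is
    x = (I + gamma * L)^{-1} y; larger gamma gives a smoother output."""
    return np.linalg.solve(np.eye(len(y)) + gamma * lap, y)

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # triangle graph
L = np.diag(A.sum(axis=1)) - A
y = np.array([1.0, 1.2, 5.0])              # noisy signal with one outlier node
x = laplacian_denoise(y, L, gamma=1.0)     # -> [2.05, 2.1, 3.05]
```

The choice of gamma is exactly the bias-variance tradeoff the paper analyzes: small gamma tracks the noisy observations, large gamma biases the estimate toward graph-smooth signals.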

Accelerated Distributed Dual Averaging over Evolving Networks of Growing Connectivity

no code implementations 18 Apr 2017 Sijia Liu, Pin-Yu Chen, Alfred O. Hero

Our analysis reveals the connection between network topology design and the convergence rate of DDA, and provides quantitative evaluation of DDA acceleration for distributed optimization that is absent in the existing analysis.

Distributed Optimization

FEAST: An Automated Feature Selection Framework for Compilation Tasks

no code implementations 29 Oct 2016 Pai-Shun Ting, Chun-Chen Tu, Pin-Yu Chen, Ya-Yun Lo, Shin-Ming Cheng

In this paper, we propose FEAture Selection for compilation Tasks (FEAST), an efficient and automated framework for determining the most relevant and representative features from a feature pool.

Feature Selection

Multilayer Spectral Graph Clustering via Convex Layer Aggregation

no code implementations 23 Sep 2016 Pin-Yu Chen, Alfred O. Hero III

Multilayer graphs are commonly used for representing different relations between entities and handling heterogeneous data processing tasks.

Graph Clustering Spectral Graph Clustering

AMOS: An Automated Model Order Selection Algorithm for Spectral Graph Clustering

1 code implementation 21 Sep 2016 Pin-Yu Chen, Thibaut Gensollen, Alfred O. Hero III

One of the longstanding problems in spectral graph clustering (SGC) is the so-called model order selection problem: automated selection of the correct number of clusters.

Graph Clustering Spectral Graph Clustering

Phase Transitions and a Model Order Selection Criterion for Spectral Graph Clustering

1 code implementation 11 Apr 2016 Pin-Yu Chen, Alfred O. Hero

One of the longstanding open problems in spectral graph clustering (SGC) is the so-called model order selection problem: automated selection of the correct number of clusters.

Graph Clustering Model Selection +1

Multi-centrality Graph Spectral Decompositions and their Application to Cyber Intrusion Detection

no code implementations 23 Dec 2015 Pin-Yu Chen, Sutanay Choudhury, Alfred O. Hero

Many modern datasets can be represented as graphs and hence spectral decompositions such as graph principal component analysis (PCA) can be useful.

Dictionary Learning Intrusion Detection

Incremental Method for Spectral Clustering of Increasing Orders

no code implementations 23 Dec 2015 Pin-Yu Chen, Baichuan Zhang, Mohammad Al Hasan, Alfred O. Hero

The smallest eigenvalues and the associated eigenvectors (i.e., eigenpairs) of a graph Laplacian matrix have been widely used for spectral clustering and community detection.

Community Detection
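The role of the smallest eigenpairs can be illustrated with the classic two-community case: the sign pattern of the Fiedler vector (the eigenvector of the second-smallest Laplacian eigenvalue) recovers the partition. The bridge-graph example is illustrative; the paper's contribution is computing such eigenpairs incrementally as the requested order grows.

```python
import numpy as np

def fiedler_partition(adj):
    """Two-way spectral partition: threshold the Fiedler vector, i.e. the
    eigenvector of the second-smallest Laplacian eigenvalue, at zero."""
    lap = np.diag(adj.sum(axis=1)) - adj
    _, vecs = np.linalg.eigh(lap)          # eigh sorts eigenvalues ascending
    return vecs[:, 1] > 0

# Two triangles {0,1,2} and {3,4,5} joined by the single bridge edge (2, 3).
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
labels = fiedler_partition(A)              # separates the two triangles
```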

When Crowdsourcing Meets Mobile Sensing: A Social Network Perspective

no code implementations 3 Aug 2015 Pin-Yu Chen, Shin-Ming Cheng, Pai-Shun Ting, Chia-Wei Lien, Fu-Jen Chu

Mobile sensing is an emerging technology that utilizes agent-participatory data for decision making or state estimation, including multimedia applications.

Decision Making

Supervised Collective Classification for Crowdsourcing

no code implementations 23 Jul 2015 Pin-Yu Chen, Chia-Wei Lien, Fu-Jen Chu, Pai-Shun Ting, Shin-Ming Cheng

Crowdsourcing utilizes the wisdom of crowds for collective classification via information (e.g., labels of an item) provided by labelers.

Classification General Classification

Phase Transitions in Spectral Community Detection of Large Noisy Networks

no code implementations 9 Apr 2015 Pin-Yu Chen, Alfred O. Hero III

We prove phase transitions in community detectability as a function of the external edge connection probability and the noisy edge presence probability under a general network model where two arbitrarily connected communities are interconnected by random external edges.

Community Detection
