Search Results for author: Furong Huang

Found 65 papers, 22 papers with code

Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness

1 code implementation 6 Feb 2023 Yuancheng Xu, Yanchao Sun, Micah Goldblum, Tom Goldstein, Furong Huang

However, it is unclear whether existing robust training methods effectively increase the margin for each vulnerable point during training.

Adversarial Robustness

Is Model Ensemble Necessary? Model-based RL via a Single Model with Lipschitz Regularized Value Function

no code implementations 2 Feb 2023 Ruijie Zheng, Xiyao Wang, Huazhe Xu, Furong Huang

To test this hypothesis, we devise two practical robust training mechanisms through computing the adversarial noise and regularizing the value network's spectral norm to directly regularize the Lipschitz condition of the value functions.

Model-based Reinforcement Learning
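The mechanisms described above hinge on controlling the value network's Lipschitz constant through its spectral norm. As an illustrative sketch (not the authors' implementation; `spectral_norm` and `lipschitz_penalty` are assumed helper names), the spectral norm of each weight matrix can be estimated by power iteration and the product over layers used as a penalty:

```python
import numpy as np

def spectral_norm(W, n_iters=50):
    """Estimate the largest singular value of W by power iteration."""
    u = np.random.default_rng(0).normal(size=W.shape[1])
    for _ in range(n_iters):
        v = W @ u
        v /= np.linalg.norm(v)
        u = W.T @ v
        u /= np.linalg.norm(u)
    return float(v @ W @ u)

def lipschitz_penalty(weights):
    """Product of per-layer spectral norms: an upper bound on the
    Lipschitz constant of a feed-forward (e.g. ReLU) value network,
    usable as an additive regularization term on the loss."""
    return float(np.prod([spectral_norm(W) for W in weights]))
```

In a training loop this penalty would be added to the value-function loss with a small coefficient, discouraging large Lipschitz constants without attacking the inputs directly.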

SMART: Self-supervised Multi-task pretrAining with contRol Transformers

no code implementations 24 Jan 2023 Yanchao Sun, Shuang Ma, Ratnesh Madaan, Rogerio Bonatti, Furong Huang, Ashish Kapoor

Self-supervised pretraining has been extensively studied in language and vision domains, where a unified model can be easily adapted to various downstream tasks by pretraining representations without explicit labels.

Imitation Learning Reinforcement Learning (RL)

Adversarial Auto-Augment with Label Preservation: A Representation Learning Principle Guided Approach

1 code implementation 2 Nov 2022 Kaiwen Yang, Yanchao Sun, Jiahao Su, Fengxiang He, Xinmei Tian, Furong Huang, Tianyi Zhou, DaCheng Tao

In experiments, we show that our method consistently brings non-trivial improvements to the three aforementioned learning tasks in both efficiency and final performance, whether or not combined with strong pre-defined augmentations, e.g., on medical images where domain knowledge is unavailable and existing augmentation techniques perform poorly.

Data Augmentation Representation Learning

SWIFT: Rapid Decentralized Federated Learning via Wait-Free Model Communication

no code implementations 25 Oct 2022 Marco Bornstein, Tahseen Rabbani, Evan Wang, Amrit Singh Bedi, Furong Huang

Furthermore, we provide theoretical results for IID and non-IID settings without any bounded-delay assumption for slow clients, an assumption required by other asynchronous decentralized FL algorithms.

Federated Learning Image Classification

Efficient Adversarial Training without Attacking: Worst-Case-Aware Robust Reinforcement Learning

1 code implementation 12 Oct 2022 Yongyuan Liang, Yanchao Sun, Ruijie Zheng, Furong Huang

Recent studies reveal that a well-trained deep reinforcement learning (RL) policy can be particularly vulnerable to adversarial perturbations on input observations.

reinforcement-learning Reinforcement Learning (RL)

An Energy Optimized Specializing DAG Federated Learning based on Event Triggered Communication

no code implementations 26 Sep 2022 Xiaofeng Xue, Haokun Mao, Qiong Li, Furong Huang

Specializing Directed Acyclic Graph Federated Learning (SDAGFL) is a new federated learning framework that updates the model from devices with similar data distributions through Directed Acyclic Graph Distributed Ledger Technology (DAG-DLT).

Federated Learning

Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise

3 code implementations 19 Aug 2022 Arpit Bansal, Eitan Borgnia, Hong-Min Chu, Jie S. Li, Hamid Kazemi, Furong Huang, Micah Goldblum, Jonas Geiping, Tom Goldstein

We observe that the generative behavior of diffusion models is not strongly dependent on the choice of image degradation, and in fact an entire family of generative models can be constructed by varying this choice.

Image Restoration Variational Inference

Live in the Moment: Learning Dynamics Model Adapted to Evolving Policy

no code implementations 25 Jul 2022 Xiyao Wang, Wichayaporn Wongkamjan, Ruonan Jia, Furong Huang

Model-based reinforcement learning (RL) often achieves higher sample efficiency in practice than model-free RL by learning a dynamics model to generate samples for policy learning.

Continuous Control Model-based Reinforcement Learning +1

Transferring Fairness under Distribution Shifts via Fair Consistency Regularization

1 code implementation 26 Jun 2022 Bang An, Zora Che, Mucong Ding, Furong Huang

In many real-world applications, however, such an assumption is often violated as previously trained fair models are often deployed in a different environment, and the fairness of such models has been observed to collapse.

Fairness

FedBC: Calibrating Global and Local Models via Federated Learning Beyond Consensus

no code implementations 22 Jun 2022 Amrit Singh Bedi, Chen Fan, Alec Koppel, Anit Kumar Sahu, Brian M. Sadler, Furong Huang, Dinesh Manocha

In this work, we quantitatively calibrate the performance of global and local models in federated learning through a multi-criterion optimization-based framework, which we cast as a constrained program.

Federated Learning

Certifiably Robust Policy Learning against Adversarial Communication in Multi-agent Systems

no code implementations 21 Jun 2022 Yanchao Sun, Ruijie Zheng, Parisa Hassanzadeh, Yongyuan Liang, Soheil Feizi, Sumitra Ganesh, Furong Huang

Communication is important in many multi-agent reinforcement learning (MARL) problems for agents to share information and make good decisions.

Multi-agent Reinforcement Learning

Posterior Coreset Construction with Kernelized Stein Discrepancy for Model-Based Reinforcement Learning

no code implementations 2 Jun 2022 Souradip Chakraborty, Amrit Singh Bedi, Alec Koppel, Brian M. Sadler, Furong Huang, Pratap Tokekar, Dinesh Manocha

In this work, we propose a novel ${\bf K}$ernelized ${\bf S}$tein Discrepancy-based Posterior Sampling for ${\bf RL}$ algorithm (named $\texttt{KSRL}$) which extends model-based RL based upon posterior sampling (PSRL) in several ways: we (i) relax the need for any smoothness or Gaussian assumptions, allowing for complex mixture models; (ii) ensure it is applicable to large-scale training by incorporating a compression step such that the posterior consists of a \emph{Bayesian coreset} of only statistically significant past state-action pairs; and (iii) develop a novel regret analysis of PSRL based upon integral probability metrics, which, under a smoothness condition on the constructed posterior, can be evaluated in closed form as the kernelized Stein discrepancy (KSD).

Continuous Control Model-based Reinforcement Learning +2

End-to-end Algorithm Synthesis with Recurrent Networks: Logical Extrapolation Without Overthinking

1 code implementation 11 Feb 2022 Arpit Bansal, Avi Schwarzschild, Eitan Borgnia, Zeyad Emam, Furong Huang, Micah Goldblum, Tom Goldstein

Algorithmic extrapolation can be achieved through recurrent systems, which can be iterated many times to solve difficult reasoning problems.

Logical Reasoning

Transfer RL across Observation Feature Spaces via Model-Based Regularization

no code implementations ICLR 2022 Yanchao Sun, Ruijie Zheng, Xiyao Wang, Andrew Cohen, Furong Huang

In many reinforcement learning (RL) applications, the observation space is specified by human developers and restricted by physical realizations, and may thus be subject to dramatic changes over time (e.g., increased number of observable features).

Reinforcement Learning (RL)

Understanding the Generalization Benefit of Model Invariance from a Data Perspective

1 code implementation NeurIPS 2021 Sicheng Zhu, Bang An, Furong Huang

Based on this notion, we refine the generalization bound for invariant models and characterize the suitability of a set of data transformations by the sample covering number induced by transformations, i.e., the smallest size of its induced sample covers.

Generalization Bounds

Scaling-up Diverse Orthogonal Convolutional Networks by a Paraunitary Framework

no code implementations 29 Sep 2021 Jiahao Su, Wonmin Byeon, Furong Huang

Some of these designs are not exactly orthogonal, while others only consider standard convolutional layers and propose specific classes of their realizations.

Reinforcement Learning under a Multi-agent Predictive State Representation Model: Method and Theory

no code implementations ICLR 2022 Zhi Zhang, Zhuoran Yang, Han Liu, Pratap Tokekar, Furong Huang

This paper proposes a new algorithm for learning the optimal policies under a novel multi-agent predictive state representation reinforcement learning model.

reinforcement-learning Reinforcement Learning (RL)

Tuformer: Data-Driven Design of Expressive Transformer by Tucker Tensor Representation

no code implementations ICLR 2022 Xiaoyu Liu, Jiahao Su, Furong Huang

Guided by tensor diagram representations, we formulate a design space where we can analyze the expressive power of the network structure, providing new directions and possibilities for enhanced performance.

A Closer Look at Distribution Shifts and Out-of-Distribution Generalization on Graphs

no code implementations 29 Sep 2021 Mucong Ding, Kezhi Kong, Jiuhai Chen, John Kirchenbauer, Micah Goldblum, David Wipf, Furong Huang, Tom Goldstein

We observe that in most cases, we need both a suitable domain generalization algorithm and a strong GNN backbone model to optimize out-of-distribution test performance.

Domain Generalization Graph Classification +1

Thinking Deeper With Recurrent Networks: Logical Extrapolation Without Overthinking

no code implementations 29 Sep 2021 Arpit Bansal, Avi Schwarzschild, Eitan Borgnia, Zeyad Emam, Furong Huang, Micah Goldblum, Tom Goldstein

Classical machine learning systems perform best when they are trained and tested on the same distribution, and they lack a mechanism to increase model power after training is complete.

Comfetch: Federated Learning of Large Networks on Memory-Constrained Clients via Sketching

no code implementations 17 Sep 2021 Tahseen Rabbani, Brandon Feng, Yifan Yang, Arjun Rajkumar, Amitabh Varshney, Furong Huang

A popular application of federated learning is using many clients to train a deep neural network, the parameters of which are maintained on a central server.

Federated Learning

Practical and Fast Momentum-Based Power Methods

no code implementations 20 Aug 2021 Tahseen Rabbani, Apollo Jain, Arjun Rajkumar, Furong Huang

The power method is a classical algorithm with broad applications in machine learning tasks, including streaming PCA, spectral clustering, and low-rank matrix approximation.
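As context for the momentum-based variants this paper studies, here is a minimal sketch of classical power iteration together with a heavy-ball momentum recurrence of the form x_{t+1} = A x_t - beta x_{t-1}; the function names and the specific recurrence are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def power_method(A, n_iters=100, seed=0):
    """Classical power iteration on a symmetric matrix A; returns the
    dominant eigenvalue (Rayleigh quotient) and eigenvector."""
    x = np.random.default_rng(seed).normal(size=A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(n_iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return float(x @ A @ x), x

def power_method_momentum(A, beta, n_iters=100, seed=0):
    """Heavy-ball variant: x_{t+1} = A x_t - beta * x_{t-1}, with both
    iterates rescaled by the same factor each step to stay bounded."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=A.shape[0])
    x /= np.linalg.norm(x)
    x_prev = np.zeros_like(x)
    for _ in range(n_iters):
        x_next = A @ x - beta * x_prev
        nrm = np.linalg.norm(x_next)
        x_prev, x = x / nrm, x_next / nrm
    return float(x @ A @ x), x
```

For a well-chosen beta below the square of the spectral gap, the momentum term damps the contribution of the non-dominant eigendirections faster than plain iteration.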

Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability

1 code implementation 3 Aug 2021 Roman Levin, Manli Shu, Eitan Borgnia, Furong Huang, Micah Goldblum, Tom Goldstein

We find that samples which cause similar parameters to malfunction are semantically similar.

Certified Defense via Latent Space Randomized Smoothing with Orthogonal Encoders

no code implementations 1 Aug 2021 Huimin Zeng, Jiahao Su, Furong Huang

Randomized Smoothing (RS), being one of few provable defenses, has been showing great effectiveness and scalability in terms of defending against $\ell_2$-norm adversarial perturbations.

Scaling-up Diverse Orthogonal Convolutional Networks with a Paraunitary Framework

no code implementations 16 Jun 2021 Jiahao Su, Wonmin Byeon, Furong Huang

To address this problem, we propose a theoretical framework for orthogonal convolutional layers, which establishes the equivalence between various orthogonal convolutional layers in the spatial domain and the paraunitary systems in the spectral domain.

Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL

1 code implementation ICLR 2022 Yanchao Sun, Ruijie Zheng, Yongyuan Liang, Furong Huang

Existing works on adversarial RL either use heuristics-based methods that may not find the strongest adversary, or directly train an RL-based adversary by treating the agent as a part of the environment, which can find the optimal adversary but may become intractable in a large state space.

Reinforcement Learning (RL)

Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks

1 code implementation NeurIPS 2021 Avi Schwarzschild, Eitan Borgnia, Arjun Gupta, Furong Huang, Uzi Vishkin, Micah Goldblum, Tom Goldstein

In this work, we show that recurrent networks trained to solve simple problems with few recurrent steps can indeed solve much more complex problems simply by performing additional recurrences during inference.

Guided Hyperparameter Tuning Through Visualization and Inference

no code implementations 24 May 2021 Hyekang Joo, Calvin Bao, Ishan Sen, Furong Huang, Leilani Battle

Moreover, an analysis of the variance in a selected performance metric, in the context of the model's hyperparameters, shows the impact that certain hyperparameters have on that metric.

Insta-RS: Instance-wise Randomized Smoothing for Improved Robustness and Accuracy

no code implementations 7 Mar 2021 Chen Chen, Kezhi Kong, Peihong Yu, Juan Luque, Tom Goldstein, Furong Huang

Randomized smoothing (RS) is an effective and scalable technique for constructing neural network classifiers that are certifiably robust to adversarial perturbations.
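For background, the standard randomized-smoothing prediction (in the style of Cohen et al.'s smoothed classifier, not Insta-RS's instance-wise scheme) can be sketched as a Monte Carlo majority vote over Gaussian perturbations; the crude probability clamp below stands in for the confidence bound used in practice, and `smoothed_predict` is an assumed name:

```python
import numpy as np
from statistics import NormalDist

def smoothed_predict(f, x, sigma, n=1000, seed=0):
    """Monte Carlo estimate of the smoothed classifier
    g(x) = argmax_c P[f(x + N(0, sigma^2 I)) = c], plus the certified
    l2 radius sigma * Phi^{-1}(p_top) when the top class wins a
    clear majority of the votes."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=sigma, size=(n,) + x.shape)
    votes = np.bincount([f(x + eps) for eps in noise])
    top = int(np.argmax(votes))
    # Crude clamp; practical certification replaces this with a lower
    # confidence bound (e.g. Clopper-Pearson) on the top-class probability.
    p_top = min(votes[top] / n, 1 - 1.0 / n)
    radius = sigma * NormalDist().inv_cdf(p_top) if p_top > 0.5 else 0.0
    return top, radius
```

The certified radius grows with both the noise level sigma and the margin by which the top class dominates under noise.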

DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations

1 code implementation 2 Mar 2021 Eitan Borgnia, Jonas Geiping, Valeriia Cherepanova, Liam Fowl, Arjun Gupta, Amin Ghiasi, Furong Huang, Micah Goldblum, Tom Goldstein

The InstaHide method has recently been proposed as an alternative to DP training that leverages supposed privacy properties of the mixup augmentation, although without rigorous guarantees.

Data Poisoning

Are Adversarial Examples Created Equal? A Learnable Weighted Minimax Risk for Robustness under Non-uniform Attacks

no code implementations 24 Oct 2020 Huimin Zeng, Chen Zhu, Tom Goldstein, Furong Huang

Adversarial training has proven to be an effective method for defending against adversarial examples, and is one of the few defenses that withstand strong attacks.

Vulnerability-Aware Poisoning Mechanism for Online RL with Unknown Dynamics

no code implementations ICLR 2021 Yanchao Sun, Da Huo, Furong Huang

Poisoning attacks on Reinforcement Learning (RL) systems can exploit an RL algorithm's vulnerabilities and cause the learning process to fail.

Reinforcement Learning (RL)

MaxVA: Fast Adaptation of Step Sizes by Maximizing Observed Variance of Gradients

1 code implementation 21 Jun 2020 Chen Zhu, Yu Cheng, Zhe Gan, Furong Huang, Jingjing Liu, Tom Goldstein

Adaptive gradient methods such as RMSProp and Adam use an exponential moving average of the squared gradient to compute adaptive step sizes, achieving better convergence than SGD in the face of noisy objectives.

Image Classification Machine Translation +3
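As a reminder of the baseline that MaxVA builds on, a single RMSProp-style update with an exponential moving average of the squared gradient might look like the following (a generic sketch of the adaptive-step-size idea, not MaxVA itself):

```python
import numpy as np

def rmsprop_step(theta, grad, state, lr=1e-2, beta=0.99, eps=1e-8):
    """One RMSProp-style update: an exponential moving average of the
    squared gradient scales the step size per coordinate."""
    state = beta * state + (1 - beta) * grad**2
    theta = theta - lr * grad / (np.sqrt(state) + eps)
    return theta, state

# Minimize f(theta) = theta^2 starting from theta = 1.
theta, state = np.array([1.0]), np.zeros(1)
for _ in range(500):
    theta, state = rmsprop_step(theta, 2 * theta, state)
```

Per-coordinate normalization by the moving average makes the effective step roughly scale-invariant to the gradient magnitude, which is what the adaptive-variance methods above refine.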

Using Wavelets and Spectral Methods to Study Patterns in Image-Classification Datasets

1 code implementation 17 Jun 2020 Roozbeh Yousefzadeh, Furong Huang

We show that each image can be written as the summation of a finite number of rank-1 patterns in the wavelet space, providing a low rank approximation that captures the structures and patterns essential for learning.

Adversarial Robustness General Classification +2
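The rank-1 decomposition claim can be illustrated with an SVD in pixel space; the paper works in the wavelet domain, but the principle of writing an image as a sum of rank-1 patterns is the same (`rank1_terms` is an illustrative helper, not the authors' code):

```python
import numpy as np

def rank1_terms(X, r):
    """Best rank-r approximation of X as a sum of r rank-1 patterns
    sigma_i * u_i v_i^T (Eckart-Young, via the SVD)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(r))
```

Truncating the sum at small r gives the low-rank approximation that captures the dominant structures in the image.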

Improving the Tightness of Convex Relaxation Bounds for Training Certifiably Robust Classifiers

no code implementations 22 Feb 2020 Chen Zhu, Renkun Ni, Ping-Yeh Chiang, Hengduo Li, Furong Huang, Tom Goldstein

Convex relaxations are effective for training and certifying neural networks against norm-bounded adversarial attacks, but they leave a large gap between certifiable and empirical robustness.

Convolutional Tensor-Train LSTM for Spatio-temporal Learning

2 code implementations NeurIPS 2020 Jiahao Su, Wonmin Byeon, Jean Kossaifi, Furong Huang, Jan Kautz, Animashree Anandkumar

Learning from spatio-temporal data has numerous applications such as human-behavior analysis, object tracking, video compression, and physics simulation. However, existing methods still perform poorly on challenging video tasks such as long-term forecasting.

 Ranked #1 on Video Prediction on KTH (Cond metric)

Activity Recognition Video Compression +1

TempLe: Learning Template of Transitions for Sample Efficient Multi-task RL

no code implementations 16 Feb 2020 Yanchao Sun, Xiangyu Yin, Furong Huang

Transferring knowledge among various environments is important to efficiently learn multiple tasks online.

ARMA Nets: Expanding Receptive Field for Dense Prediction

1 code implementation NeurIPS 2020 Jiahao Su, Shiqi Wang, Furong Huang

In this work, we propose to replace any traditional convolutional layer with an autoregressive moving-average (ARMA) layer, a novel module with an adjustable receptive field controlled by the learnable autoregressive coefficients.

Image Classification Semantic Segmentation +1

Understanding Generalization in Deep Learning via Tensor Methods

no code implementations 14 Jan 2020 Jingling Li, Yanchao Sun, Jiahao Su, Taiji Suzuki, Furong Huang

Recently proposed complexity measures have provided insights to understanding the generalizability in neural networks from perspectives of PAC-Bayes, robustness, overparametrization, compression and so on.

Can Agents Learn by Analogy? An Inferable Model for PAC Reinforcement Learning

1 code implementation 21 Dec 2019 Yanchao Sun, Furong Huang

We propose a new model-based method called Greedy Inference Model (GIM) that infers the unknown dynamics from known dynamics based on the internal spectral properties of the environment.

Model-based Reinforcement Learning reinforcement-learning +1

Sampling-Free Learning of Bayesian Quantized Neural Networks

no code implementations ICLR 2020 Jiahao Su, Milan Cvitkovic, Furong Huang

Bayesian learning of model parameters in neural networks is important in scenarios where estimates with well-calibrated uncertainty are important.

Image Classification

Label Smoothing and Logit Squeezing: A Replacement for Adversarial Training?

no code implementations 25 Oct 2019 Ali Shafahi, Amin Ghiasi, Furong Huang, Tom Goldstein

Adversarial training is one of the strongest defenses against adversarial attacks, but it requires adversarial examples to be generated for every mini-batch during optimization.

Adversarial Robustness
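For context, the per-batch attack that makes adversarial training expensive is typically a one-step gradient perturbation such as FGSM. A toy sketch on a linear classifier (illustrative numbers and setup, not the paper's experiments):

```python
import numpy as np

def fgsm(x, grad_x, eps):
    """Fast gradient sign method: a one-step l_inf-bounded attack that
    moves the input along the sign of the loss gradient."""
    return x + eps * np.sign(grad_x)

# Toy linear classifier: score = w @ x; for label y, the loss
# -y * (w @ x) has input gradient -y * w.
w = np.array([1.0, -2.0])
x = np.array([0.5, 0.2])
y = 1
x_adv = fgsm(x, -y * w, eps=0.15)
```

Here the clean input is classified correctly (positive score) while the perturbed input flips the sign of the score, which is the per-example cost that label smoothing and logit squeezing try to avoid paying.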

Improved Training of Certifiably Robust Models

no code implementations 25 Sep 2019 Chen Zhu, Renkun Ni, Ping-Yeh Chiang, Hengduo Li, Furong Huang, Tom Goldstein

Convex relaxations are effective for training and certifying neural networks against norm-bounded adversarial attacks, but they leave a large gap between certifiable and empirical (PGD) robustness.

Convolutional Tensor-Train LSTM for Long-Term Video Prediction

no code implementations 25 Sep 2019 Jiahao Su, Wonmin Byeon, Furong Huang, Jan Kautz, Animashree Anandkumar

Long-term video prediction is highly challenging since it entails simultaneously capturing spatial and temporal information across a long range of image frames. Standard recurrent models are ineffective since they are prone to error propagation and cannot effectively capture higher-order correlations.

Video Prediction

Understanding Generalization through Visualizations

2 code implementations NeurIPS Workshop ICBINB 2020 W. Ronny Huang, Zeyad Emam, Micah Goldblum, Liam Fowl, Justin K. Terry, Furong Huang, Tom Goldstein

The power of neural networks lies in their ability to generalize to unseen data, yet the underlying reasons for this phenomenon remain elusive.

Tensorial Neural Networks: Generalization of Neural Networks and Application to Model Compression

no code implementations 25 May 2018 Jiahao Su, Jingling Li, Bobby Bhattacharjee, Furong Huang

We propose tensorial neural networks (TNNs), a generalization of existing neural networks by extending tensor operations on low-order operands to those on high-order ones.

Model Compression Tensor Decomposition

Guaranteed Simultaneous Asymmetric Tensor Decomposition via Orthogonalized Alternating Least Squares

no code implementations 25 May 2018 Furong Huang, Jialin Li, Xuchen You

We propose a Slicing Initialized Alternating Subspace Iteration (s-ASI) method that is guaranteed to recover the top $r$ components ($\epsilon$-close) simultaneously for (a)symmetric tensors almost surely in the noiseless case (and with high probability under bounded noise) using $O(\log(\log \frac{1}{\epsilon}))$ steps of tensor subspace iterations.

Tensor Decomposition

An end-to-end Differentially Private Latent Dirichlet Allocation Using a Spectral Algorithm

no code implementations ICML 2020 Christopher DeCarolis, Mukul Ram, Seyed A. Esmaeili, Yu-Xiang Wang, Furong Huang

Overall, by combining the sensitivity and utility characterization, we obtain an end-to-end differentially private spectral algorithm for LDA and identify the corresponding configuration that outperforms others in any specific regime.

Variational Inference

Learning Deep ResNet Blocks Sequentially using Boosting Theory

no code implementations ICML 2018 Furong Huang, Jordan Ash, John Langford, Robert Schapire

We prove that the training error decays exponentially with the depth $T$ if the \emph{weak module classifiers} that we train perform slightly better than some weak baseline.

Non-negative Factorization of the Occurrence Tensor from Financial Contracts

1 code implementation 10 Dec 2016 Zheng Xu, Furong Huang, Louiqa Raschid, Tom Goldstein

We propose an algorithm for the non-negative factorization of an occurrence tensor built from heterogeneous networks.

Unsupervised learning of transcriptional regulatory networks via latent tree graphical models

no code implementations 20 Sep 2016 Anthony Gitter, Furong Huang, Ragupathyraj Valluvan, Ernest Fraenkel, Animashree Anandkumar

We use a latent tree graphical model to analyze gene expression without relying on transcription factor expression as a proxy for regulator activity.

Unsupervised Learning of Word-Sequence Representations from Scratch via Convolutional Tensor Decomposition

no code implementations 10 Jun 2016 Furong Huang, Animashree Anandkumar

More importantly, it is challenging for pre-trained models to obtain word-sequence embeddings that are universally good for all downstream tasks or for any new datasets.

Dictionary Learning Tensor Decomposition

Discovery of Latent Factors in High-dimensional Data Using Tensor Methods

no code implementations 10 Jun 2016 Furong Huang

This thesis presents theoretical results on convergence to globally optimal solution of tensor decomposition using the stochastic gradient descent, despite non-convexity of the objective.

Dimensionality Reduction Stochastic Block Model +1

Discovering Neuronal Cell Types and Their Gene Expression Profiles Using a Spatial Point Process Mixture Model

no code implementations 4 Feb 2016 Furong Huang, Animashree Anandkumar, Christian Borgs, Jennifer Chayes, Ernest Fraenkel, Michael Hawrylycz, Ed Lein, Alessandro Ingrosso, Srinivas Turaga

Single-cell RNA sequencing can now be used to measure the gene expression profiles of individual neurons and to categorize neurons based on their gene expression profiles.

Convolutional Dictionary Learning through Tensor Factorization

no code implementations 10 Jun 2015 Furong Huang, Animashree Anandkumar

Tensor methods have emerged as a powerful paradigm for consistent learning of many latent variable models such as topic models, independent component analysis and dictionary learning.

Dictionary Learning Tensor Decomposition +1

Escaping From Saddle Points --- Online Stochastic Gradient for Tensor Decomposition

1 code implementation 6 Mar 2015 Rong Ge, Furong Huang, Chi Jin, Yang Yuan

To the best of our knowledge this is the first work that gives global convergence guarantees for stochastic gradient descent on non-convex functions with exponentially many local minima and saddle points.

Tensor Decomposition
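The phenomenon can be demonstrated on a toy strict-saddle function: plain gradient descent initialized exactly at the saddle stalls there, while adding noise to each step lets the iterate escape toward a minimum (a minimal illustration of the idea, not the paper's analysis; `noisy_gd` and the step sizes are assumptions):

```python
import numpy as np

def noisy_gd(grad, w0, lr=0.05, noise=0.1, steps=500, seed=0):
    """Gradient descent with isotropic Gaussian noise added to each
    step; the noise lets the iterate escape strict saddle points
    where plain GD started at the saddle would stall."""
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    for _ in range(steps):
        w -= lr * (grad(w) + noise * rng.normal(size=w.shape))
    return w

# f(x, y) = (x^2 - 1)^2 + y^2 has a strict saddle at the origin
# and global minima at (+-1, 0).
grad = lambda v: np.array([4 * v[0] * (v[0]**2 - 1), 2 * v[1]])
w = noisy_gd(grad, [0.0, 0.0])
```

Started at the saddle (0, 0), where the gradient vanishes, the noisy iterate drifts into the unstable direction and settles near one of the two minima.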

Guaranteed Scalable Learning of Latent Tree Models

no code implementations 18 Jun 2014 Furong Huang, Niranjan U. N., Ioakeim Perros, Robert Chen, Jimeng Sun, Anima Anandkumar

We present an integrated approach for structure and parameter estimation in latent tree graphical models.

Online Tensor Methods for Learning Latent Variable Models

1 code implementation 3 Sep 2013 Furong Huang, U. N. Niranjan, Mohammad Umar Hakeem, Animashree Anandkumar

We introduce an online tensor decomposition based approach for two latent variable modeling problems, namely (1) community detection, in which we learn the latent communities that the social actors in social networks belong to, and (2) topic modeling, in which we infer hidden topics of text articles.

Community Detection Tensor Decomposition

Learning Mixtures of Tree Graphical Models

no code implementations NeurIPS 2012 Anima Anandkumar, Daniel J. Hsu, Furong Huang, Sham M. Kakade

We consider unsupervised estimation of mixtures of discrete graphical models, where the class variable is hidden and each mixture component can have a potentially different Markov graph structure and parameters over the observed variables.
