Search Results for author: Hai Li

Found 96 papers, 40 papers with code

Do Counterfactual Examples Complicate Adversarial Training?

no code implementations16 Apr 2024 Eric Yeats, Cameron Darwin, Eduardo Ortega, Frank Liu, Hai Li

We leverage diffusion models to study the robustness-performance tradeoff of robust classifiers.

counterfactual

Min-K%++: Improved Baseline for Detecting Pre-Training Data from Large Language Models

no code implementations3 Apr 2024 Jingyang Zhang, Jingwei Sun, Eric Yeats, Yang Ouyang, Martin Kuo, Jianyi Zhang, Hao Yang, Hai Li

The problem of pre-training data detection for large language models (LLMs) has received growing attention due to its implications in critical issues like copyright violation and test data contamination.

Vox-Fusion++: Voxel-based Neural Implicit Dense Tracking and Mapping with Multi-maps

no code implementations19 Mar 2024 Hongjia Zhai, Hai Li, Xingrui Yang, Gan Huang, Yuhang Ming, Hujun Bao, Guofeng Zhang

In this paper, we introduce Vox-Fusion++, a multi-maps-based robust dense tracking and mapping system that seamlessly fuses neural implicit representations with traditional volumetric fusion techniques.

Peeking Behind the Curtains of Residual Learning

no code implementations13 Feb 2024 Tunhou Zhang, Feng Yan, Hai Li, Yiran Chen

The utilization of residual learning has become widespread in deep and scalable neural nets.

Adversarial Estimation of Topological Dimension with Harmonic Score Maps

no code implementations11 Dec 2023 Eric Yeats, Cameron Darwin, Frank Liu, Hai Li

Quantification of the number of variables needed to locally explain complex data is often the first step to better understanding it.

SD-NAE: Generating Natural Adversarial Examples with Stable Diffusion

1 code implementation21 Nov 2023 Yueqian Lin, Jingyang Zhang, Yiran Chen, Hai Li

Natural Adversarial Examples (NAEs), images arising naturally from the environment and capable of deceiving classifiers, are instrumental in robustly evaluating and identifying vulnerabilities in trained models.

DistDNAS: Search Efficient Feature Interactions within 2 Hours

no code implementations1 Nov 2023 Tunhou Zhang, Wei Wen, Igor Fedorov, Xi Liu, Buyun Zhang, Fangqiu Han, Wen-Yen Chen, Yiping Han, Feng Yan, Hai Li, Yiran Chen

To optimize search efficiency, DistDNAS distributes the search and aggregates the choice of optimal interaction modules on varying data dates, achieving over 25x speed-up and reducing search cost from 2 days to 2 hours.

Recommendation Systems

Farthest Greedy Path Sampling for Two-shot Recommender Search

no code implementations31 Oct 2023 Yufan Cao, Tunhou Zhang, Wei Wen, Feng Yan, Hai Li, Yiran Chen

FGPS enhances path diversity to facilitate more comprehensive supernet exploration, while emphasizing path quality to ensure the effective identification and utilization of promising architectures.

Click-Through Rate Prediction Neural Architecture Search

Depth Completion with Multiple Balanced Bases and Confidence for Dense Monocular SLAM

no code implementations8 Sep 2023 Weijian Xie, Guanyi Chu, Quanhao Qian, Yihao Yu, Hai Li, Danpeng Chen, Shangjin Zhai, Nan Wang, Hujun Bao, Guofeng Zhang

In this paper, we propose a novel method that integrates a light-weight depth completion network into a sparse SLAM system using a multi-basis depth representation, so that dense mapping can be performed online even on a mobile phone.

Depth Completion

SIO: Synthetic In-Distribution Data Benefits Out-of-Distribution Detection

1 code implementation25 Mar 2023 Jingyang Zhang, Nathan Inkawhich, Randolph Linderman, Ryan Luley, Yiran Chen, Hai Li

Building up reliable Out-of-Distribution (OOD) detectors is challenging, often requiring the use of OOD data during training.

Out-of-Distribution Detection

MAPSeg: Unified Unsupervised Domain Adaptation for Heterogeneous Medical Image Segmentation Based on 3D Masked Autoencoding and Pseudo-Labeling

1 code implementation16 Mar 2023 Xuzhe Zhang, Yuhao Wu, Elsa Angelini, Ang Li, Jia Guo, Jerod M. Rasmussen, Thomas G. O'Connor, Pathik D. Wadhwa, Andrea Parolin Jackowski, Hai Li, Jonathan Posner, Andrew F. Laine, Yun Wang

In this study, we introduce Masked Autoencoding and Pseudo-Labeling Segmentation (MAPSeg), a unified UDA framework with great versatility and superior performance for heterogeneous and volumetric medical image segmentation.

Domain Generalization Image Segmentation +5

Disentangling Learning Representations with Density Estimation

1 code implementation8 Feb 2023 Eric Yeats, Frank Liu, Hai Li

Disentangled learning representations have promising utility in many applications, but they currently suffer from serious reliability issues.

Density Estimation Disentanglement

GDOD: Effective Gradient Descent using Orthogonal Decomposition for Multi-Task Learning

no code implementations31 Jan 2023 Xin Dong, Ruize Wu, Chao Xiong, Hai Li, Lei Cheng, Yong He, Shiyou Qian, Jian Cao, Linjian Mo

GDOD decomposes gradients into task-shared and task-conflict components explicitly and adopts a general update rule for avoiding interference across all task gradients.

Multi-Task Learning
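
A toy sketch of the gradient-decomposition idea described above may help: each task gradient is split into a component along a shared direction and a task-specific residual, and residuals are kept only when they do not conflict with the other tasks. This is an illustrative approximation, not GDOD's exact orthogonal decomposition or update rule.

```python
# Illustrative sketch only: split each task gradient into a shared component
# and a residual, keeping residuals that do not oppose any task's gradient.
# GDOD's actual orthogonal decomposition and update rule are in the paper.
import numpy as np

def combine_task_gradients(grads):
    """grads: list of 1-D arrays, one flattened gradient per task."""
    G = np.stack(grads)                              # (num_tasks, num_params)
    shared = G.mean(axis=0)                          # crude proxy for the shared direction
    unit = shared / (np.linalg.norm(shared) + 1e-12)
    update = shared.copy()
    for g in G:
        residual = g - np.dot(g, unit) * unit        # task-specific component
        if all(np.dot(residual, h) >= 0 for h in G): # drop conflicting residuals
            update += residual / len(G)
    return update

# usage: grads = [flattened gradient of each task's loss]; step along -combine_task_gradients(grads)
```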

HCE: Improving Performance and Efficiency with Heterogeneously Compressed Neural Network Ensemble

no code implementations18 Jan 2023 Jingchi Zhang, Huanrui Yang, Hai Li

We propose a new perspective on exploring the intrinsic diversity within a model architecture to build an efficient DNN ensemble.

Ensemble Learning Model Compression +1

PIDS: Joint Point Interaction-Dimension Search for 3D Point Cloud

1 code implementation28 Nov 2022 Tunhou Zhang, Mingyuan Ma, Feng Yan, Hai Li, Yiran Chen

In this work, we establish PIDS, a novel paradigm to jointly explore point interactions and point dimensions to serve semantic segmentation on point cloud data.

Neural Architecture Search Robust 3D Semantic Segmentation +1

More Generalized and Personalized Unsupervised Representation Learning In A Distributed System

no code implementations11 Nov 2022 Yuewei Yang, Jingwei Sun, Ang Li, Hai Li, Yiran Chen

In this work, we propose a novel method, FedStyle, to learn a more generalized global model by infusing local style information with local content information for contrastive learning, and to learn more personalized local models by inducing local style information for downstream tasks.

Contrastive Learning Federated Learning +1

Preserving background sound in noise-robust voice conversion via multi-task learning

no code implementations6 Nov 2022 Jixun Yao, Yi Lei, Qing Wang, Pengcheng Guo, Ziqian Ning, Lei Xie, Hai Li, Junhui Liu, Danming Xie

Background sound is an informative form of art that is helpful in providing a more immersive experience in real-application voice conversion (VC) scenarios.

Multi-Task Learning Voice Conversion

Vox-Fusion: Dense Tracking and Mapping with Voxel-based Neural Implicit Representation

1 code implementation28 Oct 2022 Xingrui Yang, Hai Li, Hongjia Zhai, Yuhang Ming, Yuqian Liu, Guofeng Zhang

In this work, we present a dense tracking and mapping system named Vox-Fusion, which seamlessly fuses neural implicit representations with traditional volumetric fusion methods.

Approximate Computing and the Efficient Machine Learning Expedition

no code implementations2 Oct 2022 Jörg Henkel, Hai Li, Anand Raghunathan, Mehdi B. Tahoori, Swagath Venkataramani, Xiaoxuan Yang, Georgios Zervakis

In this work, we enlighten the synergistic nature of AxC and ML and elucidate the impact of AxC in designing efficient ML systems.

Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction

no code implementations30 Sep 2022 Jianyi Zhang, Ang Li, Minxue Tang, Jingwei Sun, Xiang Chen, Fan Zhang, Changyou Chen, Yiran Chen, Hai Li

Based on this measure, we also design a computation-efficient client sampling strategy, such that the actively selected clients will generate a more class-balanced grouped dataset with theoretical guarantees.

Federated Learning Privacy Preserving
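
As a hypothetical illustration of class-imbalance-aware client sampling, the greedy selector below picks clients whose label histograms bring the grouped dataset closest to uniform. Fed-CBS itself uses a dedicated class-imbalance measure and operates under privacy constraints; this sketch only conveys the selection intuition.

```python
# Hypothetical greedy sketch; Fed-CBS's actual measure and privacy-preserving
# mechanism are described in the paper.
import numpy as np

def imbalance(hist):
    p = hist / hist.sum()
    return float(np.sum((p - 1.0 / len(p)) ** 2))    # distance to uniform

def select_clients(client_hists, k):
    """client_hists: (num_clients, num_classes) label counts per client."""
    chosen, total = [], np.zeros(client_hists.shape[1])
    for _ in range(k):
        remaining = [i for i in range(len(client_hists)) if i not in chosen]
        best = min(remaining, key=lambda i: imbalance(total + client_hists[i]))
        chosen.append(best)
        total += client_hists[best]
    return chosen

# usage: select_clients(np.random.default_rng(0).integers(1, 50, (20, 10)), k=5)
```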

NashAE: Disentangling Representations through Adversarial Covariance Minimization

1 code implementation21 Sep 2022 Eric Yeats, Frank Liu, David Womble, Hai Li

We present a self-supervised method to disentangle factors of variation in high-dimensional data that does not rely on prior knowledge of the underlying variation profile (e.g., no assumptions on the number or distribution of the individual latent variables to be extracted).

Disentanglement
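
A compact PyTorch sketch of the adversarial covariance game: per-unit regressors try to predict each latent element from the others, while the autoencoder is penalized for any covariance between an element and its prediction. Network sizes, the synthetic data, and the 0.1 penalty weight are illustrative choices, not the paper's configuration.

```python
# Illustrative sketch of adversarial covariance minimization (sizes and
# weights are arbitrary; see the NashAE paper for the real setup).
import torch
import torch.nn as nn

torch.manual_seed(0)
x_dim, z_dim = 32, 8
enc = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, z_dim))
dec = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, x_dim))
regs = nn.ModuleList([nn.Linear(z_dim - 1, 1) for _ in range(z_dim)])  # one adversary per latent
opt_ae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
opt_reg = torch.optim.Adam(regs.parameters(), lr=1e-3)

def cov(a, b):
    return ((a - a.mean()) * (b - b.mean())).mean()

def others(z, i):
    return torch.cat([z[:, :i], z[:, i + 1:]], dim=1)

for step in range(200):
    x = torch.randn(64, x_dim)                       # stand-in for a real data batch
    # adversary step: fit each latent from the remaining ones
    z = enc(x).detach()
    reg_loss = sum((regs[i](others(z, i)).squeeze(1) - z[:, i]).pow(2).mean()
                   for i in range(z_dim))
    opt_reg.zero_grad(); reg_loss.backward(); opt_reg.step()
    # autoencoder step: reconstruct while minimizing covariance with predictions
    z = enc(x)
    cov_pen = sum(cov(z[:, i], regs[i](others(z, i)).squeeze(1)) for i in range(z_dim))
    ae_loss = (dec(z) - x).pow(2).mean() + 0.1 * cov_pen
    opt_ae.zero_grad(); ae_loss.backward(); opt_ae.step()
```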

Fine-grain Inference on Out-of-Distribution Data with Hierarchical Classification

no code implementations9 Sep 2022 Randolph Linderman, Jingyang Zhang, Nathan Inkawhich, Hai Li, Yiran Chen

Furthermore, we diagnose the classifier's performance at each level of the hierarchy, improving the explainability and interpretability of the model's predictions.

Anomaly Detection Out of Distribution (OOD) Detection

FADE: Enabling Federated Adversarial Training on Heterogeneous Resource-Constrained Edge Devices

no code implementations8 Sep 2022 Minxue Tang, Jianyi Zhang, Mingyuan Ma, Louis DiValentin, Aolin Ding, Amin Hassanzadeh, Hai Li, Yiran Chen

However, the high demand for memory capacity and computing power makes large-scale federated adversarial training infeasible on resource-constrained edge devices.

Adversarial Robustness Federated Learning +1

Learning and Compositionality: a Unification Attempt via Connectionist Probabilistic Programming

no code implementations26 Aug 2022 Ximing Qiao, Hai Li

We consider learning and compositionality as the key mechanisms towards simulating human-like intelligence.

Probabilistic Programming

Tunable Hybrid Proposal Networks for the Open World

no code implementations23 Aug 2022 Matthew Inkawhich, Nathan Inkawhich, Hai Li, Yiran Chen

Current state-of-the-art object proposal networks are trained with a closed-world assumption, meaning they learn to only detect objects of the training classes.

Object Detection +1

Vox-Surf: Voxel-based Implicit Surface Representation

1 code implementation21 Aug 2022 Hai Li, Xingrui Yang, Hongjia Zhai, Yuqian Liu, Hujun Bao, Guofeng Zhang

Virtual content creation and interaction play an important role in modern 3D applications such as AR and VR.

NASRec: Weight Sharing Neural Architecture Search for Recommender Systems

2 code implementations14 Jul 2022 Tunhou Zhang, Dehua Cheng, Yuchen He, Zhengxing Chen, Xiaoliang Dai, Liang Xiong, Feng Yan, Hai Li, Yiran Chen, Wei Wen

To overcome the data multi-modality and architecture heterogeneity challenges in the recommendation domain, NASRec establishes a large supernet (i.e., search space) to search the full architectures.

Click-Through Rate Prediction Neural Architecture Search +1

Privacy Leakage of Adversarial Training Models in Federated Learning Systems

1 code implementation21 Feb 2022 Jingyang Zhang, Yiran Chen, Hai Li

Adversarial Training (AT) is crucial for obtaining deep neural networks that are robust to adversarial attacks, yet recent works found that it could also make models more vulnerable to privacy attacks.

Federated Learning

IQDUBBING: Prosody modeling based on discrete self-supervised speech representation for expressive voice conversion

no code implementations2 Jan 2022 Wendong Gan, Bolong Wen, Ying Yan, Haitao Chen, Zhichao Wang, Hongqiang Du, Lei Xie, Kaixuan Guo, Hai Li

Specifically, the prosody vector is first extracted from a pre-trained VQ-Wav2Vec model, where rich prosody information is embedded while most speaker and environment information is removed effectively by quantization.

Quantization Voice Conversion

A Layer-wise Adversarial-aware Quantization Optimization for Improving Robustness

no code implementations23 Oct 2021 Chang Song, Riya Ranjan, Hai Li

After quantization, costs are greatly reduced, and the quantized models are more hardware-friendly with acceptable accuracy loss.

Quantization

Global Vision Transformer Pruning with Hessian-Aware Saliency

1 code implementation CVPR 2023 Huanrui Yang, Hongxu Yin, Maying Shen, Pavlo Molchanov, Hai Li, Jan Kautz

This work challenges the common design philosophy of the Vision Transformer (ViT) model, which uses a uniform dimension across all the stacked blocks in a model stage: we redistribute the parameters both across transformer blocks and between different structures within each block via the first systematic attempt at global structural pruning.

Efficient ViTs

Automated Mobile Attention KPConv Networks via A Wide & Deep Predictor

no code implementations29 Sep 2021 Tunhou Zhang, Mingyuan Ma, Feng Yan, Hai Li, Yiran Chen

MAKPConv employs a depthwise kernel to reduce resource consumption and re-calibrates the contribution of kernel points towards each neighbor point via Neighbor-Kernel attention to improve representation power.

3D Point Cloud Classification Feature Engineering +2

Privacy-Preserving Representation Learning on Graphs: A Mutual Information Perspective

no code implementations3 Jul 2021 Binghui Wang, Jiayi Guo, Ang Li, Yiran Chen, Hai Li

Existing representation learning methods on graphs have achieved state-of-the-art performance on various graph-related tasks such as node classification, link prediction, etc.

Link Prediction Node Classification +2

Soteria: Provable Defense Against Privacy Leakage in Federated Learning From Representation Perspective

1 code implementation CVPR 2021 Jingwei Sun, Ang Li, Binghui Wang, Huanrui Yang, Hai Li, Yiran Chen

The key idea of our defense is learning to perturb data representation such that the quality of the reconstructed data is severely degraded, while FL performance is maintained.

Federated Learning Inference Attack

Enriching Source Style Transfer in Recognition-Synthesis based Non-Parallel Voice Conversion

no code implementations16 Jun 2021 Zhichao Wang, Xinyong Zhou, Fengyu Yang, Tao Li, Hongqiang Du, Lei Xie, Wendong Gan, Haitao Chen, Hai Li

Specifically, prosodic features are used to explicitly model prosody, while a VAE and a reference encoder are used to implicitly model prosody, taking the Mel spectrum and bottleneck feature as input, respectively.

Style Transfer Voice Conversion

Mixture Outlier Exposure: Towards Out-of-Distribution Detection in Fine-grained Environments

1 code implementation7 Jun 2021 Jingyang Zhang, Nathan Inkawhich, Randolph Linderman, Yiran Chen, Hai Li

We then propose Mixture Outlier Exposure (MixOE), which mixes ID data and training outliers to expand the coverage of different OOD granularities, and trains the model such that the prediction confidence linearly decays as the input transitions from ID to OOD.

Medical Image Classification Out-of-Distribution Detection +1
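
The training objective can be sketched straight from the abstract: mix an ID image with a training outlier and decay the target confidence linearly with the mixing coefficient. The Beta-distributed mixing and soft cross-entropy below are a plausible reading, and `model` is a placeholder for any classifier.

```python
# Sketch of a MixOE-style loss as described in the abstract; the exact
# formulation (e.g., the mixing distribution) follows the paper.
import torch
import torch.nn.functional as F

def mixoe_loss(model, x_id, y_id, x_ood, num_classes, alpha=1.0):
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x_id + (1 - lam) * x_ood              # input drifts from ID to OOD
    onehot = F.one_hot(y_id, num_classes).float()
    uniform = torch.full_like(onehot, 1.0 / num_classes)
    target = lam * onehot + (1 - lam) * uniform         # confidence decays linearly
    logp = F.log_softmax(model(x_mix), dim=1)
    return -(target * logp).sum(dim=1).mean()           # soft cross-entropy
```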

Neuromorphic Algorithm-hardware Codesign for Temporal Pattern Learning

no code implementations21 Apr 2021 Haowen Fang, Brady Taylor, Ziru Li, Zaidao Mei, Hai Li, Qinru Qiu

This circuit implementation of the neuron model is simulated to demonstrate its ability to react to temporal spiking patterns with an adaptive threshold.

The Multi-speaker Multi-style Voice Cloning Challenge 2021

no code implementations5 Apr 2021 Qicong Xie, Xiaohai Tian, Guanghou Liu, Kun Song, Lei Xie, Zhiyong Wu, Hai Li, Song Shi, Haizhou Li, Fen Hong, Hui Bu, Xin Xu

The challenge consists of two tracks, namely few-shot track and one-shot track, where the participants are required to clone multiple target voices with 100 and 5 samples respectively.

Benchmarking Voice Cloning

FedCor: Correlation-Based Active Client Selection Strategy for Heterogeneous Federated Learning

no code implementations CVPR 2022 Minxue Tang, Xuefei Ning, Yitu Wang, Jingwei Sun, Yu Wang, Hai Li, Yiran Chen

In this work, we propose FedCor -- an FL framework built on a correlation-based client selection strategy, to boost the convergence rate of FL.

Federated Learning

Can Targeted Adversarial Examples Transfer When the Source and Target Models Have No Label Space Overlap?

no code implementations17 Mar 2021 Nathan Inkawhich, Kevin J Liang, Jingyang Zhang, Huanrui Yang, Hai Li, Yiran Chen

During the online phase of the attack, we then leverage representations of highly related proxy classes from the whitebox distribution to fool the blackbox model into predicting the desired target class.

The Untapped Potential of Off-the-Shelf Convolutional Neural Networks

no code implementations17 Mar 2021 Matthew Inkawhich, Nathan Inkawhich, Eric Davis, Hai Li, Yiran Chen

Over recent years, a myriad of novel convolutional network architectures have been developed to advance state-of-the-art performance on challenging recognition tasks.

Neural Architecture Search

A Case for 3D Integrated System Design for Neuromorphic Computing & AI Applications

no code implementations2 Mar 2021 Eren Kurshan, Hai Li, Mingoo Seok, Yuan Xie

Over the last decade, artificial intelligence has found many application areas in society.

BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization

1 code implementation ICLR 2021 Huanrui Yang, Lin Duan, Yiran Chen, Hai Li

Mixed-precision quantization can potentially achieve the optimal tradeoff between performance and compression rate of deep neural networks, and thus, have been widely investigated.

Neural Architecture Search Quantization

On Provable Backdoor Defense in Collaborative Learning

no code implementations19 Jan 2021 Ximing Qiao, Yuhua Bai, Siping Hu, Ang Li, Yiran Chen, Hai Li

The framework shows that the subset selection process, a deciding factor for subset aggregation methods, can be viewed as a code design problem.

backdoor defense

GraphFL: A Federated Learning Framework for Semi-Supervised Node Classification on Graphs

no code implementations8 Dec 2020 Binghui Wang, Ang Li, Hai Li, Yiran Chen

However, existing FL methods 1) perform poorly when data across clients are non-IID, 2) cannot handle data with new label domains, and 3) cannot leverage unlabeled data, while all these issues naturally happen in real-world graph-based problems.

Federated Learning General Classification +2

Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective

4 code implementations8 Dec 2020 Jingwei Sun, Ang Li, Binghui Wang, Huanrui Yang, Hai Li, Yiran Chen

In this work, we show our key observation that the data representation leakage from gradients is the essential cause of privacy leakage in FL.

Federated Learning

Fast IR Drop Estimation with Machine Learning

no code implementations26 Nov 2020 Zhiyao Xie, Hai Li, Xiaoqing Xu, Jiang Hu, Yiran Chen

IR drop constraint is a fundamental requirement enforced in almost all chip designs.

BIG-bench Machine Learning

Efficient, Direct, and Restricted Black-Box Graph Evasion Attacks to Any-Layer Graph Neural Networks via Influence Function

1 code implementation1 Sep 2020 Binghui Wang, Tianxiang Zhou, Minhua Lin, Pan Zhou, Ang Li, Meng Pang, Hai Li, Yiran Chen

Specifically, we first introduce two influence functions, i.e., feature-label influence and label influence, that are defined on GNNs and label propagation (LP), respectively.

Node Classification

Reinforcement Learning-based Black-Box Evasion Attacks to Link Prediction in Dynamic Graphs

no code implementations1 Sep 2020 Houxiang Fan, Binghui Wang, Pan Zhou, Ang Li, Meng Pang, Zichuan Xu, Cai Fu, Hai Li, Yiran Chen

Link prediction in dynamic graphs (LPDG) is an important research problem that has diverse applications such as online recommendations, studies on disease contagion, organizational studies, etc.

Graph Embedding Link Prediction +2

LotteryFL: Personalized and Communication-Efficient Federated Learning with Lottery Ticket Hypothesis on Non-IID Datasets

1 code implementation7 Aug 2020 Ang Li, Jingwei Sun, Binghui Wang, Lin Duan, Sicheng Li, Yiran Chen, Hai Li

Rather than learning a shared global model in classic federated learning, each client learns a personalized model via LotteryFL; the communication cost can be significantly reduced due to the compact size of lottery networks.

Federated Learning

NASGEM: Neural Architecture Search via Graph Embedding Method

no code implementations8 Jul 2020 Hsin-Pai Cheng, Tunhou Zhang, Yixing Zhang, Shi-Yu Li, Feng Liang, Feng Yan, Meng Li, Vikas Chandra, Hai Li, Yiran Chen

To preserve graph correlation information in encoding, we propose NASGEM which stands for Neural Architecture Search via Graph Embedding Method.

Graph Embedding Graph Similarity +3

Defending against GAN-based Deepfake Attacks via Transformation-aware Adversarial Faces

no code implementations12 Jun 2020 Chaofei Yang, Lei Ding, Yiran Chen, Hai Li

On the one hand, the quality of the synthesized faces is reduced with more visual artifacts such that the synthesized faces are more obviously fake or less convincing to human observers.

Face Swapping

PENNI: Pruned Kernel Sharing for Efficient CNN Inference

1 code implementation ICML 2020 Shi-Yu Li, Edward Hanson, Hai Li, Yiran Chen

Although state-of-the-art (SOTA) CNNs achieve outstanding performance on various tasks, their high computation demand and massive number of parameters make it difficult to deploy these SOTA CNNs onto resource-constrained devices.

Model Compression

Learning Low-rank Deep Neural Networks via Singular Vector Orthogonality Regularization and Singular Value Sparsification

1 code implementation20 Apr 2020 Huanrui Yang, Minxue Tang, Wei Wen, Feng Yan, Daniel Hu, Ang Li, Hai Li, Yiran Chen

In this work, we propose SVD training, the first method to explicitly achieve low-rank DNNs during training without applying SVD on every step.

Neural Predictor for Neural Architecture Search

2 code implementations ECCV 2020 Wei Wen, Hanxiao Liu, Hai Li, Yiran Chen, Gabriel Bender, Pieter-Jan Kindermans

First we train N random architectures to generate N (architecture, validation accuracy) pairs and use them to train a regression model that predicts accuracy based on the architecture.

Neural Architecture Search regression
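
The two-stage recipe in the abstract is simple enough to sketch end to end. The paper's predictor is a graph-based regressor over the architecture encoding; the ridge regression and synthetic "accuracies" below are stand-ins to show the workflow.

```python
# Workflow sketch with synthetic data; the real predictor and architecture
# encoding are described in the paper.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def measure_accuracy(arch_vec):          # placeholder for actually training a network
    return float(arch_vec.mean() + 0.01 * rng.standard_normal())

archs = rng.random((100, 16))            # N = 100 sampled architecture encodings
accs = np.array([measure_accuracy(a) for a in archs])
predictor = Ridge(alpha=1.0).fit(archs, accs)

pool = rng.random((10000, 16))           # scoring candidates is cheap: no training
top = pool[np.argsort(predictor.predict(pool))[::-1][:10]]  # validate only these
```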

AutoShrink: A Topology-aware NAS for Discovering Efficient Neural Architecture

1 code implementation21 Nov 2019 Tunhou Zhang, Hsin-Pai Cheng, Zhenwen Li, Feng Yan, Chengyu Huang, Hai Li, Yiran Chen

Specifically, both ShrinkCNN and ShrinkRNN are crafted within 1.5 GPU hours, which is 7.2x and 6.7x faster than the crafting time of SOTA CNN and RNN models, respectively.

Image Classification Neural Architecture Search

Defending Neural Backdoors via Generative Distribution Modeling

1 code implementation NeurIPS 2019 Ximing Qiao, Yukun Yang, Hai Li

An original trigger used by an attacker to build the backdoored model represents only a point in the space.

Backdoor Attack Image Generation +1

Conditional Transferring Features: Scaling GANs to Thousands of Classes with 30% Less High-quality Data for Training

no code implementations25 Sep 2019 Chunpeng Wu, Wei Wen, Yiran Chen, Hai Li

As such, training our GAN architecture requires far fewer high-quality images, along with a small number of additional low-quality images.

Generative Adversarial Network Image Generation

Exploring Bit-Slice Sparsity in Deep Neural Networks for Efficient ReRAM-Based Deployment

1 code implementation18 Sep 2019 Jingyang Zhang, Huanrui Yang, Fan Chen, Yitu Wang, Hai Li

However, the power hungry analog-to-digital converters (ADCs) prevent the practical deployment of ReRAM-based DNN accelerators on end devices with limited chip area and power budget.

Towards Efficient and Secure Delivery of Data for Deep Learning with Privacy-Preserving

1 code implementation17 Sep 2019 Juncheng Shen, Juzheng Liu, Yiran Chen, Hai Li

When using MoLe for the VGG-16 network on the CIFAR dataset, the computational overhead is only 9% and the data transmission overhead is 5.12%.

Privacy Preserving

DASNet: Dynamic Activation Sparsity for Neural Network Efficiency Improvement

no code implementations13 Sep 2019 Qing Yang, Jiachen Mao, Zuoguan Wang, Hai Li

In addition to conventional compression techniques, e.g., weight pruning and quantization, removing unimportant activations can reduce the amount of data communication and the computation cost.

Quantization

Feedback Learning for Improving the Robustness of Neural Networks

no code implementations12 Sep 2019 Chang Song, Zuoguan Wang, Hai Li

Recent research studies revealed that neural networks are vulnerable to adversarial attacks.

Adversarial Robustness

DeepHoyer: Learning Sparser Neural Network with Differentiable Scale-Invariant Sparsity Measures

1 code implementation ICLR 2020 Huanrui Yang, Wei Wen, Hai Li

Inspired by the Hoyer measure (the ratio between L1 and L2 norms) used in traditional compressed sensing problems, we present DeepHoyer, a set of sparsity-inducing regularizers that are both differentiable almost everywhere and scale-invariant.

Efficient Neural Network
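
Since the abstract pins down the measure (the ratio between L1 and L2 norms), a minimal sketch of a Hoyer-style regularizer is easy to give: DeepHoyer's Hoyer-Square variant squares that ratio, which keeps it differentiable almost everywhere and scale-invariant.

```python
# Minimal sketch of a Hoyer-Square-style regularizer built from the L1/L2 ratio.
import torch

def hoyer_square(w, eps=1e-12):
    return w.abs().sum() ** 2 / (w.pow(2).sum() + eps)

w = torch.randn(100)
assert torch.allclose(hoyer_square(w), hoyer_square(10 * w))  # scale-invariant
# during training: loss = task_loss + weight * sum(hoyer_square(p) for p in model.parameters())
```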

Joint Regularization on Activations and Weights for Efficient Neural Network Pruning

no code implementations19 Jun 2019 Qing Yang, Wei Wen, Zuoguan Wang, Hai Li

With the rapid scaling up of deep neural networks (DNNs), extensive research studies on network model compression such as weight pruning have been performed for improving deployment efficiency.

Efficient Neural Network Model Compression +1

SwiftNet: Using Graph Propagation as Meta-knowledge to Search Highly Representative Neural Architectures

1 code implementation19 Jun 2019 Hsin-Pai Cheng, Tunhou Zhang, Yukun Yang, Feng Yan, Shi-Yu Li, Harris Teague, Hai Li, Yiran Chen

Designing neural architectures for edge devices is subject to constraints of accuracy, inference latency, and computational cost.

Neural Architecture Search

Snooping Attacks on Deep Reinforcement Learning

1 code implementation28 May 2019 Matthew Inkawhich, Yiran Chen, Hai Li

In these snooping threat models, the adversary does not have the ability to interact with the target agent's environment, and can only eavesdrop on the action and reward signals being exchanged between agent and environment.

Reinforcement Learning (RL)

Integral Pruning on Activations and Weights for Efficient Neural Networks

no code implementations ICLR 2019 Qing Yang, Wei Wen, Zuoguan Wang, Yiran Chen, Hai Li

With the rapid scaling up of deep neural networks (DNNs), extensive research studies on network model compression such as weight pruning have been performed for efficient deployment.

Model Compression

HyPar: Towards Hybrid Parallelism for Deep Learning Accelerator Array

no code implementations7 Jan 2019 Linghao Song, Jiachen Mao, Youwei Zhuo, Xuehai Qian, Hai Li, Yiran Chen

In this paper, inspired by recent work in machine learning systems, we propose a solution HyPar to determine layer-wise parallelism for deep neural network training with an array of DNN accelerators.

Adversarial Attacks for Optical Flow-Based Action Recognition Classifiers

no code implementations ICLR 2019 Nathan Inkawhich, Matthew Inkawhich, Yiran Chen, Hai Li

The success of deep learning research has catapulted deep models into production systems that our society is becoming increasingly dependent on, especially in the image and video domains.

Action Recognition Adversarial Attack +3

LEASGD: an Efficient and Privacy-Preserving Decentralized Algorithm for Distributed Learning

no code implementations27 Nov 2018 Hsin-Pai Cheng, Patrick Yu, Haojing Hu, Feng Yan, Shi-Yu Li, Hai Li, Yiran Chen

Distributed learning systems have enabled training large-scale models over large amounts of data in significantly shorter time.

Privacy Preserving

Differentiable Fine-grained Quantization for Deep Neural Network Compression

1 code implementation NIPS Workshop CDNNRIA 2018 Hsin-Pai Cheng, Yuanjun Huang, Xuyang Guo, Yifei HUANG, Feng Yan, Hai Li, Yiran Chen

Thus judiciously selecting different precision for different layers/structures can potentially produce more efficient models compared to traditional quantization methods by striking a better balance between accuracy and compression rate.

Neural Network Compression Quantization
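
One hypothetical way to make the per-layer precision choice differentiable is to keep logits over candidate bit-widths and train on a softmax-weighted mixture of the layer's quantized variants, then commit to the argmax. The sketch below illustrates that idea; it is not the paper's exact formulation.

```python
# Hypothetical sketch of differentiable per-layer bit-width selection; the
# paper's formulation may differ. A straight-through estimator would normally
# handle gradients through round() for the weights themselves.
import torch

def quantize(w, bits):
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax + 1e-12
    return torch.round(w / scale).clamp(-qmax - 1, qmax) * scale

bit_choices = [2, 4, 8]
alpha = torch.zeros(len(bit_choices), requires_grad=True)   # per-layer mixing logits

def soft_quantized(w):
    probs = torch.softmax(alpha, dim=0)                     # trained with the task loss
    return sum(p * quantize(w, b) for p, b in zip(probs, bit_choices))

# after training, commit: bits = bit_choices[int(alpha.argmax())]
```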

Towards Efficient and Secure Delivery of Data for Training and Inference with Privacy-Preserving

1 code implementation20 Sep 2018 Juncheng Shen, Juzheng Liu, Yiran Chen, Hai Li

When using MoLe for the VGG-16 network on the CIFAR dataset, the computational overhead is only 9% and the data transmission overhead is 5.12%.

Privacy Preserving

DPatch: An Adversarial Patch Attack on Object Detectors

1 code implementation5 Jun 2018 Xin Liu, Huanrui Yang, Ziwei Liu, Linghao Song, Hai Li, Yiran Chen

Successful realization of DPatch also illustrates the intrinsic vulnerability of the modern detector architectures to such patch-based adversarial attacks.

Learning Intrinsic Sparse Structures within Long Short-Term Memory

no code implementations ICLR 2018 Wei Wen, Yuxiong He, Samyam Rajbhandari, Minjia Zhang, Wenhan Wang, Fang Liu, Bin Hu, Yiran Chen, Hai Li

This work aims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states and outputs.

Language Modelling Model Compression +1

GraphR: Accelerating Graph Processing Using ReRAM

no code implementations21 Aug 2017 Linghao Song, Youwei Zhuo, Xuehai Qian, Hai Li, Yiran Chen

GraphR gains a speedup of 1.16x to 4.12x, and is 3.67x to 10.96x more energy efficient compared to a PIM-based architecture.

Distributed, Parallel, and Cluster Computing Hardware Architecture

MAT: A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks

no code implementations27 May 2017 Chang Song, Hsin-Pai Cheng, Huanrui Yang, Sicheng Li, Chunpeng Wu, Qing Wu, Hai Li, Yiran Chen

Our experiments show that different adversarial strengths, i.e., perturbation levels of adversarial examples, have different working zones to resist the attack.

Coordinating Filters for Faster Deep Neural Networks

5 code implementations ICCV 2017 Wei Wen, Cong Xu, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li

Moreover, Force Regularization better initializes the low-rank DNNs such that the fine-tuning can converge faster toward higher accuracy.

Generative Poisoning Attack Method Against Neural Networks

no code implementations3 Mar 2017 Chaofei Yang, Qing Wu, Hai Li, Yiran Chen

A countermeasure is also designed to detect such poisoning attack methods by checking the loss of the target model.

Group Scissor: Scaling Neuromorphic Computing Design to Large Neural Networks

no code implementations11 Feb 2017 Yandan Wang, Wei Wen, Beiye Liu, Donald Chiarulli, Hai Li

Following rank clipping, group connection deletion further reduces the routing area of LeNet and ConvNet to 8.1% and 52.06%, respectively.

Classification Accuracy Improvement for Neuromorphic Computing Systems with One-level Precision Synapses

no code implementations7 Jan 2017 Yandan Wang, Wei Wen, Linghao Song, Hai Li

Brain inspired neuromorphic computing has demonstrated remarkable advantages over traditional von Neumann architecture for its high energy efficiency and parallel data processing.

General Classification Image Classification +1

Learning Structured Sparsity in Deep Neural Networks

3 code implementations NeurIPS 2016 Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li

SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNNs evaluation.
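
SSL's structured sparsity can be sketched as a group Lasso penalty with one group per structure, e.g., per convolution filter, so that entire filters are driven to zero rather than scattered individual weights; the weighting below is illustrative.

```python
# Sketch of a group Lasso penalty over conv filters in the spirit of SSL;
# SSL also defines groups for channels, filter shapes, and layer depth.
import torch
import torch.nn as nn

def group_lasso_filters(conv: nn.Conv2d):
    # one group per output filter: L2 norm of each (in, kh, kw) slice, summed
    return conv.weight.flatten(1).norm(dim=1).sum()

conv = nn.Conv2d(16, 32, kernel_size=3)
penalty = 1e-4 * group_lasso_filters(conv)   # add this to the task loss when training
```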

Faster CNNs with Direct Sparse Convolutions and Guided Pruning

1 code implementation4 Aug 2016 Jongsoo Park, Sheng Li, Wei Wen, Ping Tak Peter Tang, Hai Li, Yiran Chen, Pradeep Dubey

Pruning CNNs in a way that increases inference speed often imposes specific sparsity structures, thus limiting the achievable sparsity levels.
