Search Results for author: Yiran Chen

Found 116 papers, 44 papers with code

CDEvalSumm: An Empirical Study of Cross-Dataset Evaluation for Neural Summarization Systems

2 code implementations Findings of the Association for Computational Linguistics 2020 Yiran Chen, PengFei Liu, Ming Zhong, Zi-Yi Dou, Danqing Wang, Xipeng Qiu, Xuanjing Huang

In this paper, we perform an in-depth analysis of characteristics of different datasets and investigate the performance of different summarization models under a cross-dataset setting, in which a summarizer trained on one corpus will be evaluated on a range of out-of-domain corpora.

Text Summarization

Coordinating Filters for Faster Deep Neural Networks

5 code implementations ICCV 2017 Wei Wen, Cong Xu, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li

Moreover, Force Regularization better initializes the low-rank DNNs such that the fine-tuning can converge faster toward higher accuracy.

Learning Structured Sparsity in Deep Neural Networks

3 code implementations NeurIPS 2016 Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li

SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN's evaluation.
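
The group-wise regularization behind this structured sparsity can be illustrated with a small sketch: a plain group Lasso over the rows of a weight matrix. Group shapes and function names here are illustrative, not the paper's exact formulation.

```python
import numpy as np

def group_lasso_penalty(weights, axis=1):
    """Group-lasso regularizer: sum of L2 norms over groups.

    Treating each row of `weights` as one group (e.g. one output
    filter), the penalty drives whole groups toward exactly zero,
    which is what makes the resulting sparsity hardware-friendly.
    """
    return np.sqrt((weights ** 2).sum(axis=axis)).sum()

def group_lasso_grad(weights, axis=1, eps=1e-12):
    """(Sub)gradient of the penalty: each group is scaled by the
    inverse of its own L2 norm; zeroed groups get zero gradient."""
    norms = np.sqrt((weights ** 2).sum(axis=axis, keepdims=True))
    return weights / np.maximum(norms, eps)
```

Adding `lambda * group_lasso_grad(W)` to the ordinary weight gradient during training is the usual way such a penalty is applied.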

Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective

4 code implementations 8 Dec 2020 Jingwei Sun, Ang Li, Binghui Wang, Huanrui Yang, Hai Li, Yiran Chen

In this work, we show our key observation that the data representation leakage from gradients is the essential cause of privacy leakage in FL.

Federated Learning

Faster CNNs with Direct Sparse Convolutions and Guided Pruning

1 code implementation 4 Aug 2016 Jongsoo Park, Sheng Li, Wei Wen, Ping Tak Peter Tang, Hai Li, Yiran Chen, Pradeep Dubey

Pruning CNNs in a way that increases inference speed often imposes specific sparsity structures, thus limiting the achievable sparsity levels.

Towards Building the Federated GPT: Federated Instruction Tuning

1 code implementation 9 May 2023 Jianyi Zhang, Saeed Vahidian, Martin Kuo, Chunyuan Li, Ruiyi Zhang, Tong Yu, Yufan Zhou, Guoyin Wang, Yiran Chen

This repository offers a foundational framework for exploring federated fine-tuning of LLMs using heterogeneous instructions across diverse categories.

Federated Learning

Advancing Real-time Pandemic Forecasting Using Large Language Models: A COVID-19 Case Study

2 code implementations 10 Apr 2024 Hongru Du, Jianan Zhao, Yang Zhao, Shaochong Xu, Xihong Lin, Yiran Chen, Lauren M. Gardner, Hao Yang

Forecasting the short-term spread of an ongoing disease outbreak is a formidable challenge due to the complexity of contributing factors, some of which can be characterized through interlinked, multi-modality variables such as epidemiological time series data, viral biology, population demographics, and the intersection of public policy and human behavior.

Representation Learning Time Series

DPatch: An Adversarial Patch Attack on Object Detectors

1 code implementation 5 Jun 2018 Xin Liu, Huanrui Yang, Ziwei Liu, Linghao Song, Hai Li, Yiran Chen

Successful realization of DPatch also illustrates the intrinsic vulnerability of the modern detector architectures to such patch-based adversarial attacks.

Object

LotteryFL: Personalized and Communication-Efficient Federated Learning with Lottery Ticket Hypothesis on Non-IID Datasets

1 code implementation 7 Aug 2020 Ang Li, Jingwei Sun, Binghui Wang, Lin Duan, Sicheng Li, Yiran Chen, Hai Li

Rather than learning a shared global model in classic federated learning, each client learns a personalized model via LotteryFL; the communication cost can be significantly reduced due to the compact size of lottery networks.

Federated Learning
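
The "lottery network" compactness rests on magnitude pruning: keep only the largest-magnitude weights and communicate the resulting sparse subnetwork. A minimal sketch, where the keep rule and function name are assumptions of this illustration:

```python
import numpy as np

def prune_mask(weights, prune_ratio):
    """Binary lottery-ticket mask: zero out the `prune_ratio`
    fraction of smallest-magnitude weights.

    Only the surviving subnetwork (plus the mask) then needs to be
    trained and communicated. Ties at the threshold are pruned.
    """
    flat = np.abs(weights).ravel()
    k = int(round(flat.size * prune_ratio))
    if k == 0:
        return np.ones_like(weights)
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return (np.abs(weights) > threshold).astype(weights.dtype)
```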

Snooping Attacks on Deep Reinforcement Learning

1 code implementation 28 May 2019 Matthew Inkawhich, Yiran Chen, Hai Li

In these snooping threat models, the adversary does not have the ability to interact with the target agent's environment, and can only eavesdrop on the action and reward signals being exchanged between agent and environment.

Reinforcement Learning (RL)

Soteria: Provable Defense Against Privacy Leakage in Federated Learning From Representation Perspective

1 code implementation CVPR 2021 Jingwei Sun, Ang Li, Binghui Wang, Huanrui Yang, Hai Li, Yiran Chen

The key idea of our defense is learning to perturb data representation such that the quality of the reconstructed data is severely degraded, while FL performance is maintained.

Federated Learning Inference Attack

Trained Rank Pruning for Efficient Deep Neural Networks

1 code implementation 6 Dec 2018 Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong

We propose Trained Rank Pruning (TRP), which iterates low rank approximation and training.

Quantization
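
The alternation TRP describes, gradient steps interleaved with a low-rank projection, can be sketched with a truncated SVD. This is a generic illustration of the projection step, not the paper's exact schedule:

```python
import numpy as np

def truncate_rank(W, rank):
    """Project W onto its best rank-`rank` approximation via SVD.

    Applying this periodically during training pushes the network
    toward an inherently low-rank solution, so the final
    decomposition loses almost nothing and needs no fine-tuning.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s[rank:] = 0.0  # drop all but the top `rank` singular values
    return (U * s) @ Vt
```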

Trained Rank Pruning for Efficient Deep Neural Networks

1 code implementation 9 Oct 2019 Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Wenrui Dai, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong

To accelerate DNNs inference, low-rank approximation has been widely adopted because of its solid theoretical rationale and efficient implementations.

TRP: Trained Rank Pruning for Efficient Deep Neural Networks

1 code implementation 30 Apr 2020 Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong

The TRP-trained network inherently has a low-rank structure, and is approximated with negligible performance loss, thus eliminating the fine-tuning process after low-rank decomposition.

Neural Predictor for Neural Architecture Search

2 code implementations ECCV 2020 Wei Wen, Hanxiao Liu, Hai Li, Yiran Chen, Gabriel Bender, Pieter-Jan Kindermans

First we train N random architectures to generate N (architecture, validation accuracy) pairs and use them to train a regression model that predicts accuracy based on the architecture.

Neural Architecture Search regression
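
The two-stage recipe above works with any regressor; below, a least-squares model stands in for the paper's graph-based predictor. The feature encoding and model choice are simplifying assumptions of this sketch:

```python
import numpy as np

def fit_predictor(arch_features, accuracies):
    """Fit a linear accuracy predictor on (architecture, accuracy)
    pairs gathered by actually training N random architectures."""
    X = np.hstack([arch_features, np.ones((len(arch_features), 1))])
    coef, *_ = np.linalg.lstsq(X, accuracies, rcond=None)
    return coef

def predict(coef, arch_features):
    """Predict validation accuracy for unseen architectures, so a
    large candidate pool can be ranked without training any of it."""
    X = np.hstack([arch_features, np.ones((len(arch_features), 1))])
    return X @ coef
```

In the second stage one simply trains the top-ranked candidates for real, which is the sample-efficiency win of predictor-based NAS.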

Efficient Dataset Distillation via Minimax Diffusion

1 code implementation 27 Nov 2023 Jianyang Gu, Saeed Vahidian, Vyacheslav Kungurtsev, Haonan Wang, Wei Jiang, Yang You, Yiran Chen

Observing that key factors for constructing an effective surrogate dataset are representativeness and diversity, we design additional minimax criteria in the generative training to enhance these facets for the generated images of diffusion models.

BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization

1 code implementation ICLR 2021 Huanrui Yang, Lin Duan, Yiran Chen, Hai Li

Mixed-precision quantization can potentially achieve the optimal tradeoff between performance and compression rate of deep neural networks, and thus has been widely investigated.

Neural Architecture Search Quantization

NASRec: Weight Sharing Neural Architecture Search for Recommender Systems

2 code implementations 14 Jul 2022 Tunhou Zhang, Dehua Cheng, Yuchen He, Zhengxing Chen, Xiaoliang Dai, Liang Xiong, Feng Yan, Hai Li, Yiran Chen, Wei Wen

To overcome the data multi-modality and architecture heterogeneity challenges in the recommendation domain, NASRec establishes a large supernet (i.e., search space) to search the full architectures.

Click-Through Rate Prediction Neural Architecture Search +1

Enhancing Scientific Papers Summarization with Citation Graph

1 code implementation 7 Apr 2021 Chenxin An, Ming Zhong, Yiran Chen, Danqing Wang, Xipeng Qiu, Xuanjing Huang

Previous work on text summarization in the scientific domain has mainly focused on the content of the input document, seldom considering its citation network.

Text Summarization

Mixture Outlier Exposure: Towards Out-of-Distribution Detection in Fine-grained Environments

1 code implementation 7 Jun 2021 Jingyang Zhang, Nathan Inkawhich, Randolph Linderman, Yiran Chen, Hai Li

We then propose Mixture Outlier Exposure (MixOE), which mixes ID data and training outliers to expand the coverage of different OOD granularities, and trains the model such that the prediction confidence linearly decays as the input transitions from ID to OOD.

Medical Image Classification Out-of-Distribution Detection +1
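
The linear confidence decay can be reproduced with mixup-style interpolation between an ID batch and an outlier batch. A hedged sketch; the batch shapes, Beta mixing prior, and function name are assumptions of this illustration:

```python
import numpy as np

def mixoe_batch(x_id, y_id, x_ood, num_classes, rng, alpha=1.0):
    """Mix ID samples with outliers; the target is
    lam * one_hot + (1 - lam) * uniform.

    As lam goes from 1 (pure ID) to 0 (pure outlier), the target
    confidence decays linearly toward the uniform distribution,
    which is the behavior the classifier is trained to reproduce.
    """
    lam = rng.beta(alpha, alpha)
    x_mix = lam * x_id + (1.0 - lam) * x_ood
    one_hot = np.eye(num_classes)[y_id]
    uniform = np.full((len(y_id), num_classes), 1.0 / num_classes)
    y_mix = lam * one_hot + (1.0 - lam) * uniform
    return x_mix, y_mix, lam
```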

Learning Low-rank Deep Neural Networks via Singular Vector Orthogonality Regularization and Singular Value Sparsification

1 code implementation 20 Apr 2020 Huanrui Yang, Minxue Tang, Wei Wen, Feng Yan, Daniel Hu, Ang Li, Hai Li, Yiran Chen

In this work, we propose SVD training, the first method to explicitly achieve low-rank DNNs during training without applying SVD on every step.

PANDA: Architecture-Level Power Evaluation by Unifying Analytical and Machine Learning Solutions

1 code implementation 14 Dec 2023 Qijun Zhang, Shiyu Li, Guanglei Zhou, Jingyu Pan, Chen-Chia Chang, Yiran Chen, Zhiyao Xie

Based on the formulation, we propose PANDA, an innovative architecture-level solution that combines the advantages of analytical and ML power models.

Privacy Leakage of Adversarial Training Models in Federated Learning Systems

1 code implementation 21 Feb 2022 Jingyang Zhang, Yiran Chen, Hai Li

Adversarial Training (AT) is crucial for obtaining deep neural networks that are robust to adversarial attacks, yet recent works found that it could also make models more vulnerable to privacy attacks.

Federated Learning

Are Factuality Checkers Reliable? Adversarial Meta-evaluation of Factuality in Summarization

1 code implementation Findings (EMNLP) 2021 Yiran Chen, PengFei Liu, Xipeng Qiu

In this paper, we present an adversarial meta-evaluation methodology that allows us to (i) diagnose the fine-grained strengths and weaknesses of 6 existing top-performing metrics over 24 diagnostic test datasets, (ii) search for directions for further improvement by data augmentation.

Data Augmentation

PENNI: Pruned Kernel Sharing for Efficient CNN Inference

1 code implementation ICML 2020 Shi-Yu Li, Edward Hanson, Hai Li, Yiran Chen

Although state-of-the-art (SOTA) CNNs achieve outstanding performance on various tasks, their high computation demand and massive number of parameters make it difficult to deploy these SOTA CNNs onto resource-constrained devices.

Model Compression

PIDS: Joint Point Interaction-Dimension Search for 3D Point Cloud

1 code implementation 28 Nov 2022 Tunhou Zhang, Mingyuan Ma, Feng Yan, Hai Li, Yiran Chen

In this work, we establish PIDS, a novel paradigm to jointly explore point interactions and point dimensions to serve semantic segmentation on point cloud data.

Neural Architecture Search Robust 3D Semantic Segmentation +1

Towards Efficient and Secure Delivery of Data for Training and Inference with Privacy-Preserving

1 code implementation 20 Sep 2018 Juncheng Shen, Juzheng Liu, Yiran Chen, Hai Li

When using MoLe for the VGG-16 network on the CIFAR dataset, the computational overhead is only 9% and the data transmission overhead is 5.12%.

Privacy Preserving

Towards Efficient and Secure Delivery of Data for Deep Learning with Privacy-Preserving

1 code implementation 17 Sep 2019 Juncheng Shen, Juzheng Liu, Yiran Chen, Hai Li

When using MoLe for the VGG-16 network on the CIFAR dataset, the computational overhead is only 9% and the data transmission overhead is 5.12%.

Privacy Preserving

AutoShrink: A Topology-aware NAS for Discovering Efficient Neural Architecture

1 code implementation 21 Nov 2019 Tunhou Zhang, Hsin-Pai Cheng, Zhenwen Li, Feng Yan, Chengyu Huang, Hai Li, Yiran Chen

Specifically, both ShrinkCNN and ShrinkRNN are crafted within 1.5 GPU hours, which is 7.2x and 6.7x faster than the crafting time of SOTA CNN and RNN models, respectively.

Image Classification Neural Architecture Search

Group Distributionally Robust Dataset Distillation with Risk Minimization

1 code implementation 7 Feb 2024 Saeed Vahidian, Mingyu Wang, Jianyang Gu, Vyacheslav Kungurtsev, Wei Jiang, Yiran Chen

However, targeting the training dataset must be thought of as auxiliary in the same sense that the training set is an approximate substitute for the population distribution, and the latter is the data of interest.

Federated Learning Neural Architecture Search +1

Differentiable Fine-grained Quantization for Deep Neural Network Compression

1 code implementation NIPS Workshop CDNNRIA 2018 Hsin-Pai Cheng, Yuanjun Huang, Xuyang Guo, Yifei HUANG, Feng Yan, Hai Li, Yiran Chen

Thus judiciously selecting different precision for different layers/structures can potentially produce more efficient models compared to traditional quantization methods by striking a better balance between accuracy and compression rate.

Neural Network Compression Quantization
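
The precision-versus-compression trade-off that motivates per-layer bit selection is visible even with a plain uniform quantizer. This is illustrative only; the paper's differentiable fine-grained quantizer is more involved:

```python
import numpy as np

def uniform_quantize(x, bits):
    """Uniformly quantize x to 2**bits levels over its own range.

    Sweeping `bits` per layer is the knob mixed-precision methods
    tune: fewer bits mean higher compression but larger error, and
    layers differ in how much error they can tolerate.
    """
    lo, hi = float(x.min()), float(x.max())
    if hi == lo:
        return x.copy()  # a constant tensor quantizes to itself
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels
    return np.round((x - lo) / scale) * scale + lo
```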

Efficient, Direct, and Restricted Black-Box Graph Evasion Attacks to Any-Layer Graph Neural Networks via Influence Function

1 code implementation 1 Sep 2020 Binghui Wang, Tianxiang Zhou, Minhua Lin, Pan Zhou, Ang Li, Meng Pang, Hai Li, Yiran Chen

Specifically, we first introduce two influence functions, i.e., feature-label influence and label influence, that are defined on GNNs and label propagation (LP), respectively.

Node Classification

Unlocking the Potential of Federated Learning: The Symphony of Dataset Distillation via Deep Generative Latents

1 code implementation 3 Dec 2023 Yuqi Jia, Saeed Vahidian, Jingwei Sun, Jianyi Zhang, Vyacheslav Kungurtsev, Neil Zhenqiang Gong, Yiran Chen

This process allows local devices to train smaller surrogate models while enabling the training of a larger global model on the server, effectively minimizing resource utilization.

Federated Learning

SwiftNet: Using Graph Propagation as Meta-knowledge to Search Highly Representative Neural Architectures

1 code implementation 19 Jun 2019 Hsin-Pai Cheng, Tunhou Zhang, Yukun Yang, Feng Yan, Shi-Yu Li, Harris Teague, Hai Li, Yiran Chen

Designing neural architectures for edge devices is subject to constraints of accuracy, inference latency, and computational cost.

Neural Architecture Search

SIO: Synthetic In-Distribution Data Benefits Out-of-Distribution Detection

1 code implementation 25 Mar 2023 Jingyang Zhang, Nathan Inkawhich, Randolph Linderman, Ryan Luley, Yiran Chen, Hai Li

Building up reliable Out-of-Distribution (OOD) detectors is challenging, often requiring the use of OOD data during training.

Out-of-Distribution Detection

SD-NAE: Generating Natural Adversarial Examples with Stable Diffusion

1 code implementation 21 Nov 2023 Yueqian Lin, Jingyang Zhang, Yiran Chen, Hai Li

Natural Adversarial Examples (NAEs), images arising naturally from the environment and capable of deceiving classifiers, are instrumental in robustly evaluating and identifying vulnerabilities in trained models.

MAT: A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks

no code implementations 27 May 2017 Chang Song, Hsin-Pai Cheng, Huanrui Yang, Sicheng Li, Chunpeng Wu, Qing Wu, Hai Li, Yiran Chen

Our experiments show that different adversarial strengths, i.e., perturbation levels of adversarial examples, have different working zones to resist the attack.

Learning Intrinsic Sparse Structures within Long Short-Term Memory

no code implementations ICLR 2018 Wei Wen, Yuxiong He, Samyam Rajbhandari, Minjia Zhang, Wenhan Wang, Fang Liu, Bin Hu, Yiran Chen, Hai Li

This work aims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states and outputs.

Language Modelling Model Compression +1

Spintronics based Stochastic Computing for Efficient Bayesian Inference System

no code implementations 3 Nov 2017 Xiaotao Jia, Jianlei Yang, Zhaohao Wang, Yiran Chen, Hai Li, Weisheng Zhao

Bayesian inference is an effective approach for solving statistical learning problems especially with uncertainty and incompleteness.

Bayesian Inference

Generative Poisoning Attack Method Against Neural Networks

no code implementations 3 Mar 2017 Chaofei Yang, Qing Wu, Hai Li, Yiran Chen

A countermeasure is also designed to detect such poisoning attack methods by checking the loss of the target model.

2PFPCE: Two-Phase Filter Pruning Based on Conditional Entropy

no code implementations 6 Sep 2018 Chuhan Min, Aosen Wang, Yiran Chen, Wenyao Xu, Xin Chen

To overcome this challenge, we propose a novel filter-pruning framework, two-phase filter pruning based on conditional entropy, namely 2PFPCE, to compress the CNN models and reduce the inference time with marginal performance degradation.

Edge-computing Neural Network Compression +1

Generalized Inverse Optimization through Online Learning

no code implementations NeurIPS 2018 Chaosheng Dong, Yiran Chen, Bo Zeng

Inverse optimization is a powerful paradigm for learning preferences and restrictions that explain the behavior of a decision maker, based on a set of external signals and the corresponding decision pairs.

LEASGD: an Efficient and Privacy-Preserving Decentralized Algorithm for Distributed Learning

no code implementations 27 Nov 2018 Hsin-Pai Cheng, Patrick Yu, Haojing Hu, Feng Yan, Shi-Yu Li, Hai Li, Yiran Chen

Distributed learning systems have enabled training large-scale models over large amounts of data in significantly shorter time.

Privacy Preserving

Adversarial Attacks for Optical Flow-Based Action Recognition Classifiers

no code implementations ICLR 2019 Nathan Inkawhich, Matthew Inkawhich, Yiran Chen, Hai Li

The success of deep learning research has catapulted deep models into production systems that our society is becoming increasingly dependent on, especially in the image and video domains.

Action Recognition Adversarial Attack +3

Integral Pruning on Activations and Weights for Efficient Neural Networks

no code implementations ICLR 2019 Qing Yang, Wei Wen, Zuoguan Wang, Yiran Chen, Hai Li

With the rapidly scaling up of deep neural networks (DNNs), extensive research studies on network model compression such as weight pruning have been performed for efficient deployment.

Model Compression

HyPar: Towards Hybrid Parallelism for Deep Learning Accelerator Array

no code implementations 7 Jan 2019 Linghao Song, Jiachen Mao, Youwei Zhuo, Xuehai Qian, Hai Li, Yiran Chen

In this paper, inspired by recent work in machine learning systems, we propose a solution HyPar to determine layer-wise parallelism for deep neural network training with an array of DNN accelerators.

Low Power Inference for On-Device Visual Recognition with a Quantization-Friendly Solution

no code implementations 12 Mar 2019 Chen Feng, Tao Sheng, Zhiyu Liang, Shaojie Zhuo, Xiaopeng Zhang, Liang Shen, Matthew Ardi, Alexander C. Berg, Yiran Chen, Bo Chen, Kent Gauen, Yung-Hsiang Lu

The IEEE Low-Power Image Recognition Challenge (LPIRC) is an annual competition started in 2015 that encourages joint hardware and software solutions for computer vision systems with low latency and power.

Quantization

Accelerating CNN Training by Pruning Activation Gradients

no code implementations ECCV 2020 Xucheng Ye, Pengcheng Dai, Junyu Luo, Xin Guo, Yingjie Qi, Jianlei Yang, Yiran Chen

Sparsification is an efficient approach to accelerating CNN inference, but it is challenging to take advantage of sparsity in the training procedure because the involved gradients change dynamically.

DeepObfuscator: Obfuscating Intermediate Representations with Privacy-Preserving Adversarial Learning on Smartphones

no code implementations 9 Sep 2019 Ang Li, Jiayi Guo, Huanrui Yang, Flora D. Salim, Yiran Chen

Our experiments on CelebA and LFW datasets show that the quality of the reconstructed images from the obfuscated features of the raw image is dramatically decreased from 0.9458 to 0.3175 in terms of multi-scale structural similarity.

General Classification Image Classification +3

GraphR: Accelerating Graph Processing Using ReRAM

no code implementations 21 Aug 2017 Linghao Song, Youwei Zhuo, Xuehai Qian, Hai Li, Yiran Chen

GRAPHR gains a speedup of 1.16x to 4.12x, and is 3.67x to 10.96x more energy-efficient compared to a PIM-based architecture.

Distributed, Parallel, and Cluster Computing Hardware Architecture

Conditional Transferring Features: Scaling GANs to Thousands of Classes with 30% Less High-quality Data for Training

no code implementations 25 Sep 2019 Chunpeng Wu, Wei Wen, Yiran Chen, Hai Li

As such, training our GAN architecture requires much fewer high-quality images with a small number of additional low-quality images.

Generative Adversarial Network Image Generation

Transferable Perturbations of Deep Feature Distributions

no code implementations ICLR 2020 Nathan Inkawhich, Kevin J Liang, Lawrence Carin, Yiran Chen

Almost all current adversarial attacks of CNN classifiers rely on information derived from the output layer of the network.

Adversarial Attack

MVStylizer: An Efficient Edge-Assisted Video Photorealistic Style Transfer System for Mobile Phones

no code implementations 24 May 2020 Ang Li, Chunpeng Wu, Yiran Chen, Bin Ni

Instead of performing stylization frame by frame, only key frames in the original video are processed by a pre-trained deep neural network (DNN) on edge servers, while the rest of stylized intermediate frames are generated by our designed optical-flow-based frame interpolation algorithm on mobile phones.

Federated Learning Optical Flow Estimation +2

TIPRDC: Task-Independent Privacy-Respecting Data Crowdsourcing Framework for Deep Learning with Anonymized Intermediate Representations

no code implementations 23 May 2020 Ang Li, Yixiao Duan, Huanrui Yang, Yiran Chen, Jianlei Yang

The goal of this framework is to learn a feature extractor that can hide the privacy information from the intermediate representations; while maximally retaining the original information embedded in the raw data for the data collector to accomplish unknown learning tasks.

Exploiting Spin-Orbit Torque Devices as Reconfigurable Logic for Circuit Obfuscation

no code implementations 8 Feb 2018 Jianlei Yang, Xueyan Wang, Qiang Zhou, Zhaohao Wang, Hai Li, Yiran Chen, Weisheng Zhao

Circuit obfuscation is a frequently used approach to conceal logic functionalities in order to prevent reverse engineering attacks on fabricated chips.

Emerging Technologies Cryptography and Security

Defending against GAN-based Deepfake Attacks via Transformation-aware Adversarial Faces

no code implementations 12 Jun 2020 Chaofei Yang, Lei Ding, Yiran Chen, Hai Li

On the one hand, the quality of the synthesized faces is reduced with more visual artifacts such that the synthesized faces are more obviously fake or less convincing to human observers.

Face Swapping

NASGEM: Neural Architecture Search via Graph Embedding Method

no code implementations 8 Jul 2020 Hsin-Pai Cheng, Tunhou Zhang, Yixing Zhang, Shi-Yu Li, Feng Liang, Feng Yan, Meng Li, Vikas Chandra, Hai Li, Yiran Chen

To preserve graph correlation information in encoding, we propose NASGEM which stands for Neural Architecture Search via Graph Embedding Method.

Graph Embedding Graph Similarity +3

Reinforcement Learning-based Black-Box Evasion Attacks to Link Prediction in Dynamic Graphs

no code implementations 1 Sep 2020 Houxiang Fan, Binghui Wang, Pan Zhou, Ang Li, Meng Pang, Zichuan Xu, Cai Fu, Hai Li, Yiran Chen

Link prediction in dynamic graphs (LPDG) is an important research problem that has diverse applications such as online recommendations, studies on disease contagion, organizational studies, etc.

Graph Embedding Link Prediction +2

Net2: A Graph Attention Network Method Customized for Pre-Placement Net Length Estimation

no code implementations 27 Nov 2020 Zhiyao Xie, Rongjian Liang, Xiaoqing Xu, Jiang Hu, Yixiao Duan, Yiran Chen

Net length is a key proxy metric for optimizing timing and power across various stages of a standard digital design flow.

Graph Attention

FIST: A Feature-Importance Sampling and Tree-Based Method for Automatic Design Flow Parameter Tuning

no code implementations 26 Nov 2020 Zhiyao Xie, Guan-Qi Fang, Yu-Hung Huang, Haoxing Ren, Yanqing Zhang, Brucek Khailany, Shao-Yun Fang, Jiang Hu, Yiran Chen, Erick Carvajal Barboza

Experimental results on benchmark circuits show that our approach achieves 25% improvement in design quality or 37% reduction in sampling cost compared to random forest method, which is the kernel of a highly cited previous work.

BIG-bench Machine Learning Clustering +1

Fast IR Drop Estimation with Machine Learning

no code implementations 26 Nov 2020 Zhiyao Xie, Hai Li, Xiaoqing Xu, Jiang Hu, Yiran Chen

IR drop constraint is a fundamental requirement enforced in almost all chip designs.

BIG-bench Machine Learning

Automatic Routability Predictor Development Using Neural Architecture Search

no code implementations 3 Dec 2020 Chen-Chia Chang, Jingyu Pan, Tunhou Zhang, Zhiyao Xie, Jiang Hu, Weiyi Qi, Chun-Wei Lin, Rongjian Liang, Joydeep Mitra, Elias Fallon, Yiran Chen

The rise of machine learning technology inspires a boom of its applications in electronic design automation (EDA) and helps improve the degree of automation in chip designs.

BIG-bench Machine Learning Neural Architecture Search

GraphFL: A Federated Learning Framework for Semi-Supervised Node Classification on Graphs

no code implementations 8 Dec 2020 Binghui Wang, Ang Li, Hai Li, Yiran Chen

However, existing FL methods 1) perform poorly when data across clients are non-IID, 2) cannot handle data with new label domains, and 3) cannot leverage unlabeled data, while all these issues naturally happen in real-world graph-based problems.

Federated Learning General Classification +2

On Provable Backdoor Defense in Collaborative Learning

no code implementations 19 Jan 2021 Ximing Qiao, Yuhua Bai, Siping Hu, Ang Li, Yiran Chen, Hai Li

The framework shows that the subset selection process, a deciding factor for subset aggregation methods, can be viewed as a code design problem.

backdoor defense

Can Targeted Adversarial Examples Transfer When the Source and Target Models Have No Label Space Overlap?

no code implementations 17 Mar 2021 Nathan Inkawhich, Kevin J Liang, Jingyang Zhang, Huanrui Yang, Hai Li, Yiran Chen

During the online phase of the attack, we then leverage representations of highly related proxy classes from the whitebox distribution to fool the blackbox model into predicting the desired target class.

The Untapped Potential of Off-the-Shelf Convolutional Neural Networks

no code implementations 17 Mar 2021 Matthew Inkawhich, Nathan Inkawhich, Eric Davis, Hai Li, Yiran Chen

Over recent years, a myriad of novel convolutional network architectures have been developed to advance state-of-the-art performance on challenging recognition tasks.

Neural Architecture Search

FedCor: Correlation-Based Active Client Selection Strategy for Heterogeneous Federated Learning

no code implementations CVPR 2022 Minxue Tang, Xuefei Ning, Yitu Wang, Jingwei Sun, Yu Wang, Hai Li, Yiran Chen

In this work, we propose FedCor -- an FL framework built on a correlation-based client selection strategy, to boost the convergence rate of FL.

Federated Learning

Privacy-Preserving Representation Learning on Graphs: A Mutual Information Perspective

no code implementations 3 Jul 2021 Binghui Wang, Jiayi Guo, Ang Li, Yiran Chen, Hai Li

Existing representation learning methods on graphs have achieved state-of-the-art performance on various graph-related tasks such as node classification, link prediction, etc.

Link Prediction Node Classification +2

Lithography Hotspot Detection via Heterogeneous Federated Learning with Local Adaptation

no code implementations 9 Jul 2021 Xuezhong Lin, Jingyu Pan, Jinming Xu, Yiran Chen, Cheng Zhuo

Moreover, the design houses are also unwilling to directly share such data with the other houses to build a unified model, which can be ineffective for the design house with unique design patterns due to data insufficiency.

Federated Learning

Automated Mobile Attention KPConv Networks via A Wide & Deep Predictor

no code implementations 29 Sep 2021 Tunhou Zhang, Mingyuan Ma, Feng Yan, Hai Li, Yiran Chen

MAKPConv employs a depthwise kernel to reduce resource consumption and re-calibrates the contribution of kernel points towards each neighbor point via Neighbor-Kernel attention to improve representation power.

3D Point Cloud Classification Feature Engineering +2

HERO: Hessian-Enhanced Robust Optimization for Unifying and Improving Generalization and Quantization Performance

1 code implementation 23 Nov 2021 Huanrui Yang, Xiaoxuan Yang, Neil Zhenqiang Gong, Yiran Chen

We therefore propose HERO, a Hessian-enhanced robust optimization method, to minimize the Hessian eigenvalues through a gradient-based training process, simultaneously improving the generalization and quantization performance.

Quantization
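
Minimizing the top Hessian eigenvalue presumes a way to estimate it; this is commonly done with power iteration over Hessian-vector products. The sketch below uses an explicit matrix for clarity, a simplification of what a training loop would do:

```python
import numpy as np

def top_hessian_eigenvalue(hess, iters=100, seed=0):
    """Estimate the largest eigenvalue of a Hessian via power
    iteration.

    In training, `hess @ v` would be replaced by a Hessian-vector
    product computed with autodiff; the eigenvalue estimate is then
    penalized to flatten the loss landscape.
    """
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(hess.shape[0])
    for _ in range(iters):
        v = hess @ v
        v /= np.linalg.norm(v)  # renormalize each step
    return float(v @ hess @ v)  # Rayleigh quotient at convergence
```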

The Dark Side: Security Concerns in Machine Learning for EDA

no code implementations 20 Mar 2022 Zhiyao Xie, Jingyu Pan, Chen-Chia Chang, Yiran Chen

The growing IC complexity has led to a compelling need for design efficiency improvement through new electronic design automation (EDA) methodologies.

BIG-bench Machine Learning

Towards Collaborative Intelligence: Routability Estimation based on Decentralized Private Data

no code implementations 30 Mar 2022 Jingyu Pan, Chen-Chia Chang, Zhiyao Xie, Ang Li, Minxue Tang, Tunhou Zhang, Jiang Hu, Yiran Chen

To further strengthen the results, we co-design a customized ML model FLNet and its personalization under the decentralized training scenario.

Federated Learning

Tunable Hybrid Proposal Networks for the Open World

no code implementations 23 Aug 2022 Matthew Inkawhich, Nathan Inkawhich, Hai Li, Yiran Chen

Current state-of-the-art object proposal networks are trained with a closed-world assumption, meaning they learn to only detect objects of the training classes.

object-detection Object Detection +1

FADE: Enabling Federated Adversarial Training on Heterogeneous Resource-Constrained Edge Devices

no code implementations 8 Sep 2022 Minxue Tang, Jianyi Zhang, Mingyuan Ma, Louis DiValentin, Aolin Ding, Amin Hassanzadeh, Hai Li, Yiran Chen

However, the high demand for memory capacity and computing power makes large-scale federated adversarial training infeasible on resource-constrained edge devices.

Adversarial Robustness Federated Learning +1

Fine-grain Inference on Out-of-Distribution Data with Hierarchical Classification

no code implementations 9 Sep 2022 Randolph Linderman, Jingyang Zhang, Nathan Inkawhich, Hai Li, Yiran Chen

Furthermore, we diagnose the classifier's performance at each level of the hierarchy, improving the explainability and interpretability of the model's predictions.

Anomaly Detection Out of Distribution (OOD) Detection

Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction

no code implementations 30 Sep 2022 Jianyi Zhang, Ang Li, Minxue Tang, Jingwei Sun, Xiang Chen, Fan Zhang, Changyou Chen, Yiran Chen, Hai Li

Based on this measure, we also design a computation-efficient client sampling strategy, such that the actively selected clients will generate a more class-balanced grouped dataset with theoretical guarantees.

Federated Learning Privacy Preserving
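
A class-balance-aware selection step can be sketched greedily: at each step, pick the client whose data brings the grouped label distribution closest to uniform. The imbalance score below (standard deviation of per-class totals) is an illustrative stand-in for the paper's measure, and the function names are assumptions:

```python
import numpy as np

def select_clients(class_counts, num_selected):
    """Greedy class-balance-aware client sampling sketch.

    `class_counts[i]` holds client i's per-class sample counts.
    Each round, add the client that minimizes the imbalance (std)
    of the selected clients' combined class totals.
    """
    chosen = []
    total = np.zeros(class_counts.shape[1])
    remaining = list(range(len(class_counts)))
    for _ in range(num_selected):
        scores = [np.std(total + class_counts[i]) for i in remaining]
        best = remaining.pop(int(np.argmin(scores)))
        total += class_counts[best]
        chosen.append(best)
    return chosen
```

With two complementary one-class clients available, the sketch pairs them up, yielding a perfectly balanced grouped dataset.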

Join-Chain Network: A Logical Reasoning View of the Multi-head Attention in Transformer

no code implementations 6 Oct 2022 Jianyi Zhang, Yiran Chen, Jianshu Chen

Developing neural architectures that are capable of logical reasoning has become increasingly important for a wide range of applications (e.g., natural language processing).

Logical Reasoning Natural Language Understanding

Rethinking Normalization Methods in Federated Learning

no code implementations 7 Oct 2022 Zhixu Du, Jingwei Sun, Ang Li, Pin-Yu Chen, Jianyi Zhang, Hai "Helen" Li, Yiran Chen

We also show that layer normalization is a better choice in FL which can mitigate the external covariate shift and improve the performance of the global model.

Federated Learning

Language-specific Effects on Automatic Speech Recognition Errors for World Englishes

no code implementations COLING 2022 June Choe, Yiran Chen, May Pik Yu Chan, Aini Li, Xin Gao, Nicole Holliday

Despite recent advancements in automated speech recognition (ASR) technologies, reports of unequal performance across speakers of different demographic groups abound.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +2

More Generalized and Personalized Unsupervised Representation Learning In A Distributed System

no code implementations • 11 Nov 2022 Yuewei Yang, Jingwei Sun, Ang Li, Hai Li, Yiran Chen

In this work, we propose a novel method, FedStyle, to learn a more generalized global model by infusing local style information with local content information for contrastive learning, and to learn more personalized local models by inducing local style information for downstream tasks.

Contrastive Learning Federated Learning +1
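The "infusing local style information" above is reminiscent of feature-statistics transfer. As a hedged illustration (an AdaIN-style moment swap on flat feature vectors; function names are hypothetical and this is not FedStyle's actual formulation):

```python
import math

def stats(v):
    mean = sum(v) / len(v)
    std = math.sqrt(sum((x - mean) ** 2 for x in v) / len(v) + 1e-8)
    return mean, std

def infuse_style(content, style):
    """AdaIN-style mixing: keep the content features' structure but
    re-express them with the style features' first and second moments."""
    c_mean, c_std = stats(content)
    s_mean, s_std = stats(style)
    return [s_std * (x - c_mean) / c_std + s_mean for x in content]
```

The mixed features inherit the style vector's mean and standard deviation while preserving the content vector's relative pattern, which is the kind of positive-pair construction contrastive objectives can exploit.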

Biologically Plausible Learning on Neuromorphic Hardware Architectures

no code implementations • 29 Dec 2022 Christopher Wolters, Brady Taylor, Edward Hanson, Xiaoxuan Yang, Ulf Schlichtmann, Yiran Chen

Using the benchmarking framework DNN+NeuroSim, we investigate the impact of hardware nonidealities and quantization on algorithm performance, as well as how network topologies and algorithm-level design choices can scale latency, energy and area consumption of a chip.

Benchmarking Quantization

Analog, In-memory Compute Architectures for Artificial Intelligence

no code implementations • 13 Jan 2023 Patrick Bowen, Guy Regev, Nir Regev, Bruno Pedroni, Edward Hanson, Yiran Chen

This paper presents an analysis of the fundamental limits on energy efficiency in both digital and analog in-memory computing architectures, and compares their performance to single instruction, single data (scalar) machines specifically in the context of machine inference.

PowerPruning: Selecting Weights and Activations for Power-Efficient Neural Network Acceleration

no code implementations • 24 Mar 2023 Richard Petri, Grace Li Zhang, Yiran Chen, Ulf Schlichtmann, Bing Li

To address this challenge, we propose PowerPruning, a novel method to reduce power consumption in digital neural network accelerators by selecting weights that lead to less power consumption in MAC operations.

Efficient Neural Network
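Selecting weights by the power they induce in MAC units, as described above, can be caricatured with a toy proxy (everything here is hypothetical: the one-bit-count power model and the magnitude-per-power ranking are illustrative stand-ins, not PowerPruning's cost model):

```python
def mac_power_proxy(w, bits=8):
    """Hypothetical per-weight power proxy: switching activity grows with
    the number of one-bits in a fixed-point encoding of the weight."""
    q = int(round(abs(w) * (2 ** (bits - 1))))
    return bin(q).count('1') + 1  # +1 avoids division by zero

def power_prune(weights, sparsity):
    """Zero out the weights with the worst magnitude-per-power ratio."""
    scored = sorted(range(len(weights)),
                    key=lambda i: abs(weights[i]) / mac_power_proxy(weights[i]))
    n_prune = int(len(weights) * sparsity)
    pruned = list(weights)
    for i in scored[:n_prune]:
        pruned[i] = 0.0
    return pruned
```

Under such a proxy, small weights that nonetheless toggle many bits are pruned first, which is the intuition behind power-aware rather than purely magnitude-based pruning.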

Communication-Efficient Vertical Federated Learning with Limited Overlapping Samples

no code implementations ICCV 2023 Jingwei Sun, Ziyue Xu, Dong Yang, Vishwesh Nath, Wenqi Li, Can Zhao, Daguang Xu, Yiran Chen, Holger R. Roth

We propose a practical vertical federated learning (VFL) framework called one-shot VFL that can solve the communication bottleneck and the problem of limited overlapping samples simultaneously based on semi-supervised learning.

Vertical Federated Learning

Robust and IP-Protecting Vertical Federated Learning against Unexpected Quitting of Parties

no code implementations • 28 Mar 2023 Jingwei Sun, Zhixu Du, Anna Dai, Saleh Baghersalimi, Alireza Amirshahi, David Atienza, Yiran Chen

In this paper, we propose Party-wise Dropout to improve the VFL model's robustness against the unexpected exit of passive parties and a defense method called DIMIP to protect the active party's IP in the deployment phase.

Vertical Federated Learning
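Party-wise dropout, as named above, plausibly amounts to blanking an entire party's embedding during training so the fused model tolerates that party's absence. A minimal sketch under that assumption (the fusion-by-concatenation and the function name are illustrative, not the paper's exact design):

```python
import random

def party_dropout(party_embeddings, p_drop, rng):
    """Randomly blank whole parties' embeddings during training so the top
    model learns to tolerate a passive party quitting at inference time."""
    fused = []
    for emb in party_embeddings:
        keep = rng.random() >= p_drop
        fused.extend(emb if keep else [0.0] * len(emb))
    return fused
```

At deployment, a quitted party's slot is simply filled with the same zero vector the model already saw during training.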

PrivaScissors: Enhance the Privacy of Collaborative Inference through the Lens of Mutual Information

no code implementations • 17 May 2023 Lin Duan, Jingwei Sun, Yiran Chen, Maria Gorlatova

Edge-cloud collaborative inference empowers resource-limited IoT devices to support deep learning applications without disclosing their raw data to the cloud server, thus preserving privacy.

Collaborative Inference

LISSNAS: Locality-based Iterative Search Space Shrinkage for Neural Architecture Search

no code implementations • 6 Jul 2023 Bhavna Gopal, Arjun Sridhar, Tunhou Zhang, Yiran Chen

We propose LISSNAS, an automated algorithm that shrinks a large space into a diverse, small search space with SOTA search performance.

Efficient Exploration Neural Architecture Search

FedBPT: Efficient Federated Black-box Prompt Tuning for Large Language Models

no code implementations • 2 Oct 2023 Jingwei Sun, Ziyue Xu, Hongxu Yin, Dong Yang, Daguang Xu, Yiran Chen, Holger R. Roth

However, applying FL to finetune PLMs is hampered by challenges, including restricted model parameter access, high computational requirements, and communication overheads.

Federated Learning Privacy Preserving
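Black-box prompt tuning, as in the title above, means optimizing a prompt without access to model gradients. A generic gradient-free local search conveys the flavor (a sketch only: the accept-if-better random search here is a common baseline, not FedBPT's actual optimizer):

```python
import random

def black_box_tune(loss_fn, prompt, iters=200, sigma=0.3, seed=0):
    """Gradient-free local search over a continuous prompt: propose a
    Gaussian perturbation, keep it only if the (black-box) loss improves."""
    rng = random.Random(seed)
    best, best_loss = list(prompt), loss_fn(prompt)
    for _ in range(iters):
        cand = [p + rng.gauss(0.0, sigma) for p in best]
        cand_loss = loss_fn(cand)
        if cand_loss < best_loss:
            best, best_loss = cand, cand_loss
    return best, best_loss
```

Because only loss queries are needed, such a loop sidesteps the restricted-parameter-access and gradient-communication issues the abstract mentions.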

SiDA: Sparsity-Inspired Data-Aware Serving for Efficient and Scalable Large Mixture-of-Experts Models

no code implementations • 29 Oct 2023 Zhixu Du, Shiyu Li, Yuhao Wu, Xiangyu Jiang, Jingwei Sun, Qilin Zheng, Yongkai Wu, Ang Li, Hai "Helen" Li, Yiran Chen

Specifically, SiDA attains a remarkable speedup in MoE inference, with up to 3.93X higher throughput, up to 75% lower latency, and up to 80% GPU memory savings, at as little as a 1% performance drop.
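The GPU-memory savings above come from exploiting expert-activation sparsity in MoE serving. A toy frequency-based expert cache captures the general idea (illustrative only: SiDA's actual data-aware offloading and any names here are not from the paper):

```python
from collections import Counter

class ExpertCache:
    """Keep the most frequently activated experts resident in fast (GPU)
    memory; everything else is assumed to live in host memory."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.freq = Counter()
        self.resident = set()

    def route(self, expert_ids):
        # Hits can be served from fast memory; misses imply a fetch.
        hits = [e for e in expert_ids if e in self.resident]
        self.freq.update(expert_ids)
        self.resident = {e for e, _ in self.freq.most_common(self.capacity)}
        return hits
```

Because MoE routing activates only a few experts per token, a small resident set can serve most requests while the bulk of expert weights stay off-GPU.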

Farthest Greedy Path Sampling for Two-shot Recommender Search

no code implementations • 31 Oct 2023 Yufan Cao, Tunhou Zhang, Wei Wen, Feng Yan, Hai Li, Yiran Chen

FGPS enhances path diversity to facilitate more comprehensive supernet exploration, while emphasizing path quality to ensure the effective identification and utilization of promising architectures.

Click-Through Rate Prediction Neural Architecture Search
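Balancing path diversity against path quality, as described above, resembles farthest-point greedy selection. A hedged sketch (paths as tuples of per-layer operator choices, Hamming distance as the diversity metric; this is a generic illustration, not FGPS itself):

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def farthest_greedy_paths(paths, k, quality):
    """Seed with the best-quality path, then repeatedly add the candidate
    farthest (in Hamming distance) from everything already selected."""
    chosen = [max(paths, key=quality)]
    while len(chosen) < k:
        rest = [p for p in paths if p not in chosen]
        chosen.append(max(rest,
                          key=lambda p: min(hamming(p, c) for c in chosen)))
    return chosen
```

The max-min distance criterion spreads the sampled paths across the supernet instead of clustering them around one promising region.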

DistDNAS: Search Efficient Feature Interactions within 2 Hours

no code implementations • 1 Nov 2023 Tunhou Zhang, Wei Wen, Igor Fedorov, Xi Liu, Buyun Zhang, Fangqiu Han, Wen-Yen Chen, Yiping Han, Feng Yan, Hai Li, Yiran Chen

To optimize search efficiency, DistDNAS distributes the search and aggregates the choice of optimal interaction modules on varying data dates, achieving over 25x speed-up and reducing search cost from 2 days to 2 hours.

Recommendation Systems

DACBERT: Leveraging Dependency Agreement for Cost-Efficient Bert Pretraining

no code implementations • 8 Nov 2023 Martin Kuo, Jianyi Zhang, Yiran Chen

Building on the cost-efficient pretraining advancements brought about by Crammed BERT, we enhance its performance and interpretability further by introducing a novel pretrained model Dependency Agreement Crammed BERT (DACBERT) and its two-stage pretraining framework - Dependency Agreement Pretraining.

MRPC Natural Language Understanding +1

EDALearn: A Comprehensive RTL-to-Signoff EDA Benchmark for Democratized and Reproducible ML for EDA Research

no code implementations • 4 Dec 2023 Jingyu Pan, Chen-Chia Chang, Zhiyao Xie, Yiran Chen

The application of Machine Learning (ML) in Electronic Design Automation (EDA) for Very Large-Scale Integration (VLSI) design has garnered significant research attention.

Peeking Behind the Curtains of Residual Learning

no code implementations • 13 Feb 2024 Tunhou Zhang, Feng Yan, Hai Li, Yiran Chen

The utilization of residual learning has become widespread in deep and scalable neural nets.
