no code implementations • COLING 2022 • June Choe, Yiran Chen, May Pik Yu Chan, Aini Li, Xin Gao, Nicole Holliday
Despite recent advancements in automated speech recognition (ASR) technologies, reports of unequal performance across speakers of different demographic groups abound.
Automatic Speech Recognition (ASR)
1 code implementation • Findings (EMNLP) 2021 • Yiran Chen, PengFei Liu, Xipeng Qiu
In this paper, we present an adversarial meta-evaluation methodology that allows us to (i) diagnose the fine-grained strengths and weaknesses of 6 existing top-performing metrics over 24 diagnostic test datasets, and (ii) search for directions of further improvement via data augmentation.
1 code implementation • 9 May 2023 • Jianyi Zhang, Saeed Vahidian, Martin Kuo, Chunyuan Li, Ruiyi Zhang, Guoyin Wang, Yiran Chen
This repository offers a foundational framework for exploring federated fine-tuning of LLMs using heterogeneous instructions across diverse categories.
no code implementations • 28 Mar 2023 • Jingwei Sun, Ziyue Xu, Dong Yang, Vishwesh Nath, Wenqi Li, Can Zhao, Daguang Xu, Yiran Chen, Holger R. Roth
We propose a practical vertical federated learning (VFL) framework called one-shot VFL that can solve the communication bottleneck and the problem of limited overlapping samples simultaneously, based on semi-supervised learning.
no code implementations • 28 Mar 2023 • Jingwei Sun, Zhixu Du, Anna Dai, Saleh Baghersalimi, Alireza Amirshahi, David Atienza, Yiran Chen
In this paper, we propose Party-wise Dropout to improve the VFL model's robustness against the unexpected exit of passive parties, and a defense method called DIMIP to protect the active party's IP in the deployment phase.
1 code implementation • 25 Mar 2023 • Jingyang Zhang, Nathan Inkawhich, Randolph Linderman, Ryan Luley, Yiran Chen, Hai Li
Building up reliable Out-of-Distribution (OOD) detectors is challenging, often requiring the use of OOD data during training.
no code implementations • 24 Mar 2023 • Richard Petri, Grace Li Zhang, Yiran Chen, Ulf Schlichtmann, Bing Li
To address this challenge, we propose PowerPruning, a novel method to reduce power consumption in digital neural network accelerators by selecting weights that lead to less power consumption in MAC operations.
no code implementations • 13 Jan 2023 • Patrick Bowen, Guy Regev, Nir Regev, Bruno Pedroni, Edward Hanson, Yiran Chen
This paper presents an analysis of the fundamental limits on energy efficiency in both digital and analog in-memory computing architectures, and compares their performance to single instruction, single data (scalar) machines specifically in the context of machine inference.
no code implementations • 29 Dec 2022 • Christopher Wolters, Brady Taylor, Edward Hanson, Xiaoxuan Yang, Ulf Schlichtmann, Yiran Chen
Using the benchmarking framework DNN+NeuroSim, we investigate the impact of hardware nonidealities and quantization on algorithm performance, as well as how network topologies and algorithm-level design choices can scale latency, energy and area consumption of a chip.
1 code implementation • 28 Nov 2022 • Tunhou Zhang, Mingyuan Ma, Feng Yan, Hai Li, Yiran Chen
In this work, we establish PIDS, a novel paradigm to jointly explore point interactions and point dimensions to serve semantic segmentation on point cloud data.
Ranked #6 on Robust 3D Semantic Segmentation on SemanticKITTI-C
no code implementations • 11 Nov 2022 • Yuewei Yang, Jingwei Sun, Ang Li, Hai Li, Yiran Chen
In this work, we propose a novel method, FedStyle, to learn a more generalized global model by infusing local style information with local content information for contrastive learning, and to learn more personalized local models by inducing local style information for downstream tasks.
no code implementations • 7 Oct 2022 • Zhixu Du, Jingwei Sun, Ang Li, Pin-Yu Chen, Jianyi Zhang, Hai "Helen" Li, Yiran Chen
We also show that layer normalization is a better choice in FL, as it can mitigate the external covariate shift and improve the performance of the global model.
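As a hedged illustration of the distinction this entry draws (function names and shapes are my own, not from the paper): batch normalization computes statistics across a client's local batch, so skewed non-IID client data shifts those statistics, whereas layer normalization computes statistics per sample and is unaffected by which samples a client happens to hold.

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Normalize across the batch dimension: statistics depend on
    # which samples this client holds (sensitive to non-IID data).
    mu = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def layer_norm(x, eps=1e-5):
    # Normalize across the features of each sample: statistics are
    # per-example, so client data skew does not shift them.
    mu = x.mean(axis=1, keepdims=True)
    var = x.var(axis=1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

x = np.random.randn(8, 4) * 3.0 + 1.0
print(np.allclose(layer_norm(x).mean(axis=1), 0.0))  # per-sample zero mean
```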
no code implementations • 6 Oct 2022 • Jianyi Zhang, Yiran Chen, Jianshu Chen
Developing neural architectures that are capable of logical reasoning has become increasingly important for a wide range of applications (e.g., natural language processing).
no code implementations • 30 Sep 2022 • Jianyi Zhang, Ang Li, Minxue Tang, Jingwei Sun, Xiang Chen, Fan Zhang, Changyou Chen, Yiran Chen, Hai Li
Based on this measure, we also design a computation-efficient client sampling strategy, such that the actively selected clients will generate a more class-balanced grouped dataset with theoretical guarantees.
no code implementations • 9 Sep 2022 • Randolph Linderman, Jingyang Zhang, Nathan Inkawhich, Hai Li, Yiran Chen
Furthermore, we diagnose the classifier's performance at each level of the hierarchy, improving the explainability and interpretability of the model's predictions.
no code implementations • 8 Sep 2022 • Minxue Tang, Jianyi Zhang, Mingyuan Ma, Louis DiValentin, Aolin Ding, Amin Hassanzadeh, Hai Li, Yiran Chen
However, the high demand for memory capacity and computing power makes large-scale federated adversarial training infeasible on resource-constrained edge devices.
no code implementations • 23 Aug 2022 • Matthew Inkawhich, Nathan Inkawhich, Hai Li, Yiran Chen
Current state-of-the-art object proposal networks are trained with a closed-world assumption, meaning they learn to only detect objects of the training classes.
2 code implementations • 14 Jul 2022 • Tunhou Zhang, Dehua Cheng, Yuchen He, Zhengxing Chen, Xiaoliang Dai, Liang Xiong, Feng Yan, Hai Li, Yiran Chen, Wei Wen
To overcome the data multi-modality and architecture heterogeneity challenges in the recommendation domain, NASRec establishes a large supernet (i.e., search space) to search the full architectures.
no code implementations • 30 Mar 2022 • Jingyu Pan, Chen-Chia Chang, Zhiyao Xie, Ang Li, Minxue Tang, Tunhou Zhang, Jiang Hu, Yiran Chen
To further strengthen the results, we co-design a customized ML model FLNet and its personalization under the decentralized training scenario.
no code implementations • 20 Mar 2022 • Zhiyao Xie, Jingyu Pan, Chen-Chia Chang, Yiran Chen
The growing IC complexity has led to a compelling need for design efficiency improvement through new electronic design automation (EDA) methodologies.
1 code implementation • 21 Feb 2022 • Jingyang Zhang, Yiran Chen, Hai Li
Adversarial Training (AT) is crucial for obtaining deep neural networks that are robust to adversarial attacks, yet recent works found that it could also make models more vulnerable to privacy attacks.
1 code implementation • 23 Nov 2021 • Huanrui Yang, Xiaoxuan Yang, Neil Zhenqiang Gong, Yiran Chen
We therefore propose HERO, a Hessian-enhanced robust optimization method, to minimize the Hessian eigenvalues through a gradient-based training process, simultaneously improving the generalization and quantization performance.
1 code implementation • NeurIPS 2021 • Jingwei Sun, Ang Li, Louis DiValentin, Amin Hassanzadeh, Yiran Chen, Hai Li
Furthermore, we derive a certified robustness guarantee against model poisoning attacks and a convergence guarantee to FedAvg after applying our FL-WBC.
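FL-WBC is applied on top of standard federated averaging, so a minimal FedAvg sketch helps fix the setting (this is textbook FedAvg, not the paper's defense; names are my own):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client model parameters by a data-size-weighted average,
    i.e., the FedAvg server update that FL-WBC's guarantees refer to."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two clients with 10 and 30 samples: the larger client dominates.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
sizes = [10, 30]
print(fedavg(clients, sizes))  # -> [2.5 3.5]
```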
no code implementations • 29 Sep 2021 • Tunhou Zhang, Mingyuan Ma, Feng Yan, Hai Li, Yiran Chen
MAKPConv employs a depthwise kernel to reduce resource consumption and re-calibrates the contribution of kernel points towards each neighbor point via Neighbor-Kernel attention to improve representation power.
1 code implementation • Findings (NAACL) 2022 • Yiran Chen, Zhenqiao Song, Xianze Wu, Danqing Wang, Jingjing Xu, Jiaze Chen, Hao Zhou, Lei Li
We introduce MTG, a new benchmark suite for training and evaluating multilingual text generation.
no code implementations • 9 Jul 2021 • Xuezhong Lin, Jingyu Pan, Jinming Xu, Yiran Chen, Cheng Zhuo
Moreover, design houses are also unwilling to directly share such data with other houses to build a unified model, which can be ineffective for a design house with unique design patterns due to data insufficiency.
no code implementations • 3 Jul 2021 • Binghui Wang, Jiayi Guo, Ang Li, Yiran Chen, Hai Li
Existing representation learning methods on graphs have achieved state-of-the-art performance on various graph-related tasks such as node classification, link prediction, etc.
1 code implementation • CVPR 2021 • Jingwei Sun, Ang Li, Binghui Wang, Huanrui Yang, Hai Li, Yiran Chen
The key idea of our defense is learning to perturb data representation such that the quality of the reconstructed data is severely degraded, while FL performance is maintained.
1 code implementation • 7 Jun 2021 • Jingyang Zhang, Nathan Inkawhich, Randolph Linderman, Yiran Chen, Hai Li
We then propose Mixture Outlier Exposure (MixOE), which mixes ID data and training outliers to expand the coverage of different OOD granularities, and trains the model such that the prediction confidence linearly decays as the input transitions from ID to OOD.
Medical Image Classification, Out-of-Distribution Detection
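The mixing idea in the MixOE entry above can be sketched as a convex combination of an in-distribution sample and a training outlier, with the soft label's confidence decaying linearly toward the uniform distribution (a simplified sketch; the function and variable names are assumptions, not the paper's API):

```python
import numpy as np

def mixoe_pair(x_id, y_onehot, x_outlier, lam, num_classes):
    """Mix an in-distribution sample with a training outlier.

    The input interpolates between ID and OOD; the target keeps a
    fraction `lam` of the one-hot label and spreads the rest uniformly,
    so the trained confidence decays linearly as inputs move toward OOD.
    """
    x_mix = lam * x_id + (1.0 - lam) * x_outlier
    uniform = np.full(num_classes, 1.0 / num_classes)
    y_mix = lam * y_onehot + (1.0 - lam) * uniform
    return x_mix, y_mix

x_id, x_ood = np.ones(4), np.zeros(4)
y = np.array([1.0, 0.0])  # 2-class one-hot label
xm, ym = mixoe_pair(x_id, y, x_ood, lam=0.5, num_classes=2)
print(xm, ym)  # -> [0.5 0.5 0.5 0.5] [0.75 0.25]
```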
1 code implementation • 7 Apr 2021 • Chenxin An, Ming Zhong, Yiran Chen, Danqing Wang, Xipeng Qiu, Xuanjing Huang
Previous work on text summarization in the scientific domain has mainly focused on the content of the input document, seldom considering its citation network.
no code implementations • CVPR 2022 • Minxue Tang, Xuefei Ning, Yitu Wang, Jingwei Sun, Yu Wang, Hai Li, Yiran Chen
In this work, we propose FedCor -- an FL framework built on a correlation-based client selection strategy, to boost the convergence rate of FL.
no code implementations • 17 Mar 2021 • Nathan Inkawhich, Kevin J Liang, Jingyang Zhang, Huanrui Yang, Hai Li, Yiran Chen
During the online phase of the attack, we then leverage representations of highly related proxy classes from the whitebox distribution to fool the blackbox model into predicting the desired target class.
no code implementations • 17 Mar 2021 • Matthew Inkawhich, Nathan Inkawhich, Eric Davis, Hai Li, Yiran Chen
Over recent years, a myriad of novel convolutional network architectures have been developed to advance state-of-the-art performance on challenging recognition tasks.
1 code implementation • ICLR 2021 • Huanrui Yang, Lin Duan, Yiran Chen, Hai Li
Mixed-precision quantization can potentially achieve the optimal tradeoff between performance and compression rate of deep neural networks, and thus, have been widely investigated.
no code implementations • 19 Jan 2021 • Ximing Qiao, Yuhua Bai, Siping Hu, Ang Li, Yiran Chen, Hai Li
The framework shows that the subset selection process, a deciding factor for subset aggregation methods, can be viewed as a code design problem.
4 code implementations • 8 Dec 2020 • Jingwei Sun, Ang Li, Binghui Wang, Huanrui Yang, Hai Li, Yiran Chen
In this work, we show our key observation that the data representation leakage from gradients is the essential cause of privacy leakage in FL.
no code implementations • 8 Dec 2020 • Binghui Wang, Ang Li, Hai Li, Yiran Chen
However, existing FL methods 1) perform poorly when data across clients are non-IID, 2) cannot handle data with new label domains, and 3) cannot leverage unlabeled data, while all these issues naturally happen in real-world graph-based problems.
no code implementations • 3 Dec 2020 • Chen-Chia Chang, Jingyu Pan, Tunhou Zhang, Zhiyao Xie, Jiang Hu, Weiyi Qi, Chun-Wei Lin, Rongjian Liang, Joydeep Mitra, Elias Fallon, Yiran Chen
The rise of machine learning technology inspires a boom of its applications in electronic design automation (EDA) and helps improve the degree of automation in chip designs.
no code implementations • 30 Nov 2020 • Hsin-Pai Cheng, Feng Liang, Meng Li, Bowen Cheng, Feng Yan, Hai Li, Vikas Chandra, Yiran Chen
We use ScaleNAS to create high-resolution models for two different tasks, ScaleNet-P for human pose estimation and ScaleNet-S for semantic segmentation.
Ranked #5 on Multi-Person Pose Estimation on COCO test-dev
no code implementations • 27 Nov 2020 • Zhiyao Xie, Rongjian Liang, Xiaoqing Xu, Jiang Hu, Yixiao Duan, Yiran Chen
Net length is a key proxy metric for optimizing timing and power across various stages of a standard digital design flow.
no code implementations • 26 Nov 2020 • Zhiyao Xie, Hai Li, Xiaoqing Xu, Jiang Hu, Yiran Chen
IR drop constraint is a fundamental requirement enforced in almost all chip designs.
no code implementations • 26 Nov 2020 • Zhiyao Xie, Guan-Qi Fang, Yu-Hung Huang, Haoxing Ren, Yanqing Zhang, Brucek Khailany, Shao-Yun Fang, Jiang Hu, Yiran Chen, Erick Carvajal Barboza
Experimental results on benchmark circuits show that our approach achieves a 25% improvement in design quality or a 37% reduction in sampling cost compared to the random forest method, which is the kernel of a highly cited previous work.
no code implementations • 26 Nov 2020 • Zhiyao Xie, Haoxing Ren, Brucek Khailany, Ye Sheng, Santosh Santosh, Jiang Hu, Yiran Chen
Moreover, the proposed CNN model is general and transferable to different designs.
2 code implementations • Findings of the Association for Computational Linguistics 2020 • Yiran Chen, PengFei Liu, Ming Zhong, Zi-Yi Dou, Danqing Wang, Xipeng Qiu, Xuanjing Huang
In this paper, we perform an in-depth analysis of characteristics of different datasets and investigate the performance of different summarization models under a cross-dataset setting, in which a summarizer trained on one corpus will be evaluated on a range of out-of-domain corpora.
no code implementations • 1 Sep 2020 • Binghui Wang, Tianxiang Zhou, Minhua Lin, Pan Zhou, Ang Li, Meng Pang, Cai Fu, Hai Li, Yiran Chen
Next, we reformulate the evasion attack against GNNs to be related to calculating label influence on LP, which is applicable to multi-layer GNNs and does not need to know the GNN model.
no code implementations • 1 Sep 2020 • Houxiang Fan, Binghui Wang, Pan Zhou, Ang Li, Meng Pang, Zichuan Xu, Cai Fu, Hai Li, Yiran Chen
Link prediction in dynamic graphs (LPDG) is an important research problem that has diverse applications such as online recommendations, studies on disease contagion, organizational studies, etc.
1 code implementation • 7 Aug 2020 • Ang Li, Jingwei Sun, Binghui Wang, Lin Duan, Sicheng Li, Yiran Chen, Hai Li
Rather than learning a shared global model in classic federated learning, each client learns a personalized model via LotteryFL; the communication cost can be significantly reduced due to the compact size of lottery networks.
no code implementations • 21 Jul 2020 • Pengcheng Dai, Jianlei Yang, Xucheng Ye, Xingzhou Cheng, Junyu Luo, Linghao Song, Yiran Chen, Weisheng Zhao
In this paper, SparseTrain is proposed to accelerate CNN training by fully exploiting the sparsity.
no code implementations • 8 Jul 2020 • Hsin-Pai Cheng, Tunhou Zhang, Yixing Zhang, Shi-Yu Li, Feng Liang, Feng Yan, Meng Li, Vikas Chandra, Hai Li, Yiran Chen
To preserve graph correlation information in encoding, we propose NASGEM which stands for Neural Architecture Search via Graph Embedding Method.
no code implementations • 12 Jun 2020 • Chaofei Yang, Lei Ding, Yiran Chen, Hai Li
On the one hand, the quality of the synthesized faces is reduced with more visual artifacts such that the synthesized faces are more obviously fake or less convincing to human observers.
no code implementations • 24 May 2020 • Ang Li, Chunpeng Wu, Yiran Chen, Bin Ni
Instead of performing stylization frame by frame, only key frames in the original video are processed by a pre-trained deep neural network (DNN) on edge servers, while the rest of stylized intermediate frames are generated by our designed optical-flow-based frame interpolation algorithm on mobile phones.
no code implementations • 23 May 2020 • Ang Li, Yixiao Duan, Huanrui Yang, Yiran Chen, Jianlei Yang
The goal of this framework is to learn a feature extractor that can hide the privacy information from the intermediate representations; while maximally retaining the original information embedded in the raw data for the data collector to accomplish unknown learning tasks.
1 code implementation • ICML 2020 • Shi-Yu Li, Edward Hanson, Hai Li, Yiran Chen
Although state-of-the-art (SOTA) CNNs achieve outstanding performance on various tasks, their high computation demand and massive number of parameters make it difficult to deploy these SOTA CNNs onto resource-constrained devices.
1 code implementation • 30 Apr 2020 • Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong
The TRP trained network inherently has a low-rank structure, and is approximated with negligible performance loss, thus eliminating the fine-tuning process after low rank decomposition.
no code implementations • NeurIPS 2020 • Nathan Inkawhich, Kevin J Liang, Binghui Wang, Matthew Inkawhich, Lawrence Carin, Yiran Chen
We consider the blackbox transfer-based targeted adversarial attack threat model in the realm of deep neural network (DNN) image classifiers.
no code implementations • ICLR 2020 • Nathan Inkawhich, Kevin J Liang, Lawrence Carin, Yiran Chen
Almost all current adversarial attacks of CNN classifiers rely on information derived from the output layer of the network.
1 code implementation • 20 Apr 2020 • Huanrui Yang, Minxue Tang, Wei Wen, Feng Yan, Daniel Hu, Ang Li, Hai Li, Yiran Chen
In this work, we propose SVD training, the first method to explicitly achieve low-rank DNNs during training without applying SVD on every step.
2 code implementations • ACL 2020 • Ming Zhong, PengFei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, Xuanjing Huang
This paper creates a paradigm shift with regard to the way we build neural extractive summarization systems.
Ranked #1 on Text Summarization on BBC XSum
2 code implementations • ECCV 2020 • Wei Wen, Hanxiao Liu, Hai Li, Yiran Chen, Gabriel Bender, Pieter-Jan Kindermans
First we train N random architectures to generate N (architecture, validation accuracy) pairs and use them to train a regression model that predicts accuracy based on the architecture.
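The two-step recipe above (sample N random architectures, then fit a regression model that predicts accuracy from the architecture) can be sketched with a linear predictor over binary architecture encodings. This is a deliberate simplification under assumed names; the paper's encoding and regressor may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: N random architectures -> (encoding, validation accuracy) pairs.
# Here a synthetic linear ground truth stands in for actually training them.
N, d = 64, 10
encodings = rng.integers(0, 2, size=(N, d)).astype(float)
true_w = rng.normal(size=d)
accuracies = encodings @ true_w + 0.01 * rng.normal(size=N)

# Step 2: fit a regression model predicting accuracy from the encoding.
w_hat, *_ = np.linalg.lstsq(encodings, accuracies, rcond=None)

# Use the predictor to rank unseen candidate architectures cheaply.
candidates = rng.integers(0, 2, size=(5, d)).astype(float)
best = int(np.argmax(candidates @ w_hat))
print(best)
```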
1 code implementation • 21 Nov 2019 • Tunhou Zhang, Hsin-Pai Cheng, Zhenwen Li, Feng Yan, Chengyu Huang, Hai Li, Yiran Chen
Specifically, both ShrinkCNN and ShrinkRNN are crafted within 1.5 GPU hours, which is 7.2x and 6.7x faster than the crafting time of SOTA CNN and RNN models, respectively.
no code implementations • 25 Oct 2019 • Jingchi Zhang, Jonathan Huang, Michael Deisher, Hai Li, Yiran Chen
Recently, deep neural networks (DNNs) have been widely used in the speaker recognition area.
1 code implementation • 9 Oct 2019 • Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Wenrui Dai, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong
To accelerate DNNs inference, low-rank approximation has been widely adopted because of its solid theoretical rationale and efficient implementations.
no code implementations • 25 Sep 2019 • Chunpeng Wu, Wei Wen, Yiran Chen, Hai Li
As such, training our GAN architecture requires much fewer high-quality images with a small number of additional low-quality images.
1 code implementation • 17 Sep 2019 • Juncheng Shen, Juzheng Liu, Yiran Chen, Hai Li
When using MoLe for the VGG-16 network on the CIFAR dataset, the computational overhead is only 9% and the data transmission overhead is 5.12%.
no code implementations • 9 Sep 2019 • Ang Li, Jiayi Guo, Huanrui Yang, Flora D. Salim, Yiran Chen
Our experiments on the CelebA and LFW datasets show that the quality of the images reconstructed from the obfuscated features of the raw image is dramatically decreased from 0.9458 to 0.3175 in terms of multi-scale structural similarity.
no code implementations • ECCV 2020 • Xucheng Ye, Pengcheng Dai, Junyu Luo, Xin Guo, Yingjie Qi, Jianlei Yang, Yiran Chen
Sparsification is an efficient approach to accelerate CNN inference, but it is challenging to take advantage of sparsity in training procedure because the involved gradients are dynamically changed.
no code implementations • 5 Jul 2019 • Zichen Fan, Ziru Li, Bing Li, Yiran Chen, Hai Li
Deconvolution has become widespread in neural networks.
1 code implementation • 19 Jun 2019 • Hsin-Pai Cheng, Tunhou Zhang, Yukun Yang, Feng Yan, Shi-Yu Li, Harris Teague, Hai Li, Yiran Chen
Designing neural architectures for edge devices is subject to constraints of accuracy, inference latency, and computational cost.
1 code implementation • ICLR 2020 • Wei Wen, Feng Yan, Yiran Chen, Hai Li
Our AutoGrow is efficient.
no code implementations • 3 Jun 2019 • Runze Liu, Jianlei Yang, Yiran Chen, Weisheng Zhao
Simultaneous Localization and Mapping (SLAM) is a critical task for autonomous navigation.
1 code implementation • 28 May 2019 • Matthew Inkawhich, Yiran Chen, Hai Li
In these snooping threat models, the adversary does not have the ability to interact with the target agent's environment, and can only eavesdrop on the action and reward signals being exchanged between agent and environment.
no code implementations • ICLR 2019 • Qing Yang, Wei Wen, Zuoguan Wang, Yiran Chen, Hai Li
With the rapidly scaling up of deep neural networks (DNNs), extensive research studies on network model compression such as weight pruning have been performed for efficient deployment.
no code implementations • 15 Apr 2019 • Sergei Alyamkin, Matthew Ardi, Alexander C. Berg, Achille Brighton, Bo Chen, Yiran Chen, Hsin-Pai Cheng, Zichen Fan, Chen Feng, Bo Fu, Kent Gauen, Abhinav Goel, Alexander Goncharenko, Xuyang Guo, Soonhoi Ha, Andrew Howard, Xiao Hu, Yuanjun Huang, Donghyun Kang, Jaeyoun Kim, Jong Gook Ko, Alexander Kondratyev, Junhyeok Lee, Seungjae Lee, Suwoong Lee, Zichao Li, Zhiyu Liang, Juzheng Liu, Xin Liu, Yang Lu, Yung-Hsiang Lu, Deeptanshu Malik, Hong Hanh Nguyen, Eunbyung Park, Denis Repin, Liang Shen, Tao Sheng, Fei Sun, David Svitov, George K. Thiruvathukal, Baiwu Zhang, Jingchi Zhang, Xiaopeng Zhang, Shaojie Zhuo
In addition to mobile phones, many autonomous systems rely on visual data for making decisions and some of these systems have limited energy (such as unmanned aerial vehicles also called drones and mobile robots).
no code implementations • 12 Mar 2019 • Chen Feng, Tao Sheng, Zhiyu Liang, Shaojie Zhuo, Xiaopeng Zhang, Liang Shen, Matthew Ardi, Alexander C. Berg, Yiran Chen, Bo Chen, Kent Gauen, Yung-Hsiang Lu
The IEEE Low-Power Image Recognition Challenge (LPIRC) is an annual competition started in 2015 that encourages joint hardware and software solutions for computer vision systems with low latency and power.
no code implementations • 7 Jan 2019 • Linghao Song, Jiachen Mao, Youwei Zhuo, Xuehai Qian, Hai Li, Yiran Chen
In this paper, inspired by recent work in machine learning systems, we propose a solution HyPar to determine layer-wise parallelism for deep neural network training with an array of DNN accelerators.
no code implementations • 6 Dec 2018 • Jingyang Zhang, Hsin-Pai Cheng, Chunpeng Wu, Hai Li, Yiran Chen
We intuitively and empirically prove the rationality of our method in reducing the search space.
1 code implementation • 6 Dec 2018 • Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong
We propose Trained Rank Pruning (TRP), which iterates low rank approximation and training.
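TRP's alternation of low-rank approximation and training can be sketched as periodically projecting a weight matrix onto its best rank-r approximation via truncated SVD between gradient steps. This is a minimal sketch of the iteration's shape, not the paper's exact scheme:

```python
import numpy as np

def truncate_rank(W, r):
    """Project W onto its best rank-r approximation via truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def trp_step(W, grad, lr=0.1, r=2):
    """One TRP-style iteration: a gradient update, then a rank projection,
    so the trained network keeps an (approximately) low-rank structure."""
    return truncate_rank(W - lr * grad, r)

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 5))
W = trp_step(W, rng.normal(size=(6, 5)), r=2)
print(np.linalg.matrix_rank(W))  # -> 2
```

Because the rank constraint is enforced during training, the final factors can be kept as-is, which is why the paper reports no fine-tuning after decomposition.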
no code implementations • ICLR 2019 • Nathan Inkawhich, Matthew Inkawhich, Yiran Chen, Hai Li
The success of deep learning research has catapulted deep models into production systems that our society is becoming increasingly dependent on, especially in the image and video domains.
no code implementations • 27 Nov 2018 • Hsin-Pai Cheng, Patrick Yu, Haojing Hu, Feng Yan, Shi-Yu Li, Hai Li, Yiran Chen
Distributed learning systems have enabled training large-scale models over large amount of data in significantly shorter time.
1 code implementation • NIPS Workshop CDNNRIA 2018 • Hsin-Pai Cheng, Yuanjun Huang, Xuyang Guo, Yifei Huang, Feng Yan, Hai Li, Yiran Chen
Thus judiciously selecting different precision for different layers/structures can potentially produce more efficient models compared to traditional quantization methods by striking a better balance between accuracy and compression rate.
no code implementations • 3 Oct 2018 • Sergei Alyamkin, Matthew Ardi, Achille Brighton, Alexander C. Berg, Yiran Chen, Hsin-Pai Cheng, Bo Chen, Zichen Fan, Chen Feng, Bo Fu, Kent Gauen, Jongkook Go, Alexander Goncharenko, Xuyang Guo, Hong Hanh Nguyen, Andrew Howard, Yuanjun Huang, Donghyun Kang, Jaeyoun Kim, Alexander Kondratyev, Seungjae Lee, Suwoong Lee, Junhyeok Lee, Zhiyu Liang, Xin Liu, Juzheng Liu, Zichao Li, Yang Lu, Yung-Hsiang Lu, Deeptanshu Malik, Eunbyung Park, Denis Repin, Tao Sheng, Liang Shen, Fei Sun, David Svitov, George K. Thiruvathukal, Baiwu Zhang, Jingchi Zhang, Xiaopeng Zhang, Shaojie Zhuo
The Low-Power Image Recognition Challenge (LPIRC, https://rebootingcomputing.ieee.org/lpirc) is an annual competition started in 2015.
no code implementations • NeurIPS 2018 • Chaosheng Dong, Yiran Chen, Bo Zeng
Inverse optimization is a powerful paradigm for learning the preferences and restrictions that explain the behavior of a decision maker, based on a set of external signals and the corresponding decision pairs.
1 code implementation • 20 Sep 2018 • Juncheng Shen, Juzheng Liu, Yiran Chen, Hai Li
When using MoLe for the VGG-16 network on the CIFAR dataset, the computational overhead is only 9% and the data transmission overhead is 5.12%.
no code implementations • 6 Sep 2018 • Chuhan Min, Aosen Wang, Yiran Chen, Wenyao Xu, Xin Chen
To overcome this challenge, we propose a novel filter-pruning framework, two-phase filter pruning based on conditional entropy, namely 2PFPCE, to compress CNN models and reduce the inference time with marginal performance degradation.
1 code implementation • 5 Jun 2018 • Xin Liu, Huanrui Yang, Ziwei Liu, Linghao Song, Hai Li, Yiran Chen
Successful realization of DPatch also illustrates the intrinsic vulnerability of the modern detector architectures to such patch-based adversarial attacks.
1 code implementation • 21 May 2018 • Wei Wen, Yandan Wang, Feng Yan, Cong Xu, Chunpeng Wu, Yiran Chen, Hai Li
It becomes an open question whether escaping sharp minima can improve the generalization.
no code implementations • 8 Feb 2018 • Jianlei Yang, Xueyan Wang, Qiang Zhou, Zhaohao Wang, Hai Li, Yiran Chen, Weisheng Zhao
Circuit obfuscation is a frequently used approach to conceal logic functionalities in order to prevent reverse engineering attacks on fabricated chips.
Emerging Technologies; Cryptography and Security
no code implementations • 3 Nov 2017 • Xiaotao Jia, Jianlei Yang, Zhaohao Wang, Yiran Chen, Hai Li, Weisheng Zhao
Bayesian inference is an effective approach for solving statistical learning problems especially with uncertainty and incompleteness.
no code implementations • ICLR 2018 • Wei Wen, Yuxiong He, Samyam Rajbhandari, Minjia Zhang, Wenhan Wang, Fang Liu, Bin Hu, Yiran Chen, Hai Li
This work aims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states and outputs.
no code implementations • 21 Aug 2017 • Linghao Song, Youwei Zhuo, Xuehai Qian, Hai Li, Yiran Chen
GRAPHR gains a speedup of 1.16x to 4.12x, and is 3.67x to 10.96x more energy efficient compared to a PIM-based architecture.
Distributed, Parallel, and Cluster Computing; Hardware Architecture
no code implementations • 27 May 2017 • Chang Song, Hsin-Pai Cheng, Huanrui Yang, Sicheng Li, Chunpeng Wu, Qing Wu, Hai Li, Yiran Chen
Our experiments show that different adversarial strengths, i.e., perturbation levels of adversarial examples, have different working zones to resist the attack.
1 code implementation • NeurIPS 2017 • Wei Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li
We mathematically prove the convergence of TernGrad under the assumption of a bound on gradients.
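A hedged sketch of the ternary quantization behind TernGrad: each gradient coordinate becomes s·sign(g) with probability |g|/s (where s = max|g|), and 0 otherwise, which keeps the quantizer unbiased in expectation. Names and the exact stochastic rounding details are simplified here:

```python
import numpy as np

def ternarize(g, rng):
    """TernGrad-style stochastic ternarization of a gradient vector.

    Each coordinate maps to {-s, 0, +s} with s = max|g|; keeping a
    coordinate with probability |g_i|/s makes the quantizer unbiased,
    i.e., E[ternarize(g)] = g, which underlies the convergence proof.
    """
    s = np.max(np.abs(g))
    if s == 0.0:
        return np.zeros_like(g)
    keep = rng.random(g.shape) < np.abs(g) / s
    return s * np.sign(g) * keep

rng = np.random.default_rng(0)
g = np.array([0.4, -0.2, 0.8])
samples = np.mean([ternarize(g, rng) for _ in range(20000)], axis=0)
print(np.round(samples, 1))  # close to the original gradient
```

Only the scale s and a ternary code per coordinate need to be communicated, which is where the gradient-traffic reduction comes from.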
3 code implementations • ICCV 2017 • Wei Wen, Cong Xu, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li
Moreover, Force Regularization better initializes the low-rank DNNs such that the fine-tuning can converge faster toward higher accuracy.
no code implementations • CVPR 2017 • Chunpeng Wu, Wei Wen, Tariq Afzal, Yongmei Zhang, Yiran Chen, Hai Li
Our DNN has 4.1M parameters, which is only 6.7% of AlexNet or 59% of GoogLeNet.
no code implementations • 3 Mar 2017 • Chaofei Yang, Qing Wu, Hai Li, Yiran Chen
A countermeasure is also designed to detect such poisoning attack methods by checking the loss of the target model.
3 code implementations • NeurIPS 2016 • Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li
SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN's evaluation.
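The structured sparsity in SSL comes from a group-Lasso-style penalty over structures (e.g., filters or channels): when a whole group's norm is driven to zero, the structure can be removed outright. A minimal sketch of the penalty and its group soft-threshold proximal step, with each row standing in for one group (an illustrative simplification, not the paper's exact regularizer):

```python
import numpy as np

def group_lasso_penalty(W):
    """Sum of L2 norms of the rows, treating each row as one group."""
    return np.sum(np.linalg.norm(W, axis=1))

def group_soft_threshold(W, tau):
    """Proximal step for the group penalty: shrink each group's norm by
    tau; groups whose norm falls below tau vanish entirely, which is
    what yields hardware-friendly structured (not scattered) sparsity."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return W * scale

W = np.array([[3.0, 4.0],    # group norm 5.0 -> shrunk but kept
              [0.3, 0.4]])   # group norm 0.5 -> zeroed as a whole group
print(group_soft_threshold(W, tau=1.0))
```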
1 code implementation • 4 Aug 2016 • Jongsoo Park, Sheng Li, Wei Wen, Ping Tak Peter Tang, Hai Li, Yiran Chen, Pradeep Dubey
Pruning CNNs in a way that increases inference speed often imposes specific sparsity structures, thus limiting the achievable sparsity levels.
no code implementations • 3 Apr 2016 • Wei Wen, Chunpeng Wu, Yandan Wang, Kent Nixon, Qing Wu, Mark Barnell, Hai Li, Yiran Chen
IBM TrueNorth chip uses digital spikes to perform neuromorphic computing and achieves ultrahigh execution parallelism and power efficiency.