no code implementations • 16 Mar 2023 • Xuzhe Zhang, Yuhao Wu, Jia Guo, Jerod M. Rasmussen, Thomas G. O'Connor, Hyagriv N. Simhan, Sonja Entringer, Pathik D. Wadhwa, Claudia Buss, Cristiane S. Duarte, Andrea Jackowski, Hai Li, Jonathan Posner, Andrew F. Laine, Yun Wang
Robust segmentation of infant brain MRI across multiple ages, modalities, and sites remains challenging due to the intrinsic heterogeneity caused by different MRI scanners, vendors, or acquisition sequences, as well as varying stages of neurodevelopment.
1 code implementation • 8 Feb 2023 • Eric Yeats, Frank Liu, Hai Li
Disentangled learning representations have promising utility in many applications, but they currently suffer from serious reliability issues.
no code implementations • 31 Jan 2023 • Xin Dong, Ruize Wu, Chao Xiong, Hai Li, Lei Cheng, Yong He, Shiyou Qian, Jian Cao, Linjian Mo
GDOD decomposes gradients into task-shared and task-conflict components explicitly and adopts a general update rule for avoiding interference across all task gradients.
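As a rough picture of such a decomposition, the sketch below projects each task gradient onto the mean-gradient direction (taken as the task-shared component) and keeps the orthogonal residual as the task-conflict component. This is an illustration only: the function name, the choice of shared direction, and the final update rule are assumptions, not GDOD's actual algorithm.

```python
import torch

def decompose_gradients(task_grads):
    """Illustrative decomposition: the component of each task gradient
    along the mean-gradient direction is treated as task-shared, and the
    orthogonal residual as task-conflict."""
    g_mean = torch.stack(task_grads).mean(dim=0)
    d = g_mean / (g_mean.norm() + 1e-12)       # assumed shared direction
    shared, conflict = [], []
    for g in task_grads:
        s = (g @ d) * d                        # projection onto shared direction
        shared.append(s)
        conflict.append(g - s)                 # orthogonal (conflicting) residual
    return shared, conflict

# toy usage: two tasks with flattened parameter gradients
g1, g2 = torch.randn(10), torch.randn(10)
shared, conflict = decompose_gradients([g1, g2])
update = sum(shared) / len(shared)             # one possible interference-free update
```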
no code implementations • 18 Jan 2023 • Jingchi Zhang, Huanrui Yang, Hai Li
We propose a new perspective on exploring the intrinsic diversity within a model architecture to build efficient DNN ensembles.
1 code implementation • 28 Nov 2022 • Tunhou Zhang, Mingyuan Ma, Feng Yan, Hai Li, Yiran Chen
In this work, we establish PIDS, a novel paradigm to jointly explore point interactions and point dimensions to serve semantic segmentation on point cloud data.
no code implementations • 11 Nov 2022 • Yuewei Yang, Jingwei Sun, Ang Li, Hai Li, Yiran Chen
In this work, we propose a novel method, FedStyle, to learn a more generalized global model by infusing local style information with local content information for contrastive learning, and to learn more personalized local models by inducing local style information for downstream tasks.
no code implementations • 6 Nov 2022 • Jixun Yao, Yi Lei, Qing Wang, Pengcheng Guo, Ziqian Ning, Lei Xie, Hai Li, Junhui Liu, Danming Xie
Background sound is an informative form of art that is helpful in providing a more immersive experience in real-application voice conversion (VC) scenarios.
1 code implementation • 28 Oct 2022 • Xingrui Yang, Hai Li, Hongjia Zhai, Yuhang Ming, Yuqian Liu, Guofeng Zhang
In this work, we present a dense tracking and mapping system named Vox-Fusion, which seamlessly fuses neural implicit representations with traditional volumetric fusion methods.
no code implementations • 2 Oct 2022 • Jörg Henkel, Hai Li, Anand Raghunathan, Mehdi B. Tahoori, Swagath Venkataramani, Xiaoxuan Yang, Georgios Zervakis
In this work, we highlight the synergistic nature of AxC and ML and elucidate the impact of AxC on designing efficient ML systems.
no code implementations • 30 Sep 2022 • Jianyi Zhang, Ang Li, Minxue Tang, Jingwei Sun, Xiang Chen, Fan Zhang, Changyou Chen, Yiran Chen, Hai Li
Based on this measure, we also design a computation-efficient client sampling strategy, such that the actively selected clients will generate a more class-balanced grouped dataset with theoretical guarantees.
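As a rough stand-in for such a strategy, the sketch below greedily selects the clients whose pooled label histogram stays closest to uniform; both the imbalance measure and the greedy rule are hypothetical illustrations, not the paper's method.

```python
import numpy as np

def select_clients(client_label_counts, k):
    """Greedily pick k clients so the pooled label histogram is as close
    to uniform as possible (illustrative measure and rule)."""
    pooled = np.zeros_like(client_label_counts[0], dtype=float)
    chosen, remaining = [], list(range(len(client_label_counts)))
    for _ in range(k):
        def imbalance(i):
            h = pooled + client_label_counts[i]
            p = h / h.sum()
            return np.square(p - 1.0 / len(p)).sum()  # distance to uniform
        best = min(remaining, key=imbalance)
        chosen.append(best)
        remaining.remove(best)
        pooled += client_label_counts[best]
    return chosen

# toy usage: 20 clients with non-IID label counts over 10 classes
counts = [np.random.multinomial(100, np.random.dirichlet(np.ones(10)))
          for _ in range(20)]
print(select_clients(counts, k=5))
```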
1 code implementation • 21 Sep 2022 • Eric Yeats, Frank Liu, David Womble, Hai Li
We present a self-supervised method to disentangle factors of variation in high-dimensional data that does not rely on prior knowledge of the underlying variation profile (e.g., no assumptions on the number or distribution of the individual latent variables to be extracted).
no code implementations • 9 Sep 2022 • Randolph Linderman, Jingyang Zhang, Nathan Inkawhich, Hai Li, Yiran Chen
Furthermore, we diagnose the classifier's performance at each level of the hierarchy, improving the explainability and interpretability of the model's predictions.
no code implementations • 8 Sep 2022 • Minxue Tang, Jianyi Zhang, Mingyuan Ma, Louis DiValentin, Aolin Ding, Amin Hassanzadeh, Hai Li, Yiran Chen
Adversarial Training (AT) has been proven to be an effective method of introducing strong adversarial robustness into deep neural networks.
no code implementations • 26 Aug 2022 • Ximing Qiao, Hai Li
We consider learning and compositionality as the key mechanisms towards simulating human-like intelligence.
no code implementations • 23 Aug 2022 • Matthew Inkawhich, Nathan Inkawhich, Hai Li, Yiran Chen
Current state-of-the-art object proposal networks are trained with a closed-world assumption, meaning they learn to only detect objects of the training classes.
no code implementations • 21 Aug 2022 • Hai Li, Xingrui Yang, Hongjia Zhai, Yuqian Liu, Hujun Bao, Guofeng Zhang
Virtual content creation and interaction play an important role in modern 3D applications such as AR and VR.
2 code implementations • 14 Jul 2022 • Tunhou Zhang, Dehua Cheng, Yuchen He, Zhengxing Chen, Xiaoliang Dai, Liang Xiong, Feng Yan, Hai Li, Yiran Chen, Wei Wen
To overcome the data multi-modality and architecture heterogeneity challenges in the recommendation domain, NASRec establishes a large supernet (i.e., search space) to search the full architectures.
1 code implementation • 21 Feb 2022 • Jingyang Zhang, Yiran Chen, Hai Li
Adversarial Training (AT) is crucial for obtaining deep neural networks that are robust to adversarial attacks, yet recent works found that it could also make models more vulnerable to privacy attacks.
no code implementations • 2 Jan 2022 • Wendong Gan, Bolong Wen, Ying Yan, Haitao Chen, Zhichao Wang, Hongqiang Du, Lei Xie, Kaixuan Guo, Hai Li
Specifically, a prosody vector is first extracted from a pre-trained VQ-Wav2Vec model, in which rich prosody information is embedded while most speaker and environment information is effectively removed by quantization.
1 code implementation • NeurIPS 2021 • Jingwei Sun, Ang Li, Louis DiValentin, Amin Hassanzadeh, Yiran Chen, Hai Li
Furthermore, we derive a certified robustness guarantee against model poisoning attacks and a convergence guarantee to FedAvg after applying our FL-WBC.
no code implementations • 23 Oct 2021 • Chang Song, Riya Ranjan, Hai Li
After quantization, the computation cost is greatly reduced, and the quantized models are more hardware-friendly with acceptable accuracy loss.
no code implementations • 10 Oct 2021 • Huanrui Yang, Hongxu Yin, Pavlo Molchanov, Hai Li, Jan Kautz
On ImageNet-1K, we prune the DEIT-Base (Touvron et al., 2021) model to a 2.6x FLOPs reduction, 5.1x parameter reduction, and 1.9x run-time speedup with only 0.07% loss in accuracy.
no code implementations • 29 Sep 2021 • Tunhou Zhang, Mingyuan Ma, Feng Yan, Hai Li, Yiran Chen
MAKPConv employs a depthwise kernel to reduce resource consumption and re-calibrates the contribution of kernel points towards each neighbor point via Neighbor-Kernel attention to improve representation power.
no code implementations • 3 Jul 2021 • Binghui Wang, Jiayi Guo, Ang Li, Yiran Chen, Hai Li
Existing representation learning methods on graphs have achieved state-of-the-art performance on various graph-related tasks such as node classification, link prediction, etc.
1 code implementation • CVPR 2021 • Jingwei Sun, Ang Li, Binghui Wang, Huanrui Yang, Hai Li, Yiran Chen
The key idea of our defense is learning to perturb data representation such that the quality of the reconstructed data is severely degraded, while FL performance is maintained.
no code implementations • 16 Jun 2021 • Zhichao Wang, Xinyong Zhou, Fengyu Yang, Tao Li, Hongqiang Du, Lei Xie, Wendong Gan, Haitao Chen, Hai Li
Specifically, prosodic features are used to explicitly model prosody, while a VAE and a reference encoder are used to implicitly model prosody, taking the Mel spectrum and bottleneck features as input, respectively.
1 code implementation • 7 Jun 2021 • Jingyang Zhang, Nathan Inkawhich, Randolph Linderman, Yiran Chen, Hai Li
We then propose Mixture Outlier Exposure (MixOE), which mixes ID data and training outliers to expand the coverage of different OOD granularities, and trains the model such that the prediction confidence linearly decays as the input transitions from ID to OOD.
Medical Image Classification • Out-of-Distribution Detection • +1
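The mixing described in the MixOE excerpt above maps naturally onto a mixup-style objective: interpolate an ID batch with outliers and let the target confidence decay linearly with the mixing weight. A minimal sketch under that reading, with illustrative hyperparameters:

```python
import torch
import torch.nn.functional as F

def mixoe_loss(model, x_id, y_id, x_ood, num_classes, alpha=1.0):
    """Mixup-style sketch of the idea above: targets interpolate between
    the one-hot label and the uniform distribution, so predicted
    confidence decays linearly as inputs move from ID to OOD."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x_id + (1.0 - lam) * x_ood   # assumes matching batch shapes
    logits = model(x_mix)
    onehot = F.one_hot(y_id, num_classes).float()
    uniform = torch.full_like(onehot, 1.0 / num_classes)
    target = lam * onehot + (1.0 - lam) * uniform
    return -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```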
no code implementations • 21 Apr 2021 • Haowen Fang, Brady Taylor, Ziru Li, Zaidao Mei, Hai Li, Qinru Qiu
This circuit implementation of the neuron model is simulated to demonstrate its ability to react to temporal spiking patterns with an adaptive threshold.
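In software terms, the described behavior resembles a leaky integrate-and-fire neuron whose threshold jumps after every output spike and then relaxes back toward a resting value. The sketch below illustrates that dynamic with invented constants; it is not a model of the paper's circuit.

```python
import numpy as np

def lif_adaptive(spikes_in, v_decay=0.9, th_decay=0.95,
                 th_base=1.0, th_jump=0.5, w=0.6):
    """Leaky integrate-and-fire neuron with an adaptive threshold:
    each output spike raises the threshold, which decays back toward
    its resting value (all constants illustrative)."""
    v, th, out = 0.0, th_base, []
    for s in spikes_in:
        v = v_decay * v + w * s                   # leaky integration of input
        fired = v >= th
        out.append(int(fired))
        if fired:
            v = 0.0                               # reset membrane potential
            th += th_jump                         # adaptation: raise threshold
        th = th_base + th_decay * (th - th_base)  # threshold relaxes back
    return out

print(lif_adaptive(np.random.binomial(1, 0.4, size=50)))
```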
no code implementations • 5 Apr 2021 • Qicong Xie, Xiaohai Tian, Guanghou Liu, Kun Song, Lei Xie, Zhiyong Wu, Hai Li, Song Shi, Haizhou Li, Fen Hong, Hui Bu, Xin Xu
The challenge consists of two tracks, namely few-shot track and one-shot track, where the participants are required to clone multiple target voices with 100 and 5 samples respectively.
no code implementations • CVPR 2022 • Minxue Tang, Xuefei Ning, Yitu Wang, Jingwei Sun, Yu Wang, Hai Li, Yiran Chen
In this work, we propose FedCor -- an FL framework built on a correlation-based client selection strategy, to boost the convergence rate of FL.
no code implementations • 17 Mar 2021 • Matthew Inkawhich, Nathan Inkawhich, Eric Davis, Hai Li, Yiran Chen
Over recent years, a myriad of novel convolutional network architectures have been developed to advance state-of-the-art performance on challenging recognition tasks.
no code implementations • 17 Mar 2021 • Nathan Inkawhich, Kevin J Liang, Jingyang Zhang, Huanrui Yang, Hai Li, Yiran Chen
During the online phase of the attack, we then leverage representations of highly related proxy classes from the whitebox distribution to fool the blackbox model into predicting the desired target class.
no code implementations • 2 Mar 2021 • Eren Kurshan, Hai Li, Mingoo Seok, Yuan Xie
Over the last decade, artificial intelligence has found many application areas in society.
1 code implementation • ICLR 2021 • Huanrui Yang, Lin Duan, Yiran Chen, Hai Li
Mixed-precision quantization can potentially achieve the optimal tradeoff between the performance and compression rate of deep neural networks, and thus has been widely investigated.
no code implementations • 19 Jan 2021 • Ximing Qiao, Yuhua Bai, Siping Hu, Ang Li, Yiran Chen, Hai Li
The framework shows that the subset selection process, a deciding factor for subset aggregation methods, can be viewed as a code design problem.
no code implementations • 29 Dec 2020 • Chang Song, Elias Fallon, Hai Li
Neural networks are becoming deeper and more computation-intensive.
4 code implementations • 8 Dec 2020 • Jingwei Sun, Ang Li, Binghui Wang, Huanrui Yang, Hai Li, Yiran Chen
In this work, we show our key observation that the data representation leakage from gradients is the essential cause of privacy leakage in FL.
no code implementations • 8 Dec 2020 • Binghui Wang, Ang Li, Hai Li, Yiran Chen
However, existing FL methods 1) perform poorly when data across clients are non-IID, 2) cannot handle data with new label domains, and 3) cannot leverage unlabeled data, while all these issues naturally happen in real-world graph-based problems.
no code implementations • 30 Nov 2020 • Hsin-Pai Cheng, Feng Liang, Meng Li, Bowen Cheng, Feng Yan, Hai Li, Vikas Chandra, Yiran Chen
We use ScaleNAS to create high-resolution models for two different tasks, ScaleNet-P for human pose estimation and ScaleNet-S for semantic segmentation.
Ranked #5 on Multi-Person Pose Estimation on COCO test-dev
no code implementations • 26 Nov 2020 • Zhiyao Xie, Hai Li, Xiaoqing Xu, Jiang Hu, Yiran Chen
The IR drop constraint is a fundamental requirement enforced in almost all chip designs.
3 code implementations • NeurIPS 2020 • Huanrui Yang, Jingyang Zhang, Hongliang Dong, Nathan Inkawhich, Andrew Gardner, Andrew Touchet, Wesley Wilkes, Heath Berry, Hai Li
The process is hard, often requires models with large capacity, and suffers from significant loss on clean data accuracy.
no code implementations • 1 Sep 2020 • Binghui Wang, Tianxiang Zhou, Minhua Lin, Pan Zhou, Ang Li, Meng Pang, Cai Fu, Hai Li, Yiran Chen
Next, we reformulate the evasion attack against GNNs in terms of calculating label influence on LP, which is applicable to multi-layer GNNs and does not require knowledge of the GNN model.
no code implementations • 1 Sep 2020 • Houxiang Fan, Binghui Wang, Pan Zhou, Ang Li, Meng Pang, Zichuan Xu, Cai Fu, Hai Li, Yiran Chen
Link prediction in dynamic graphs (LPDG) is an important research problem that has diverse applications such as online recommendations, studies on disease contagion, organizational studies, etc.
1 code implementation • 7 Aug 2020 • Ang Li, Jingwei Sun, Binghui Wang, Lin Duan, Sicheng Li, Yiran Chen, Hai Li
Rather than learning a shared global model in classic federated learning, each client learns a personalized model via LotteryFL; the communication cost can be significantly reduced due to the compact size of lottery networks.
no code implementations • 8 Jul 2020 • Hsin-Pai Cheng, Tunhou Zhang, Yixing Zhang, Shi-Yu Li, Feng Liang, Feng Yan, Meng Li, Vikas Chandra, Hai Li, Yiran Chen
To preserve graph correlation information in encoding, we propose NASGEM which stands for Neural Architecture Search via Graph Embedding Method.
no code implementations • 12 Jun 2020 • Chaofei Yang, Lei Ding, Yiran Chen, Hai Li
On the one hand, the quality of the synthesized faces is reduced with more visual artifacts such that the synthesized faces are more obviously fake or less convincing to human observers.
1 code implementation • ICML 2020 • Shi-Yu Li, Edward Hanson, Hai Li, Yiran Chen
Although state-of-the-art (SOTA) CNNs achieve outstanding performance on various tasks, their high computation demand and massive number of parameters make it difficult to deploy these SOTA CNNs onto resource-constrained devices.
1 code implementation • 20 Apr 2020 • Huanrui Yang, Minxue Tang, Wei Wen, Feng Yan, Daniel Hu, Ang Li, Hai Li, Yiran Chen
In this work, we propose SVD training, the first method to explicitly achieve low-rank DNNs during training without applying SVD on every step.
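One way to picture such explicit low-rank training is to keep each weight in factored form W = U diag(s) V^T and regularize the factors, for example with an orthogonality penalty on U and V plus an L1 penalty on the singular values s. The sketch below follows that idea; the module design and regularizer weights are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class SVDLinear(nn.Module):
    """Linear layer trained in factored form W = U diag(s) V^T, so low
    rank can be induced by sparsifying s (illustrative design)."""
    def __init__(self, in_f, out_f, rank):
        super().__init__()
        self.U = nn.Parameter(torch.randn(out_f, rank) * 0.1)
        self.s = nn.Parameter(torch.ones(rank))
        self.V = nn.Parameter(torch.randn(in_f, rank) * 0.1)

    def forward(self, x):
        return x @ (self.V * self.s) @ self.U.t()   # x V diag(s) U^T

    def regularizer(self, ortho_w=1.0, sparse_w=1e-3):
        eye = torch.eye(self.U.shape[1])
        ortho = ((self.U.t() @ self.U - eye) ** 2).sum() \
              + ((self.V.t() @ self.V - eye) ** 2).sum()
        return ortho_w * ortho + sparse_w * self.s.abs().sum()

layer = SVDLinear(128, 64, rank=32)
y = layer(torch.randn(8, 128))
loss = y.pow(2).mean() + layer.regularizer()   # toy task loss + regularizer
loss.backward()
```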
2 code implementations • ECCV 2020 • Wei Wen, Hanxiao Liu, Hai Li, Yiran Chen, Gabriel Bender, Pieter-Jan Kindermans
First we train N random architectures to generate N (architecture, validation accuracy) pairs and use them to train a regression model that predicts accuracy based on the architecture.
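The two-step recipe above fits in a few lines once architectures have a fixed-length encoding. A self-contained toy sketch, where random integer encodings and a synthetic accuracy signal stand in for real architectures and training runs:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Step 1 (stub): N (architecture, validation accuracy) pairs; each
# architecture is a vector of categorical op choices per layer.
N, num_layers, num_ops = 100, 8, 5
archs = rng.integers(0, num_ops, size=(N, num_layers))
val_acc = 0.7 + 0.02 * archs.mean(axis=1) + rng.normal(0, 0.01, N)  # toy signal

# Step 2: fit a regressor on the pairs, then rank a large pool of
# unseen candidates by predicted accuracy.
predictor = GradientBoostingRegressor().fit(archs, val_acc)
pool = rng.integers(0, num_ops, size=(10000, num_layers))
best = pool[np.argmax(predictor.predict(pool))]
print("most promising candidate:", best)
```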
1 code implementation • 21 Nov 2019 • Tunhou Zhang, Hsin-Pai Cheng, Zhenwen Li, Feng Yan, Chengyu Huang, Hai Li, Yiran Chen
Specifically, both ShrinkCNN and ShrinkRNN are crafted within 1.5 GPU hours, which is 7.2x and 6.7x faster than the crafting time of SOTA CNN and RNN models, respectively.
no code implementations • 25 Oct 2019 • Jingchi Zhang, Jonathan Huang, Michael Deisher, Hai Li, Yiran Chen
Recently, deep neural networks (DNNs) have been widely used in the speaker recognition area.
1 code implementation • NeurIPS 2019 • Ximing Qiao, Yukun Yang, Hai Li
An original trigger used by an attacker to build the backdoored model represents only a point in the space.
no code implementations • 25 Sep 2019 • Chunpeng Wu, Wei Wen, Yiran Chen, Hai Li
As such, training our GAN architecture requires far fewer high-quality images, together with a small number of additional low-quality images.
1 code implementation • 18 Sep 2019 • Jingyang Zhang, Huanrui Yang, Fan Chen, Yitu Wang, Hai Li
However, the power hungry analog-to-digital converters (ADCs) prevent the practical deployment of ReRAM-based DNN accelerators on end devices with limited chip area and power budget.
1 code implementation • 17 Sep 2019 • Juncheng Shen, Juzheng Liu, Yiran Chen, Hai Li
When using MoLe for the VGG-16 network on the CIFAR dataset, the computational overhead is only 9% and the data transmission overhead is 5.12%.
no code implementations • 13 Sep 2019 • Qing Yang, Jiachen Mao, Zuoguan Wang, Hai Li
In addition to conventional compression techniques, e.g., weight pruning and quantization, removing unimportant activations can reduce the amount of data communication and the computation cost.
no code implementations • 12 Sep 2019 • Chang Song, Zuoguan Wang, Hai Li
Recent research studies have revealed that neural networks are vulnerable to adversarial attacks.
1 code implementation • ICLR 2020 • Huanrui Yang, Wei Wen, Hai Li
Inspired by the Hoyer measure (the ratio between L1 and L2 norms) used in traditional compressed sensing problems, we present DeepHoyer, a set of sparsity-inducing regularizers that are both differentiable almost everywhere and scale-invariant.
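Taking the stated definition at face value, the regularizer for a single weight tensor is just its L1/L2 ratio. A minimal PyTorch sketch (how the term is weighted and applied per layer in DeepHoyer itself may differ):

```python
import torch

def hoyer(w, eps=1e-8):
    """Hoyer measure of a weight tensor: the ratio between its L1 and L2
    norms. Scale-invariant: hoyer(c * w) == hoyer(w) for any c != 0,
    unlike a plain L1 penalty."""
    return w.abs().sum() / (w.norm(p=2) + eps)

w = torch.randn(64, 128, requires_grad=True)
reg = hoyer(w)
reg.backward()          # gradients push small entries toward exact zero
print(float(reg))       # in training, add e.g. 1e-4 * hoyer(w) to the task loss
```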
no code implementations • 19 Jun 2019 • Qing Yang, Wei Wen, Zuoguan Wang, Hai Li
With the rapid scaling up of deep neural networks (DNNs), extensive research studies on network model compression such as weight pruning have been performed for improving deployment efficiency.
1 code implementation • 19 Jun 2019 • Hsin-Pai Cheng, Tunhou Zhang, Yukun Yang, Feng Yan, Shi-Yu Li, Harris Teague, Hai Li, Yiran Chen
Designing neural architectures for edge devices is subject to constraints of accuracy, inference latency, and computational cost.
1 code implementation • ICLR 2020 • Wei Wen, Feng Yan, Yiran Chen, Hai Li
Our AutoGrow is efficient.
1 code implementation • 28 May 2019 • Matthew Inkawhich, Yiran Chen, Hai Li
In these snooping threat models, the adversary does not have the ability to interact with the target agent's environment, and can only eavesdrop on the action and reward signals being exchanged between agent and environment.
no code implementations • ICLR 2019 • Qing Yang, Wei Wen, Zuoguan Wang, Yiran Chen, Hai Li
With the rapid scaling up of deep neural networks (DNNs), extensive research studies on network model compression, such as weight pruning, have been performed for efficient deployment.
no code implementations • 7 Jan 2019 • Linghao Song, Jiachen Mao, Youwei Zhuo, Xuehai Qian, Hai Li, Yiran Chen
In this paper, inspired by recent work in machine learning systems, we propose a solution HyPar to determine layer-wise parallelism for deep neural network training with an array of DNN accelerators.
no code implementations • 6 Dec 2018 • Jingyang Zhang, Hsin-Pai Cheng, Chunpeng Wu, Hai Li, Yiran Chen
We intuitively and empirically demonstrate the rationality of our method in reducing the search space.
no code implementations • ICLR 2019 • Nathan Inkawhich, Matthew Inkawhich, Yiran Chen, Hai Li
The success of deep learning research has catapulted deep models into production systems that our society is becoming increasingly dependent on, especially in the image and video domains.
no code implementations • 27 Nov 2018 • Hsin-Pai Cheng, Patrick Yu, Haojing Hu, Feng Yan, Shi-Yu Li, Hai Li, Yiran Chen
Distributed learning systems have enabled training large-scale models over large amounts of data in significantly shorter time.
1 code implementation • NIPS Workshop CDNNRIA 2018 • Hsin-Pai Cheng, Yuanjun Huang, Xuyang Guo, Yifei HUANG, Feng Yan, Hai Li, Yiran Chen
Thus, judiciously selecting different precisions for different layers/structures can potentially produce more efficient models than traditional quantization methods by striking a better balance between accuracy and compression rate.
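For concreteness, the sketch below applies symmetric uniform quantization with a different bit width per layer; the layer names and bit assignment are invented for illustration, and real methods pick them by sensitivity analysis or search.

```python
import torch

def quantize_uniform(w, bits):
    """Symmetric uniform quantization of a tensor to the given bit width
    (a minimal sketch, not a production fake-quant kernel)."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    return torch.round(w / scale).clamp(-qmax, qmax) * scale

# hypothetical per-layer precision: sensitive layers keep more bits,
# tolerant layers are compressed harder
layer_bits = {"conv1": 8, "conv2": 4, "fc": 2}
weights = {name: torch.randn(32, 32) for name in layer_bits}
quantized = {name: quantize_uniform(w, layer_bits[name])
             for name, w in weights.items()}
```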
1 code implementation • 20 Sep 2018 • Juncheng Shen, Juzheng Liu, Yiran Chen, Hai Li
When using MoLe for the VGG-16 network on the CIFAR dataset, the computational overhead is only 9% and the data transmission overhead is 5.12%.
1 code implementation • 5 Jun 2018 • Xin Liu, Huanrui Yang, Ziwei Liu, Linghao Song, Hai Li, Yiran Chen
Successful realization of DPatch also illustrates the intrinsic vulnerability of the modern detector architectures to such patch-based adversarial attacks.
1 code implementation • 21 May 2018 • Wei Wen, Yandan Wang, Feng Yan, Cong Xu, Chunpeng Wu, Yiran Chen, Hai Li
It becomes an open question whether escaping sharp minima can improve the generalization.
no code implementations • ICLR 2018 • Wei Wen, Yuxiong He, Samyam Rajbhandari, Minjia Zhang, Wenhan Wang, Fang Liu, Bin Hu, Yiran Chen, Hai Li
This work aims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states and outputs.
no code implementations • 21 Aug 2017 • Linghao Song, Youwei Zhuo, Xuehai Qian, Hai Li, Yiran Chen
GRAPHR gains a speedup of 1.16x to 4.12x, and is 3.67x to 10.96x more energy-efficient compared to a PIM-based architecture.
Distributed, Parallel, and Cluster Computing • Hardware Architecture
no code implementations • 27 May 2017 • Chang Song, Hsin-Pai Cheng, Huanrui Yang, Sicheng Li, Chunpeng Wu, Qing Wu, Hai Li, Yiran Chen
Our experiments show that different adversarial strengths, i.e., perturbation levels of adversarial examples, have different working zones to resist the attack.
1 code implementation • NeurIPS 2017 • Wei Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li
We mathematically prove the convergence of TernGrad under the assumption of a bound on gradients.
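At its core, the ternarization can be sketched as stochastic rounding of each gradient to {-s, 0, +s} with s = max|g|, chosen so the estimate is unbiased in expectation; the sketch below omits details of the full method such as layer-wise scaling and gradient clipping.

```python
import torch

def ternarize(g, eps=1e-12):
    """Stochastic ternarization: each entry becomes -s, 0, or +s with
    s = max|g| and keep-probability |g|/s, so E[ternarize(g)] = g."""
    s = g.abs().max()
    mask = torch.bernoulli(g.abs() / (s + eps))   # keep with prob |g|/s
    return s * g.sign() * mask

g = torch.randn(1000)
est = torch.stack([ternarize(g) for _ in range(2000)]).mean(0)
print((est - g).abs().mean())   # small: the estimator is unbiased
```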
3 code implementations • ICCV 2017 • Wei Wen, Cong Xu, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li
Moreover, Force Regularization better initializes the low-rank DNNs such that the fine-tuning can converge faster toward higher accuracy.
no code implementations • CVPR 2017 • Chunpeng Wu, Wei Wen, Tariq Afzal, Yongmei Zhang, Yiran Chen, Hai Li
Our DNN has 4.1M parameters, which is only 6.7% of AlexNet or 59% of GoogLeNet.
no code implementations • 3 Mar 2017 • Chaofei Yang, Qing Wu, Hai Li, Yiran Chen
A countermeasure is also designed to detect such poisoning attack methods by checking the loss of the target model.
no code implementations • 11 Feb 2017 • Yandan Wang, Wei Wen, Beiye Liu, Donald Chiarulli, Hai Li
Following rank clipping, group connection deletion further reduces the routing area of LeNet and ConvNet to 8.1% and 52.06%, respectively.
no code implementations • 7 Jan 2017 • Yandan Wang, Wei Wen, Linghao Song, Hai Li
Brain inspired neuromorphic computing has demonstrated remarkable advantages over traditional von Neumann architecture for its high energy efficiency and parallel data processing.
3 code implementations • NeurIPS 2016 • Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li
SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNNs evaluation.
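A standard way to realize such structured sparsity is a group-lasso penalty over structural groups (e.g., whole conv filters) added to the task loss; driving a group's norm to zero removes that structure entirely. A sketch with an illustrative coefficient:

```python
import torch

def group_lasso_filters(weight):
    """Group lasso over the output filters of a conv weight with shape
    (out_ch, in_ch, kH, kW): the sum of per-filter L2 norms. Zeroing a
    whole filter's norm removes it, yielding structured sparsity."""
    return weight.reshape(weight.shape[0], -1).norm(p=2, dim=1).sum()

conv_w = torch.randn(64, 32, 3, 3, requires_grad=True)
reg = 1e-4 * group_lasso_filters(conv_w)   # added to the task loss in training
reg.backward()
```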
1 code implementation • 4 Aug 2016 • Jongsoo Park, Sheng Li, Wei Wen, Ping Tak Peter Tang, Hai Li, Yiran Chen, Pradeep Dubey
Pruning CNNs in a way that increases inference speed often imposes specific sparsity structures, thus limiting the achievable sparsity levels.
no code implementations • 3 Apr 2016 • Wei Wen, Chunpeng Wu, Yandan Wang, Kent Nixon, Qing Wu, Mark Barnell, Hai Li, Yiran Chen
The IBM TrueNorth chip uses digital spikes to perform neuromorphic computing and achieves ultrahigh execution parallelism and power efficiency.