no code implementations • ICML 2020 • Liu Liu, Lei Deng, Zhaodong Chen, Yuke Wang, Shuangchen Li, Jingwei Zhang, Yihua Yang, Zhenyu Gu, Yufei Ding, Yuan Xie
Deep Neural Networks (DNNs) promise high-quality results on machine learning tasks, but their memory-bound and compute-bound execution patterns make it challenging to meet stringent latency requirements and energy constraints.
1 code implementation • 23 Sep 2022 • Boyuan Feng, Tianqi Tang, Yuke Wang, Zhaodong Chen, Zheng Wang, Shu Yang, Yuan Xie, Yufei Ding
In this paper, we propose Faith, an efficient framework for transformer verification on GPUs.
no code implementations • 28 Feb 2022 • Zhaodong Chen, Yuying Quan, Zheng Qu, Liu Liu, Yufei Ding, Yuan Xie
We evaluate the 1:2 and 2:4 sparsity under different configurations and achieve 1.27~1.89x speedups over the full-attention mechanism.
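As a rough illustration of the N:M patterns evaluated above, the sketch below (a hypothetical toy, not the paper's implementation) applies 2:4 magnitude pruning along the last axis: in every contiguous group of four values, the two largest-magnitude entries are kept and the other two are zeroed, which is the layout that sparse tensor-core hardware accelerates.

```python
import numpy as np

def prune_2_4(scores: np.ndarray) -> np.ndarray:
    """Apply 2:4 structured sparsity along the last axis: in every
    contiguous group of 4 values, keep the 2 largest-magnitude
    entries and zero the other 2."""
    n = scores.shape[-1]
    assert n % 4 == 0, "last dim must be a multiple of 4"
    groups = scores.reshape(-1, 4)
    # indices of the 2 smallest-magnitude entries in each group of 4
    drop = np.argsort(np.abs(groups), axis=1)[:, :2]
    pruned = groups.copy()
    np.put_along_axis(pruned, drop, 0.0, axis=1)
    return pruned.reshape(scores.shape)

attn = np.array([[0.9, -0.1, 0.4, 0.05, -0.7, 0.2, 0.03, 0.6]])
print(prune_2_4(attn))  # keeps 0.9, 0.4, -0.7, 0.6; zeros the rest
```

A 1:2 variant would keep one entry per group of two; the appeal of both patterns is that the nonzero count per group is fixed, so the pruned matrix maps onto a dense, regular compute schedule.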
no code implementations • 21 Oct 2021 • Liu Liu, Zheng Qu, Zhaodong Chen, Yufei Ding, Yuan Xie
We demonstrate that the sparse patterns are dynamic, depending on input sequences.
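One way to see such input-dependent sparsity is a toy top-k attention mask, where which positions survive is determined by the query and key values themselves rather than by a fixed layout (an illustrative sketch, not this paper's method):

```python
import numpy as np

rng = np.random.default_rng(1)

def topk_attention_mask(q: np.ndarray, k_mat: np.ndarray, keep: int = 2):
    """Per-query boolean mask keeping only the top-`keep` scoring keys.
    The surviving pattern depends on the inputs, not on a fixed layout."""
    scores = q @ k_mat.T
    top = np.argsort(scores, axis=1)[:, -keep:]
    mask = np.zeros_like(scores, dtype=bool)
    np.put_along_axis(mask, top, True, axis=1)
    return mask

# two different input sequences typically produce different masks
a = topk_attention_mask(rng.normal(size=(4, 8)), rng.normal(size=(6, 8)))
b = topk_attention_mask(rng.normal(size=(4, 8)), rng.normal(size=(6, 8)))
print(a.astype(int))
print(b.astype(int))
```

This dynamism is exactly what makes such sparsity hard to exploit on hardware: the mask must be computed at runtime, so any speedup has to outweigh the cost of deriving the pattern per input.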
no code implementations • 29 Sep 2021 • Zhaodong Chen, Liu Liu, Yuying Quan, Zheng Qu, Yufei Ding, Yuan Xie
Transformers are becoming the mainstream solution for a wide range of tasks in NLP and computer vision.
no code implementations • 25 Jul 2021 • Ling Liang, Zheng Qu, Zhaodong Chen, Fengbin Tu, Yujie Wu, Lei Deng, Guoqi Li, Peng Li, Yuan Xie
Although spiking neural networks (SNNs) benefit from bio-plausible neural modeling, their low accuracy under common local synaptic plasticity learning rules limits their application in many practical tasks.
no code implementations • 1 Jan 2021 • Zhaodong Chen, Zhao WeiQin, Lei Deng, Guoqi Li, Yuan Xie
Moreover, analysis of the activations' mean in the forward pass reveals that the self-normalization property weakens as each layer's fan-in grows, which explains the performance degradation on large benchmarks like ImageNet.
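A minimal experiment for probing this effect might measure the post-activation mean of a single SELU layer under LeCun-normal initialization at different fan-ins (an illustrative sketch with assumed settings; the paper's actual analysis is analytical):

```python
import numpy as np

def selu(x: np.ndarray) -> np.ndarray:
    # SELU constants from Klambauer et al., "Self-Normalizing Neural Networks"
    alpha, scale = 1.6732632423543772, 1.0507009873554805
    return scale * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

rng = np.random.default_rng(0)

def post_activation_mean(fan_in: int, n_units: int = 2000) -> float:
    """Mean of SELU outputs for one layer with LeCun-normal weights
    (std = 1/sqrt(fan_in)) and standard-normal inputs."""
    w = rng.normal(0.0, np.sqrt(1.0 / fan_in), size=(n_units, fan_in))
    x = rng.normal(0.0, 1.0, size=fan_in)
    return float(selu(w @ x).mean())

for fan_in in (16, 256, 4096):
    print(fan_in, post_activation_mean(fan_in))
```

Repeating the measurement over many input draws and stacked layers would show how far the activation statistics drift from the zero-mean, unit-variance fixed point as fan-in changes.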
1 code implementation • 1 Jan 2020 • Zhaodong Chen, Lei Deng, Bangyan Wang, Guoqi Li, Yuan Xie
Powered by our metric and framework, we analyze a broad set of initialization schemes, normalization methods, and network structures.
no code implementations • ICLR 2019 • Zhaodong Chen, Lei Deng, Guoqi Li, Jiawei Sun, Xing Hu, Ling Liang, Yufei Ding, Yuan Xie
We identify that effectiveness demands low data correlation, while efficiency demands a regular execution pattern.
2 code implementations • CVPR 2019 • Wenzhao Zheng, Zhaodong Chen, Jiwen Lu, Jie Zhou
This paper presents a hardness-aware deep metric learning (HDML) framework.
Ranked #30 on Metric Learning on CUB-200-2011 (using extra training data)
no code implementations • 25 Oct 2018 • Zhaodong Chen, Lei Deng, Guoqi Li, Jiawei Sun, Xing Hu, Xin Ma, Yuan Xie
In this paper, we propose alleviating this problem through sampling only a small fraction of data for normalization at each iteration.
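The idea can be sketched as a batch-normalization variant that estimates the batch statistics from only a sampled subset of rows (a hypothetical illustration with assumed parameter names, not the paper's exact implementation):

```python
import numpy as np

rng = np.random.default_rng(42)

def sampled_batchnorm(x: np.ndarray, sample_frac: float = 0.1,
                      eps: float = 1e-5) -> np.ndarray:
    """Normalize a batch of shape (N, C) using mean/variance estimated
    from a sampled fraction of the rows instead of the full batch."""
    n = x.shape[0]
    k = max(1, int(n * sample_frac))
    idx = rng.choice(n, size=k, replace=False)
    mu = x[idx].mean(axis=0)       # per-channel mean from the sample
    var = x[idx].var(axis=0)       # per-channel variance from the sample
    return (x - mu) / np.sqrt(var + eps)

x = rng.normal(3.0, 2.0, size=(1024, 8))
y = sampled_batchnorm(x, sample_frac=0.05)
print(y.mean(axis=0))  # per-channel means close to zero (approximately)
```

Because the sample statistics concentrate around the full-batch statistics, a small fraction of the data can yield nearly the same normalization at a fraction of the reduction cost.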