1 code implementation • 5 Jun 2018 • Xin Liu, Huanrui Yang, Ziwei Liu, Linghao Song, Hai Li, Yiran Chen
The successful realization of DPatch also illustrates the intrinsic vulnerability of modern detector architectures to such patch-based adversarial attacks.
1 code implementation • 8 Oct 2021 • Linghao Song, Yuze Chi, Jason Cong
In this work, we present PYXIS, a performance dataset for specialized accelerators on sparse data.
no code implementations • 7 Jan 2017 • Yandan Wang, Wei Wen, Linghao Song, Hai Li
Brain-inspired neuromorphic computing has demonstrated remarkable advantages over the traditional von Neumann architecture owing to its high energy efficiency and parallel data processing.
no code implementations • 7 Jan 2019 • Linghao Song, Jiachen Mao, Youwei Zhuo, Xuehai Qian, Hai Li, Yiran Chen
In this paper, inspired by recent work in machine learning systems, we propose a solution HyPar to determine layer-wise parallelism for deep neural network training with an array of DNN accelerators.
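The entry above describes choosing a parallelism strategy per layer across an array of DNN accelerators. As an illustrative sketch only (not HyPar's actual algorithm, and with made-up cost numbers), the per-layer choice between data and model parallelism can be framed as a shortest-path problem over layers, where each layer has an intra-layer cost per mode and switching modes between adjacent layers incurs a transition (communication) cost:

```python
# Hypothetical sketch: pick a parallelism mode per layer (0 = data parallel,
# 1 = model parallel) minimizing total cost via dynamic programming.
# Cost tables are illustrative assumptions, not measurements from the paper.

def choose_parallelism(intra_cost, trans_cost):
    """intra_cost[i][m]: cost of running layer i in mode m.
    trans_cost[p][m]: cost of switching from mode p to mode m between layers.
    Returns (minimum total cost, list of chosen modes per layer)."""
    best = [(intra_cost[0][m], [m]) for m in range(2)]  # DP state per mode
    for i in range(1, len(intra_cost)):
        new_best = []
        for m in range(2):
            # extend the cheaper of the two predecessor assignments
            cands = [(best[p][0] + trans_cost[p][m] + intra_cost[i][m],
                      best[p][1] + [m]) for p in range(2)]
            new_best.append(min(cands, key=lambda c: c[0]))
        best = new_best
    return min(best, key=lambda c: c[0])

# Example: 3 layers, switching modes costs 2
cost, modes = choose_parallelism([[4, 6], [5, 2], [3, 3]], [[0, 2], [2, 0]])
# cost == 11, modes == [0, 1, 1]
```

The dynamic program keeps only the best assignment ending in each mode, so the search is linear in the number of layers rather than exponential in mode combinations.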
no code implementations • 2 Feb 2019 • Linghao Song, Fan Chen, Steven R. Young, Catherine D. Schuman, Gabriel Perdue, Thomas E. Potok
We present a deep learning approach for vertex reconstruction of neutrino-nucleus interaction events, a problem in the domain of high energy physics.
no code implementations • 21 Aug 2017 • Linghao Song, Youwei Zhuo, Xuehai Qian, Hai Li, Yiran Chen
GRAPHR gains a speedup of 1.16x to 4.12x, and is 3.67x to 10.96x more energy efficient compared to a PIM-based architecture.
Distributed, Parallel, and Cluster Computing • Hardware Architecture
no code implementations • 21 Jul 2020 • Pengcheng Dai, Jianlei Yang, Xucheng Ye, Xingzhou Cheng, Junyu Luo, Linghao Song, Yiran Chen, Weisheng Zhao
In this paper, SparseTrain is proposed to accelerate CNN training by fully exploiting the sparsity.
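The core idea of exploiting sparsity in training is that zero activations contribute nothing to a multiply-accumulate, so they can be skipped entirely. A minimal sketch of this principle (a generic zero-skipping dot product, not SparseTrain's actual hardware dataflow):

```python
# Illustrative zero-skipping sketch: skip multiply-accumulates for zero
# activations, as sparsity-aware training accelerators commonly do.
# Function name and counters are for illustration only.

def sparse_dot(activations, weights):
    """Dot product that only performs multiplies for nonzero activations.
    Returns (result, number of multiplies actually performed)."""
    total = 0.0
    mults = 0
    for a, w in zip(activations, weights):
        if a != 0.0:       # a zero activation contributes nothing: skip it
            total += a * w
            mults += 1
    return total, mults

# Example: 50% sparse input performs only 2 of 4 multiplies
result, mults = sparse_dot([0.0, 2.0, 0.0, 1.5], [3.0, 1.0, 4.0, 2.0])
# result == 5.0, mults == 2
```

ReLU-based CNNs routinely produce highly sparse activations (and sparse gradients during backpropagation), which is why skipping zeros can translate into substantial training speedups in hardware.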