1 code implementation • 7 Jan 2023 • Xinyi Zhou, Jiayu Li, Qinzhou Li, Reza Zafarani
We propose the hierarchical recursive neural network (HERO) to predict fake news by learning its linguistic style, which, as psychological theories suggest, is distinguishable from that of the truth.
no code implementations • 4 Dec 2022 • Yirong Zhou, Chen Qian, Jiayu Li, Zi Wang, Yu Hu, Biao Qu, Liuhong Zhu, Jianjun Zhou, Taishan Kang, Jianzhong Lin, Qing Hong, Jiyang Dong, Di Guo, Xiaobo Qu
Efficient collaboration between engineers and radiologists is important for image reconstruction algorithm development and image quality evaluation in magnetic resonance imaging (MRI).
no code implementations • 4 Oct 2020 • Bo Peng, Jiayu Li, Selahattin Akkas, Fugang Wang, Takuya Araki, Ohno Yoshiyuki, Judy Qiu
Forecasting is challenging because of the uncertainty arising from exogenous factors.
no code implementations • 17 Aug 2020 • Weiping Shi, Jiayu Li, Guiyang Xia, Yuntian Wang, Xiaobo Zhou, Yonghui Zhang, Feng Shu
This paper considers a secure multigroup multicast multiple-input single-output (MISO) communication system aided by an intelligent reflecting surface (IRS).
1 code implementation • 31 Dec 2018 • Ao Ren, Tianyun Zhang, Shaokai Ye, Jiayu Li, Wenyao Xu, Xuehai Qian, Xue Lin, Yanzhi Wang
The first part of ADMM-NN is a systematic, joint framework of DNN weight pruning and quantization using ADMM.
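The ADMM formulation treats pruning as a constrained optimization: the loss is minimized over unconstrained weights W while a duplicate variable Z is projected onto the sparsity constraint, with a dual variable coupling the two. The following is a minimal toy sketch of that alternating scheme, not the paper's implementation — the quadratic loss, learning rate, and sparsity budget `k` are illustrative assumptions:

```python
import numpy as np

def project_sparse(W, k):
    """Euclidean projection onto matrices with at most k nonzeros:
    keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(W)
    idx = np.unravel_index(np.argsort(np.abs(W), axis=None)[-k:], W.shape)
    out[idx] = W[idx]
    return out

def admm_prune(W0, k, rho=1.0, steps=50, lr=0.1):
    """Toy ADMM loop: minimize ||W - W0||^2 subject to nnz(W) <= k.
    W is unconstrained, Z is its sparse copy, U the scaled dual variable."""
    W = W0.copy()
    Z = project_sparse(W, k)
    U = np.zeros_like(W)
    for _ in range(steps):
        # W-update: gradient step on loss + (rho/2)||W - Z + U||^2
        grad = 2.0 * (W - W0) + rho * (W - Z + U)
        W -= lr * grad
        # Z-update: projection onto the sparsity constraint
        Z = project_sparse(W + U, k)
        # dual update
        U += W - Z
    return Z  # sparse weights satisfying the constraint

rng = np.random.default_rng(0)
W0 = rng.normal(size=(4, 4))
Wp = admm_prune(W0, k=5)
```

In the real framework the quadratic toy loss would be the network's training loss (and a second ADMM variable handles quantization); the projection step is what makes otherwise combinatorial constraints tractable.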
no code implementations • 5 Nov 2018 • Shaokai Ye, Tianyun Zhang, Kaiqi Zhang, Jiayu Li, Jiaming Xie, Yun Liang, Sijia Liu, Xue Lin, Yanzhi Wang
Both DNN weight pruning and clustering/quantization, as well as their combinations, can be solved in a unified manner.
no code implementations • ICLR 2019 • Shaokai Ye, Tianyun Zhang, Kaiqi Zhang, Jiayu Li, Kaidi Xu, Yunfei Yang, Fuxun Yu, Jian Tang, Makan Fardad, Sijia Liu, Xiang Chen, Xue Lin, Yanzhi Wang
Motivated by dynamic programming, the proposed method reaches an extremely high pruning rate by composing partial prunings, each with a moderate pruning rate.
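The idea of composing moderate partial prunings can be sketched as repeated magnitude pruning over the remaining nonzero weights; this is a simplified illustration only (the paper's method interleaves retraining between stages, and the 50%-per-stage rate here is an assumed parameter):

```python
import numpy as np

def prune_step(W, rate):
    """Zero out the `rate` fraction of the remaining nonzero entries
    with the smallest magnitudes."""
    nz = np.abs(W[W != 0])
    k = int(nz.size * rate)
    if k == 0:
        return W.copy()
    thresh = np.sort(nz)[k - 1]
    out = W.copy()
    out[np.abs(out) <= thresh] = 0.0
    return out

def progressive_prune(W, stage_rate=0.5, stages=3):
    """Reach high overall sparsity via several moderate-rate prunings.
    (Each stage would be followed by retraining in practice; omitted here.)"""
    for _ in range(stages):
        W = prune_step(W, stage_rate)
    return W

rng = np.random.default_rng(1)
W = rng.normal(size=(10, 10))
Wp = progressive_prune(W)
sparsity = 1.0 - np.count_nonzero(Wp) / Wp.size
```

Three 50% stages compound to roughly 87% overall sparsity, which is the sense in which moderate partial prunings reach a high final pruning rate.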
no code implementations • 28 Mar 2018 • Caiwen Ding, Ao Ren, Geng Yuan, Xiaolong Ma, Jiayu Li, Ning Liu, Bo Yuan, Yanzhi Wang
For FPGA implementations of deep convolutional neural networks (DCNNs), the SWM-based framework achieves at least 152X and 72X improvements in performance and energy efficiency, respectively, compared with the IBM TrueNorth processor baseline under the same accuracy constraints on the MNIST, SVHN, and CIFAR-10 datasets.
no code implementations • 14 Mar 2018 • Yanzhi Wang, Zheng Zhan, Jiayu Li, Jian Tang, Bo Yuan, Liang Zhao, Wujie Wen, Siyue Wang, Xue Lin
Based on the universal approximation property, we further prove that SCNNs and BNNs exhibit the same energy complexity.