Search Results for author: Tianli Zhao

Found 5 papers, 3 papers with code

Understanding and Improving Deep Graph Neural Networks: A Probabilistic Graphical Model Perspective

no code implementations • 25 Jan 2023 • Jiayuan Chen, Xiang Zhang, Yinfei Xu, Tianli Zhao, Renjie Xie, Wei Xu

Given the fixed point equation (FPE) derived from the variational inference on the Markov random fields, the deep GNNs, including JKNet, GCNII, DGCN, and the classical GNNs, such as GCN, GAT, and APPNP, can be regarded as different approximations of the FPE.

Variational Inference
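As a concrete illustration of the fixed-point view, the sketch below iterates an APPNP-style propagation H ← (1 − α) Â H + α X until it converges, i.e. until H satisfies that fixed point equation. The normalization, damping factor, and toy graph are illustrative choices, not taken from the paper.

```python
import numpy as np

def normalized_adjacency(adj):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2."""
    adj = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(adj.sum(axis=1))
    return adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def fixed_point_propagation(adj, x, alpha=0.1, num_iters=50, tol=1e-6):
    """Iterate H <- (1 - alpha) * A_hat @ H + alpha * X until convergence.

    The converged H satisfies the fixed point equation
    H = (1 - alpha) * A_hat @ H + alpha * X, an APPNP-style propagation
    of the kind the FPE perspective covers.
    """
    a_hat = normalized_adjacency(adj)
    h = x.copy()
    for _ in range(num_iters):
        h_next = (1 - alpha) * a_hat @ h + alpha * x
        if np.max(np.abs(h_next - h)) < tol:
            return h_next
        h = h_next
    return h

# Tiny example: a 4-node path graph with 2-dimensional node features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.random.randn(4, 2)
h = fixed_point_propagation(adj, x)
```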

Soft Threshold Ternary Networks

1 code implementation • 4 Apr 2022 • Weixiang Xu, Xiangyu He, Tianli Zhao, Qinghao Hu, Peisong Wang, Jian Cheng

The latest STTN shows that ResNet-18 with ternary weights and ternary activations achieves up to 68.2% Top-1 accuracy on ImageNet.

Quantization
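For context on ternary quantization, the sketch below applies a generic threshold rule that maps each weight to {−α, 0, +α}. The threshold ratio and scaling here are illustrative and not the paper's exact STTN formulation.

```python
import numpy as np

def ternarize(weights, delta_ratio=0.7):
    """Generic threshold-based ternarization (illustrative, not the exact STTN rule).

    Weights with |w| <= delta are set to 0; the rest become +/- alpha, where delta
    is a fraction of the mean absolute weight and alpha is the mean magnitude of
    the surviving weights.
    """
    delta = delta_ratio * np.abs(weights).mean()
    mask = np.abs(weights) > delta
    alpha = np.abs(weights[mask]).mean() if mask.any() else 0.0
    return alpha * np.sign(weights) * mask

w = np.random.randn(64, 64)
w_ternary = ternarize(w)               # values in {-alpha, 0, +alpha}
print(np.unique(np.sign(w_ternary)))   # -> [-1.  0.  1.]
```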

APRIL: Finding the Achilles' Heel on Privacy for Vision Transformers

1 code implementation • CVPR 2022 • Jiahao Lu, Xi Sheryl Zhang, Tianli Zhao, Xiangyu He, Jian Cheng

Showing how vision Transformers are at risk of privacy leakage via gradients, we stress the importance of designing privacy-safer Transformer models and defense schemes.

Federated Learning
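"Privacy leakage via gradients" refers to gradient-inversion attacks: an attacker optimizes a dummy input so that its gradients match the gradients shared during training (e.g. in federated learning), thereby reconstructing the private input. The sketch below shows a generic attack of this kind on a small linear model; the model, loss, and optimizer choices are illustrative and this is not the APRIL attack itself.

```python
import torch

# Victim side: a tiny linear classifier computes gradients on one private example.
torch.manual_seed(0)
model = torch.nn.Linear(8, 3)
x_true = torch.randn(1, 8)
y_true = torch.tensor([1])
loss = torch.nn.functional.cross_entropy(model(x_true), y_true)
true_grads = torch.autograd.grad(loss, model.parameters())

# Attacker side: knows the model and the shared gradients (and, for simplicity,
# the label) and optimizes a dummy input so its gradients match the shared ones.
x_dummy = torch.randn(1, 8, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)

for _ in range(300):
    opt.zero_grad()
    dummy_loss = torch.nn.functional.cross_entropy(model(x_dummy), y_true)
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    opt.step()

# A small distance indicates the private input was approximately reconstructed.
print(torch.dist(x_dummy.detach(), x_true))
```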

Joint Channel and Weight Pruning for Model Acceleration on Mobile Devices

1 code implementation • 15 Oct 2021 • Tianli Zhao, Xi Sheryl Zhang, Wentao Zhu, Jiaxing Wang, Sen Yang, Ji Liu, Jian Cheng

In this paper, we present a unified framework with Joint Channel pruning and Weight pruning (JCW), which achieves a better Pareto frontier between latency and accuracy than previous model compression approaches.

Model Compression
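A Pareto frontier between latency and accuracy is the set of candidate models for which no other candidate is both faster and more accurate. The sketch below extracts that set from a list of hypothetical (latency, accuracy) measurements; the numbers are made up for illustration.

```python
def pareto_frontier(candidates):
    """Return candidates not dominated by any other (lower latency AND higher accuracy).

    Each candidate is a (latency_ms, accuracy) pair; a candidate is kept if no other
    candidate is at least as good on both objectives and strictly better on one.
    """
    frontier = []
    for lat, acc in candidates:
        dominated = any(
            (l <= lat and a >= acc) and (l < lat or a > acc)
            for l, a in candidates
        )
        if not dominated:
            frontier.append((lat, acc))
    return sorted(frontier)

# Hypothetical (latency, accuracy) results for differently pruned models.
models = [(12.0, 0.71), (15.0, 0.73), (9.0, 0.66), (14.0, 0.70), (20.0, 0.74)]
print(pareto_frontier(models))  # -> [(9.0, 0.66), (12.0, 0.71), (15.0, 0.73), (20.0, 0.74)]
```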

Architecture Aware Latency Constrained Sparse Neural Networks

no code implementations • 1 Sep 2021 • Tianli Zhao, Qinghao Hu, Xiangyu He, Weixiang Xu, Jiaxing Wang, Cong Leng, Jian Cheng

Acceleration of deep neural networks to meet a specific latency constraint is essential for their deployment on mobile devices.
