no code implementations • ICLR 2019 • Chong Yu
The discriminative model simulates the steganalysis process, which helps us understand the sensitivity of cover images to semantic changes.
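As an illustrative sketch only (not the paper's architecture; the network, shapes, and perturbation below are hypothetical stand-ins), the adversarial pairing implied above can be expressed as a discriminator that plays the steganalyzer, classifying cover versus stego images:

```python
import torch
import torch.nn.functional as F

# The discriminator stands in for a steganalyzer: it is trained to tell
# cover images from stego images, while an embedder would be trained to fool it.
def steganalyzer_loss(disc, cover, stego):
    real = disc(cover)  # logits for genuine cover images
    fake = disc(stego)  # logits for images carrying a hidden message
    return (F.binary_cross_entropy_with_logits(real, torch.ones_like(real))
            + F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake)))

# Hypothetical stand-ins: a linear probe and a tiny perturbation as the "message".
disc = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 1))
cover = torch.randn(4, 3, 32, 32)
stego = cover + 0.01 * torch.randn_like(cover)
print(steganalyzer_loss(disc, cover, stego))
```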
no code implementations • 15 Apr 2024 • Chong Yu, Shuaiqi Shen, Shiqiang Wang, Kuan Zhang, Hai Zhao
In this paper, we provide a thorough study of effectively integrating HFL and VFL to achieve communication efficiency and overcome the above limitations when data is both horizontally and vertically partitioned.
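A minimal sketch of the data setting this integration targets (purely illustrative; the shapes and split counts are assumptions): horizontal partitioning shards samples across clients, vertical partitioning shards features across parties, and the hybrid case does both.

```python
import numpy as np

X = np.random.randn(8, 6)  # 8 samples, 6 features

# Horizontal (HFL) split: each client holds different samples, all features.
hfl_shards = np.array_split(X, 2, axis=0)

# Vertical (VFL) split: each party holds different features of the same samples.
vfl_shards = np.array_split(X, 2, axis=1)

# Hybrid setting: sample shards whose features are further split across parties.
hybrid = [np.array_split(shard, 2, axis=1) for shard in hfl_shards]
print(len(hybrid), hybrid[0][0].shape)  # 2 sample shards, each (4, 3) per party
```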
1 code implementation • 23 Mar 2024 • Hancheng Ye, Chong Yu, Peng Ye, Renqiu Xia, Yansong Tang, Jiwen Lu, Tao Chen, Bo Zhang
Recent Vision Transformer Compression (VTC) works mainly follow a two-stage scheme, where the importance score of each model unit is first evaluated or preset in each submodule, followed by the sparsity score evaluation according to the target sparsity constraint.
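A toy rendering of that two-stage scheme (a simplification under assumed scoring; the L1-norm importance and single global threshold here are illustrative, not the method of any particular VTC work):

```python
import torch

def two_stage_prune(submodule_weights, target_sparsity=0.5):
    # Stage 1: evaluate an importance score for each unit in each submodule
    # (here simply the mean absolute weight of the unit's row).
    scores = [w.abs().mean(dim=1) for w in submodule_weights]
    # Stage 2: derive keep/drop decisions from the target sparsity constraint
    # via one global threshold over all unit scores.
    all_scores = torch.cat(scores)
    k = max(1, int(target_sparsity * all_scores.numel()))
    threshold = torch.kthvalue(all_scores, k).values
    return [s > threshold for s in scores]  # True = keep the unit

weights = [torch.randn(16, 64), torch.randn(8, 64)]
masks = two_stage_prune(weights)
print([m.float().mean().item() for m in masks])  # kept fraction per submodule
```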
no code implementations • 11 Mar 2024 • Shengji Tang, Weihao Lin, Hancheng Ye, Peng Ye, Chong Yu, Baopu Li, Tao Chen
To alleviate this issue, we first study and reveal the relative sparsity effect in emerging stimulative training, and then propose a structured pruning framework, named STP, based on an enhanced sparsification paradigm that maintains the magnitude of dropped weights and enhances the expressivity of kept weights through self-distillation.
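Read charitably as code, the two ingredients look like the following hedged sketch (not STP's implementation: STP prunes structures, while this toy masks elementwise):

```python
import torch
import torch.nn.functional as F

w = torch.randn(32, 32, requires_grad=True)
# Dropped weights are only masked in the forward pass, so their stored
# magnitudes are maintained rather than zeroed out.
mask = (w.abs() > w.abs().median()).float()

x = torch.randn(4, 32)
dense_out = x @ w.t()            # the full model acts as its own teacher
sparse_out = x @ (w * mask).t()  # the pruned subnet is the student

# Self-distillation: the kept weights learn to reproduce the dense output.
loss = F.mse_loss(sparse_out, dense_out.detach())
loss.backward()
```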
1 code implementation • 5 Mar 2024 • JianJian Cao, Peng Ye, Shengze Li, Chong Yu, Yansong Tang, Jiwen Lu, Tao Chen
To this end, we propose a novel framework named Multimodal Alignment-Guided Dynamic Token Pruning (MADTP) for accelerating various VLTs.
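One plausible reading of alignment-guided token pruning, as a toy sketch (the keep ratio, similarity measure, and token shapes are assumptions, not MADTP's actual design):

```python
import torch
import torch.nn.functional as F

def prune_vision_tokens(vision_tokens, text_tokens, keep_ratio=0.5):
    # Score each vision token by its best cosine alignment to any text token.
    v = F.normalize(vision_tokens, dim=-1)
    t = F.normalize(text_tokens, dim=-1)
    alignment = (v @ t.t()).max(dim=1).values           # (num_vision_tokens,)
    k = max(1, int(keep_ratio * vision_tokens.size(0)))
    keep = alignment.topk(k).indices.sort().values      # preserve token order
    return vision_tokens[keep]

vision = torch.randn(196, 256)  # e.g. ViT patch tokens
text = torch.randn(12, 256)     # e.g. caption tokens
print(prune_vision_tokens(vision, text).shape)  # torch.Size([98, 256])
```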
no code implementations • 21 Dec 2023 • Chongjun Tu, Peng Ye, Weihao Lin, Hancheng Ye, Chong Yu, Tao Chen, Baopu Li, Wanli Ouyang
Improving the efficiency of Neural Architecture Search (NAS) is a challenging but significant task that has received much attention.
no code implementations • 23 Oct 2023 • Weihao Lin, Tao Chen, Chong Yu
Therefore, in this work we propose a sparse baseline for VOS, named SpVOS, which develops a novel triple sparse convolution to reduce the computation cost of the overall VOS framework.
1 code implementation • 26 Aug 2023 • Shengji Tang, Peng Ye, Baopu Li, Weihao Lin, Tao Chen, Tong He, Chong Yu, Wanli Ouyang
Specifically, we implicitly divide all subnets into hierarchical groups by subnet-in-subnet sampling, aggregate the knowledge of different subnets in each group during training, and exploit upper-level group knowledge to supervise lower-level subnet groups.
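A rough sketch of that hierarchy (under the assumptions that group knowledge is the averaged logits of one level and that it serves as the soft target for the level below; this is not the paper's exact loss):

```python
import torch
import torch.nn.functional as F

def group_distill_loss(logits_by_level):
    # logits_by_level[i] holds the logits of subnets in hierarchy level i,
    # ordered from upper (larger subnets) to lower (smaller subnets).
    loss = torch.zeros(())
    for upper, lower in zip(logits_by_level, logits_by_level[1:]):
        teacher = torch.stack(upper).mean(dim=0).detach()  # aggregated group knowledge
        for student in lower:                              # supervise the level below
            loss = loss + F.kl_div(F.log_softmax(student, dim=-1),
                                   F.softmax(teacher, dim=-1),
                                   reduction="batchmean")
    return loss

levels = [[torch.randn(4, 10) for _ in range(2)] for _ in range(3)]
print(group_distill_loss(levels))
```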
no code implementations • CVPR 2023 • Chong Yu, Tao Chen, Zhongxue Gan, Jiayuan Fan
Moreover, GPUSQ-ViT can boost actual deployment performance by 1.39-1.79 times in latency and 3.22-3.43 times in throughput on A100 GPU, and by 1.57-1.69 times in latency and 2.11-2.51 times in throughput on AGX Orin.
no code implementations • 18 May 2023 • Chong Yu, Tao Chen, Zhongxue Gan
Adversarial attacks are commonly regarded as a serious threat to neural networks because they can mislead model behavior.
1 code implementation • NeurIPS 2021 • Jeff Pool, Chong Yu
We introduce channel permutations as a method to maximize the accuracy of N:M sparse networks.
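The idea can be sketched in a few lines (simplified: the paper searches for good permutations, whereas this toy applies a random one just to show that permuting input channels changes which weights compete inside each 2:4 block):

```python
import torch

def nm_prune(w, n=2, m=4):
    # Keep the n largest-magnitude weights in every block of m input channels.
    blocks = w.abs().reshape(w.size(0), -1, m)
    idx = blocks.topk(n, dim=-1).indices
    return torch.zeros_like(blocks).scatter_(-1, idx, 1.0).reshape_as(w)

w = torch.randn(64, 64)
base_kept = (w.abs() * nm_prune(w)).sum()

perm = torch.randperm(w.size(1))  # the paper searches for a good permutation
w_perm = w[:, perm]
perm_kept = (w_perm.abs() * nm_prune(w_perm)).sum()
print(base_kept.item(), perm_kept.item())  # kept magnitude differs under permutation
```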
no code implementations • CVPR 2021 • Chong Yu
With the development of deep learning, neural networks have tended to grow deeper and larger to achieve strong performance.
2 code implementations • 16 Apr 2021 • Asit Mishra, Jorge Albericio Latorre, Jeff Pool, Darko Stosic, Dusan Stosic, Ganesh Venkatesh, Chong Yu, Paulius Micikevicius
We present the design and behavior of Sparse Tensor Cores, which exploit a 2:4 (50%) sparsity pattern that leads to twice the math throughput of dense matrix units.
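For illustration, the 2:4 constraint itself is easy to state in code (a minimal sketch; real deployment relies on library support such as cuSPARSELt rather than this toy masking):

```python
import torch

def is_2_to_4_sparse(w):
    # In every contiguous group of 4 weights along the reduction dimension,
    # at most 2 may be nonzero, so the hardware can skip half of the math.
    groups = w.reshape(-1, 4)
    return bool(((groups != 0).sum(dim=1) <= 2).all())

w = torch.randn(8, 16)
idx = w.abs().reshape(-1, 4).topk(2, dim=-1).indices   # keep 2 largest of each 4
mask = torch.zeros(w.numel() // 4, 4).scatter_(-1, idx, 1.0)
print(is_2_to_4_sparse(w * mask.reshape_as(w)))        # True: eligible pattern
```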
no code implementations • NeurIPS 2020 • Chong Yu, Jeff Pool
Deep learning’s success has led to larger and larger models to handle more and more complex tasks; trained models often contain millions of parameters.
1 code implementation • 3 Jul 2020 • Chong Yu, Jeff Pool
Deep learning's success has led to larger and larger models to handle more and more complex tasks; trained models can contain millions of parameters.