Search Results for author: Chong Yu

Found 15 papers, 6 papers with code

Integrated Steganography and Steganalysis with Generative Adversarial Networks

no code implementations • ICLR 2019 • Chong Yu

The discriminative model simulates the steganalysis process, which can help us understand the sensitivity of cover images to semantic changes.

Generative Adversarial Network • Steganalysis

Communication-Efficient Hybrid Federated Learning for E-health with Horizontal and Vertical Data Partitioning

no code implementations • 15 Apr 2024 • Chong Yu, Shuaiqi Shen, Shiqiang Wang, Kuan Zhang, Hai Zhao

In this paper, we provide a thorough study on an effective integration of HFL and VFL, to achieve communication efficiency and overcome the above limitations when data is both horizontally and vertically partitioned.

Vertical Federated Learning
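
For context, horizontal partitioning means clients hold different samples that share the same feature space, while vertical partitioning means clients hold different features of the same samples. A minimal NumPy sketch of the two layouts (the data, client names, and split points are made up for illustration; the paper's contribution is the communication-efficient training protocol, which is not shown here):

```python
import numpy as np

# Toy dataset: 6 samples (rows) x 4 features (columns).
X = np.arange(24).reshape(6, 4)

# Horizontal partitioning: each client holds different samples, all features.
horizontal = {"client_A": X[:3, :], "client_B": X[3:, :]}

# Vertical partitioning: each client holds different features of the same samples.
vertical = {"client_A": X[:, :2], "client_B": X[:, 2:]}

print(horizontal["client_A"].shape)  # (3, 4)
print(vertical["client_A"].shape)    # (6, 2)
```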

Once for Both: Single Stage of Importance and Sparsity Search for Vision Transformer Compression

1 code implementation • 23 Mar 2024 • Hancheng Ye, Chong Yu, Peng Ye, Renqiu Xia, Yansong Tang, Jiwen Lu, Tao Chen, Bo Zhang

Recent Vision Transformer Compression (VTC) works mainly follow a two-stage scheme, where the importance score of each model unit is first evaluated or preset in each submodule, followed by the sparsity score evaluation according to the target sparsity constraint.

Dimensionality Reduction
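
As a rough illustration of the two-stage baseline described above (not the single-stage method this paper proposes), importance is scored per unit first and the lowest-scoring units are then dropped to satisfy a target sparsity; the unit names and scores below are invented for the example:

```python
import torch

def two_stage_selection(unit_scores: dict, target_sparsity: float) -> dict:
    """Stage 1: an importance score per model unit (here passed in directly).
    Stage 2: drop the lowest-scoring units until the sparsity target is met."""
    names = list(unit_scores)
    scores = torch.tensor([unit_scores[n] for n in names])
    num_drop = int(len(names) * target_sparsity)
    dropped = set(scores.argsort()[:num_drop].tolist())
    return {n: (i not in dropped) for i, n in enumerate(names)}

# e.g. importance = mean |weight| of each attention head (stage 1)
scores = {"head_0": 0.12, "head_1": 0.55, "head_2": 0.08, "head_3": 0.31}
print(two_stage_selection(scores, target_sparsity=0.5))
# {'head_0': False, 'head_1': True, 'head_2': False, 'head_3': True}
```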

Enhanced Sparsification via Stimulative Training

no code implementations • 11 Mar 2024 • Shengji Tang, Weihao Lin, Hancheng Ye, Peng Ye, Chong Yu, Baopu Li, Tao Chen

To alleviate this issue, we first study and reveal the relative sparsity effect in emerging stimulative training and then propose a structured pruning framework, named STP, based on an enhanced sparsification paradigm which maintains the magnitude of dropped weights and enhances the expressivity of kept weights by self-distillation.

Knowledge Distillation • Model Compression
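
The self-distillation ingredient can be pictured with a toy example: the dense network's own predictions supervise a structurally pruned forward pass of the same network. This is only a generic sketch (the model, mask criterion, and loss are placeholders); it does not reproduce STP's relative-sparsity analysis or its handling of dropped-weight magnitudes:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMLP(nn.Module):
    def __init__(self, d_in=32, d_hid=64, d_out=10):
        super().__init__()
        self.fc1, self.fc2 = nn.Linear(d_in, d_hid), nn.Linear(d_hid, d_out)

    def forward(self, x, channel_mask=None):
        h = torch.relu(self.fc1(x))
        if channel_mask is not None:      # structured pruning: drop whole hidden channels
            h = h * channel_mask
        return self.fc2(h)

model, x = TinyMLP(), torch.randn(8, 32)

# Keep the hidden channels with the largest weight norm (a common structured criterion).
mask = torch.zeros(64)
mask[model.fc1.weight.norm(dim=1).topk(32).indices] = 1.0

with torch.no_grad():
    teacher = model(x)                    # dense network acts as its own teacher
student = model(x, channel_mask=mask)     # pruned subnet acts as the student
loss = F.kl_div(F.log_softmax(student, dim=-1), F.softmax(teacher, dim=-1),
                reduction="batchmean")
loss.backward()
```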

MADTP: Multimodal Alignment-Guided Dynamic Token Pruning for Accelerating Vision-Language Transformer

1 code implementation • 5 Mar 2024 • JianJian Cao, Peng Ye, Shengze Li, Chong Yu, Yansong Tang, Jiwen Lu, Tao Chen

To this end, we propose a novel framework named Multimodal Alignment-Guided Dynamic Token Pruning (MADTP) for accelerating various VLTs.
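
The token-dropping mechanics can be illustrated generically: given any per-token importance score, keep the top-scoring tokens for each sample. The sketch below is not MADTP itself (the paper's contribution is how multimodal alignment produces those scores and makes the pruning ratio dynamic); shapes and scores are placeholders:

```python
import torch

def keep_top_tokens(tokens: torch.Tensor, scores: torch.Tensor, keep_ratio: float):
    """tokens: (batch, num_tokens, dim); scores: (batch, num_tokens).
    Keeps the highest-scoring tokens per sample, preserving token order."""
    num_keep = max(1, int(tokens.shape[1] * keep_ratio))
    keep_idx = scores.topk(num_keep, dim=1).indices.sort(dim=1).values
    idx = keep_idx.unsqueeze(-1).expand(-1, -1, tokens.shape[-1])
    return tokens.gather(1, idx)

x = torch.randn(2, 197, 768)   # e.g. ViT patch tokens
s = torch.rand(2, 197)         # placeholder per-token importance scores
print(keep_top_tokens(x, s, keep_ratio=0.5).shape)  # torch.Size([2, 98, 768])
```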

Efficient Architecture Search via Bi-level Data Pruning

no code implementations • 21 Dec 2023 • Chongjun Tu, Peng Ye, Weihao Lin, Hancheng Ye, Chong Yu, Tao Chen, Baopu Li, Wanli Ouyang

Improving the efficiency of Neural Architecture Search (NAS) is a challenging but significant task that has received much attention.

Neural Architecture Search

SpVOS: Efficient Video Object Segmentation with Triple Sparse Convolution

no code implementations • 23 Oct 2023 • Weihao Lin, Tao Chen, Chong Yu

Therefore, we propose a sparse baseline of VOS named SpVOS in this work, which develops a novel triple sparse convolution to reduce the computation costs of the overall VOS framework.

Object • Semantic Segmentation +2

Boosting Residual Networks with Group Knowledge

1 code implementation • 26 Aug 2023 • Shengji Tang, Peng Ye, Baopu Li, Weihao Lin, Tao Chen, Tong He, Chong Yu, Wanli Ouyang

Specifically, we implicitly divide all subnets into hierarchical groups by subnet-in-subnet sampling, aggregate the knowledge of different subnets in each group during training, and exploit upper-level group knowledge to supervise lower-level subnet groups.

Knowledge Distillation
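
The flavor of subnet-in-subnet training can be conveyed with a toy residual model: sample a subnet, sample a smaller subnet nested inside it, and let the full network's predictions supervise both. This is only a schematic reading of the abstract; the actual grouping and knowledge-aggregation scheme is described in the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResMLP(nn.Module):
    """Toy residual network whose blocks can be skipped to form subnets."""
    def __init__(self, dim=32, depth=6, num_classes=10):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(depth)])
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x, active):               # active: set of block indices to keep
        for i, blk in enumerate(self.blocks):
            if i in active:
                x = x + blk(x)
        return self.head(x)

model, x = ResMLP(), torch.randn(8, 32)
outer = set(torch.randperm(6)[:4].tolist())     # a sampled subnet ...
inner = set(list(outer)[:2])                    # ... and a subnet nested inside it

with torch.no_grad():
    teacher = model(x, set(range(6)))           # upper-level (full) knowledge
loss = sum(F.kl_div(F.log_softmax(model(x, s), dim=-1),
                    F.softmax(teacher, dim=-1), reduction="batchmean")
           for s in (outer, inner))
loss.backward()
```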

Boost Vision Transformer with GPU-Friendly Sparsity and Quantization

no code implementations • CVPR 2023 • Chong Yu, Tao Chen, Zhongxue Gan, Jiayuan Fan

Moreover, GPUSQ-ViT can boost actual deployment performance by 1.39-1.79 times in latency and 3.22-3.43 times in throughput on A100 GPU, and by 1.57-1.69 times and 2.11-2.51 times respectively on AGX Orin.

Benchmarking • Knowledge Distillation +1

Channel Permutations for N:M Sparsity

1 code implementation • NeurIPS 2021 • Jeff Pool, Chong Yu

We introduce channel permutations as a method to maximize the accuracy of N:M sparse networks.
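
The idea is that permuting a layer's input channels (and permuting the producing layer's outputs to match, so the network's function is unchanged) changes which weights share a group of four, and therefore which weights survive N:M pruning. A tiny random-search sketch of that objective for 2:4 sparsity follows; the paper's actual search strategy is far stronger than this and is not reproduced here:

```python
import numpy as np

def retained_magnitude_2to4(w: np.ndarray) -> float:
    """Total |weight| kept when the 2 smallest of every 4 consecutive
    input channels are zeroed (the 2:4 pattern)."""
    groups = np.sort(np.abs(w).reshape(w.shape[0], -1, 4), axis=-1)
    return float(groups[..., 2:].sum())

def search_permutation(w: np.ndarray, trials: int = 1000, seed: int = 0):
    """Random search over input-channel permutations; keeps the permutation
    that preserves the most magnitude under 2:4 pruning."""
    rng = np.random.default_rng(seed)
    best_perm, best_score = np.arange(w.shape[1]), retained_magnitude_2to4(w)
    for _ in range(trials):
        perm = rng.permutation(w.shape[1])
        score = retained_magnitude_2to4(w[:, perm])
        if score > best_score:
            best_perm, best_score = perm, score
    return best_perm, best_score

w = np.random.default_rng(1).normal(size=(64, 32)).astype(np.float32)
perm, score = search_permutation(w)
print(f"kept magnitude: identity {retained_magnitude_2to4(w):.1f} -> permuted {score:.1f}")
```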

Accelerating Sparse Deep Neural Networks

2 code implementations • 16 Apr 2021 • Asit Mishra, Jorge Albericio Latorre, Jeff Pool, Darko Stosic, Dusan Stosic, Ganesh Venkatesh, Chong Yu, Paulius Micikevicius

We present the design and behavior of Sparse Tensor Cores, which exploit a 2:4 (50%) sparsity pattern that leads to twice the math throughput of dense matrix units.

Math
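
The 2:4 pattern keeps the two largest-magnitude weights in every contiguous group of four along the reduction dimension. A minimal NumPy sketch of that pruning step (just the mask construction, not the Sparse Tensor Core execution or the fine-tuning recipe the paper describes):

```python
import numpy as np

def prune_2_4(weights: np.ndarray) -> np.ndarray:
    """Zero the 2 smallest-magnitude values in every group of 4 along the
    last axis, producing a 2:4 (50%) sparse weight matrix."""
    assert weights.shape[-1] % 4 == 0, "last dim must be a multiple of 4"
    groups = weights.reshape(-1, 4)
    drop = np.argsort(np.abs(groups), axis=1)[:, :2]   # 2 smallest |w| per group
    mask = np.ones_like(groups, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=1)
    return (groups * mask).reshape(weights.shape)

w = np.random.randn(8, 16).astype(np.float32)
w_sparse = prune_2_4(w)
print((w_sparse.reshape(-1, 4) != 0).sum(axis=1))  # at most 2 non-zeros per group of 4
```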

Self-Supervised Generative Adversarial Compression

no code implementations • NeurIPS 2020 • Chong Yu, Jeff Pool

Deep learning’s success has led to larger and larger models to handle more and more complex tasks; trained models often contain millions of parameters.

Image Classification • Knowledge Distillation +1

Self-Supervised GAN Compression

1 code implementation • 3 Jul 2020 • Chong Yu, Jeff Pool

Deep learning's success has led to larger and larger models to handle more and more complex tasks; trained models can contain millions of parameters.

Image Classification • Model Compression
