Search Results for author: Zhenhua Liu

Found 17 papers, 8 papers with code

GhostNetV3: Exploring the Training Strategies for Compact Models

no code implementations17 Apr 2024 Zhenhua Liu, Zhiwei Hao, Kai Han, Yehui Tang, Yunhe Wang

In this paper, by systematically investigating the impact of different training ingredients, we introduce a strong training strategy for compact models.

DiffusionDialog: A Diffusion Model for Diverse Dialog Generation with Latent Space

no code implementations10 Apr 2024 Jianxiang Xiang, Zhenhua Liu, Haodong Liu, Yin Bai, Jia Cheng, Wenliang Chen

Previous studies have attempted to introduce discrete or Gaussian-based continuous latent variables to address the one-to-many problem, but the resulting diversity remains limited.

Denoising, Dialogue Generation

Controllable and Diverse Data Augmentation with Large Language Model for Low-Resource Open-Domain Dialogue Generation

no code implementations30 Mar 2024 Zhenhua Liu, Tong Zhu, Jianxiang Xiang, Wenliang Chen

To evaluate the efficacy of data augmentation methods for open-domain dialogue, we designed a clustering-based metric to characterize the semantic diversity of the augmented dialogue data.

Data Augmentation, Dialogue Generation +2
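The clustering-based diversity metric itself is not spelled out in this snippet; the sketch below is one plausible instantiation, assuming TF-IDF sentence vectors, k-means clustering, and entropy over cluster sizes as the diversity score (all of these choices are illustrative, not necessarily the authors').

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    def semantic_diversity(utterances, n_clusters=10, seed=0):
        # Embed the augmented utterances (TF-IDF here; any sentence encoder works).
        vectors = TfidfVectorizer().fit_transform(utterances)
        labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(vectors)
        # Entropy of the cluster-size distribution: higher means the data spreads
        # more evenly over semantic clusters, i.e., is more diverse.
        _, counts = np.unique(labels, return_counts=True)
        probs = counts / counts.sum()
        return float(-(probs * np.log(probs)).sum())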

A Dual-level Detection Method for Video Copy Detection

1 code implementation21 May 2023 Tianyi Wang, Feipeng Ma, Zhenhua Liu, Fengyun Rao

With the development of multimedia technology, Video Copy Detection has become a crucial problem for social media platforms.

Copy Detection, Partial Video Copy Detection +2

Instance-Aware Dynamic Neural Network Quantization

4 code implementations CVPR 2022 Zhenhua Liu, Yunhe Wang, Kai Han, Siwei Ma, Wen Gao

However, natural images are highly diverse in content, so using a single universal quantization configuration for all samples is not an optimal strategy.

Quantization
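As a rough sketch of the instance-aware idea (not the CVPR 2022 method itself), the block below lets a tiny controller pick a per-sample bit-width before uniformly quantizing the activations; the controller design and the candidate bit-widths are assumptions made purely for illustration.

    import torch
    import torch.nn as nn

    def uniform_quantize(x, bits):
        # Symmetric uniform quantization of a tensor to `bits` bits.
        qmax = 2 ** (bits - 1) - 1
        scale = x.abs().max().clamp(min=1e-8) / qmax
        return torch.round(x / scale).clamp(-qmax - 1, qmax) * scale

    class DynamicQuantBlock(nn.Module):
        def __init__(self, channels, bit_choices=(4, 6, 8)):
            super().__init__()
            self.conv = nn.Conv2d(channels, channels, 3, padding=1)
            self.bit_choices = bit_choices
            # Tiny controller: global pooling + linear layer scores each bit-width.
            self.controller = nn.Linear(channels, len(bit_choices))

        def forward(self, x):
            scores = self.controller(x.mean(dim=(2, 3)))   # (batch, n_choices)
            idx = scores.argmax(dim=1)                      # per-sample bit-width choice
            out = []
            for i in range(x.size(0)):
                bits = self.bit_choices[idx[i].item()]
                out.append(uniform_quantize(x[i:i + 1], bits))
            return self.conv(torch.cat(out, dim=0))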

LICHEE: Improving Language Model Pre-training with Multi-grained Tokenization

1 code implementation Findings (ACL) 2021 Weidong Guo, Mingjun Zhao, Lusheng Zhang, Di Niu, Jinwen Luo, Zhenhua Liu, Zhenyang Li, Jianbo Tang

Language model pre-training based on large corpora has achieved tremendous success in constructing enriched contextual representations and has led to significant performance gains on a diverse range of Natural Language Understanding (NLU) tasks.

Language Modelling, Natural Language Understanding

Post-Training Quantization for Vision Transformer

no code implementations NeurIPS 2021 Zhenhua Liu, Yunhe Wang, Kai Han, Siwei Ma, Wen Gao

Recently, transformers have achieved remarkable performance on a variety of computer vision applications.

Quantization

GhostSR: Learning Ghost Features for Efficient Image Super-Resolution

4 code implementations21 Jan 2021 Ying Nie, Kai Han, Zhenhua Liu, Chuanjian Liu, Yunhe Wang

Based on the observation that many features in SISR models are also similar to each other, we propose to use a shift operation to generate the redundant features (i.e., ghost features).

Image Super-Resolution
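A minimal sketch of that idea, under the assumption of a fixed shift offset (GhostSR actually learns the offsets): half of the output channels come from a standard convolution and the other half are cheap shifted copies of those intrinsic features.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GhostConv(nn.Module):
        def __init__(self, in_ch, out_ch, shift=1):
            super().__init__()
            self.intrinsic = nn.Conv2d(in_ch, out_ch // 2, 3, padding=1)
            self.shift = shift

        def forward(self, x):
            feats = self.intrinsic(x)
            # Ghost features: shift the intrinsic features by a few pixels (zero-padded),
            # which is far cheaper than another full convolution.
            ghosts = F.pad(feats, (self.shift, 0, self.shift, 0))[..., : feats.size(2), : feats.size(3)]
            return torch.cat([feats, ghosts], dim=1)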

A Survey on Visual Transformer

no code implementations23 Dec 2020 Kai Han, Yunhe Wang, Hanting Chen, Xinghao Chen, Jianyuan Guo, Zhenhua Liu, Yehui Tang, An Xiao, Chunjing Xu, Yixing Xu, Zhaohui Yang, Yiman Zhang, DaCheng Tao

The transformer, first applied to the field of natural language processing, is a type of deep neural network based mainly on the self-attention mechanism.

Image Classification, Inductive Bias
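For reference, the self-attention mechanism the survey refers to can be written in a few lines; this is the standard textbook formulation, not code taken from the survey.

    import torch

    def self_attention(x, wq, wk, wv):
        # x: (seq_len, d_model); wq/wk/wv: (d_model, d_k) projection matrices.
        q, k, v = x @ wq, x @ wk, x @ wv
        attn = torch.softmax(q @ k.T / (k.size(-1) ** 0.5), dim=-1)
        return attn @ v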

Pre-Trained Image Processing Transformer

6 code implementations CVPR 2021 Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, Wen Gao

To fully exploit the capability of the transformer, we propose to utilize the well-known ImageNet benchmark to generate a large number of corrupted image pairs.

 Ranked #1 on Single Image Deraining on Rain100L (using extra training data)

Color Image Denoising, Contrastive Learning +2
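Purely as an illustration of synthesizing corrupted/clean pairs from clean ImageNet images (the exact degradations used by IPT are not reproduced here), one might do something like:

    import torch
    import torch.nn.functional as F

    def make_pairs(clean, scale=2, noise_sigma=0.1):
        # clean: (batch, 3, H, W) tensor in [0, 1].
        lowres = F.interpolate(clean, scale_factor=1.0 / scale, mode="bicubic", align_corners=False)
        noisy = (clean + noise_sigma * torch.randn_like(clean)).clamp(0, 1)
        # Each task gets its own (corrupted, clean) training pair.
        return {"super_resolution": (lowres, clean), "denoising": (noisy, clean)}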

Knowledge Transfer via Student-Teacher Collaboration

no code implementations25 Sep 2019 Tianxiao Gao, Ruiqin Xiong, Zhenhua Liu, Siwei Ma, Feng Wu, Tiejun Huang, Wen Gao

One way to compress these heavy models is knowledge transfer (KT), in which a light student network is trained by absorbing the knowledge from a powerful teacher network.

Transfer Learning
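For context, plain knowledge transfer with soft teacher labels looks like the sketch below; the paper's specific student-teacher collaboration scheme is not reproduced here.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
        # Soft term: match the teacher's temperature-scaled output distribution.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        # Hard term: ordinary cross-entropy against the ground-truth labels.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard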

Frequency-Domain Dynamic Pruning for Convolutional Neural Networks

no code implementations NeurIPS 2018 Zhenhua Liu, Jizheng Xu, Xiulian Peng, Ruiqin Xiong

Deep convolutional neural networks have demonstrated their power in a variety of applications.
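As a hedged sketch of pruning in the frequency domain (a single static pass, not the paper's dynamic, recoverable masking schedule), one could zero out small DCT coefficients of the convolution kernels:

    import numpy as np
    from scipy.fft import dctn, idctn

    def frequency_prune(kernel, keep_ratio=0.3):
        # kernel: (out_ch, in_ch, k, k) numpy array of convolution weights.
        coeffs = dctn(kernel, axes=(2, 3), norm="ortho")
        threshold = np.quantile(np.abs(coeffs), 1.0 - keep_ratio)
        coeffs[np.abs(coeffs) < threshold] = 0.0   # drop small frequency components
        return idctn(coeffs, axes=(2, 3), norm="ortho")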
