Search Results for author: Zhenyu Zhang

Found 25 papers, 14 papers with code

Enhancing Chinese Pre-trained Language Model via Heterogeneous Linguistics Graph

1 code implementation ACL 2022 Yanzeng Li, Jiangxia Cao, Xin Cong, Zhenyu Zhang, Bowen Yu, Hongsong Zhu, Tingwen Liu

Chinese pre-trained language models usually exploit contextual character information to learn representations while ignoring linguistic knowledge, e.g., word and sentence information.

Language Modelling

Multi-Modal Masked Pre-Training for Monocular Panoramic Depth Completion

no code implementations 18 Mar 2022 Zhiqiang Yan, Xiang Li, Kun Wang, Zhenyu Zhang, Jun Li, Jian Yang

Specifically, during pre-training, we simultaneously cover up patches of the panoramic RGB image and the sparse depth with a shared random mask, then reconstruct the sparse depth in the masked regions.
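The shared-random-mask idea can be sketched in a few lines. The function below is illustrative only (the patch size and mask ratio are assumptions, not the paper's settings): it zeroes out the same randomly chosen patches in both modalities, so the reconstruction target in each masked region is hidden from both inputs.

```python
import numpy as np

def shared_random_mask(rgb, depth, patch=4, mask_ratio=0.5, seed=0):
    """Mask identical spatial patches in an RGB image and a depth map.

    A minimal sketch of shared random masking: one patch-level mask is
    sampled once and applied to both modalities.
    """
    h, w = rgb.shape[:2]
    gh, gw = h // patch, w // patch
    rng = np.random.default_rng(seed)
    keep = rng.random((gh, gw)) >= mask_ratio           # True = keep patch
    # Upsample the patch-level mask to pixel resolution.
    mask = np.repeat(np.repeat(keep, patch, axis=0), patch, axis=1)
    rgb_masked = rgb * mask[..., None]                  # zero masked RGB pixels
    depth_masked = depth * mask                         # same patches in depth
    return rgb_masked, depth_masked, mask
```

Because a single mask is shared, a pixel is either visible in both modalities or hidden in both, which is what forces the model to hallucinate depth from context rather than copy it across modalities.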

Depth Completion Transfer Learning

The Principle of Diversity: Training Stronger Vision Transformers Calls for Reducing All Levels of Redundancy

1 code implementation 12 Mar 2022 Tianlong Chen, Zhenyu Zhang, Yu Cheng, Ahmed Awadallah, Zhangyang Wang

However, a "head-to-toe assessment" of the extent of redundancy in ViTs, and of how much could be gained by thoroughly mitigating it, has been absent from this field.

Sparsity Winning Twice: Better Robust Generalization from More Efficient Training

1 code implementation ICLR 2022 Tianlong Chen, Zhenyu Zhang, Pengjun Wang, Santosh Balachandra, Haoyu Ma, Zehao Wang, Zhangyang Wang

We introduce two alternatives for sparse adversarial training: (i) static sparsity, by leveraging recent results from the lottery ticket hypothesis to identify critical sparse subnetworks arising from the early training; (ii) dynamic sparsity, by allowing the sparse subnetwork to adaptively adjust its connectivity pattern (while sticking to the same sparsity ratio) throughout training.
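The dynamic-sparsity alternative can be illustrated with a simple prune-and-regrow step. The rule below is a hedged sketch in the spirit of drop-and-grow methods (drop the smallest-magnitude active weights, regrow where gradient magnitude is largest), not necessarily the paper's exact criterion; the key property is that connectivity changes while the sparsity ratio stays fixed.

```python
import numpy as np

def regrow_step(weights, mask, grads, update_frac=0.1):
    """One dynamic-sparsity update: swap a fraction of connections.

    Drops the smallest-magnitude active weights and regrows the same
    number of inactive connections with the largest gradient magnitude,
    so mask.sum() is preserved.
    """
    active = np.flatnonzero(mask)
    inactive = np.flatnonzero(mask == 0)
    if active.size == 0 or inactive.size == 0:
        return mask.copy()
    n_swap = min(max(1, int(update_frac * active.size)), inactive.size)
    # Drop: smallest-magnitude currently-active weights.
    drop = active[np.argsort(np.abs(weights.flat[active]))[:n_swap]]
    # Grow: inactive positions where the gradient magnitude is largest.
    grow = inactive[np.argsort(-np.abs(grads.flat[inactive]))[:n_swap]]
    new_mask = mask.copy()
    new_mask.flat[drop] = 0
    new_mask.flat[grow] = 1
    return new_mask
```

Applied periodically during adversarial training, such a step lets the sparse subnetwork adapt its connectivity pattern without ever densifying the model.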

ASFD: Automatic and Scalable Face Detector

no code implementations26 Jan 2022 Jian Li, Bin Zhang, Yabiao Wang, Ying Tai, Zhenyu Zhang, Chengjie Wang, Jilin Li, Xiaoming Huang, Yili Xia

Along with current multi-scale based detectors, Feature Aggregation and Enhancement (FAE) modules have shown superior performance gains for cutting-edge object detection.

Face Detection Object Detection

You are caught stealing my winning lottery ticket! Making a lottery ticket claim its ownership

1 code implementation NeurIPS 2021 Xuxi Chen, Tianlong Chen, Zhenyu Zhang, Zhangyang Wang

The lottery ticket hypothesis (LTH) emerges as a promising framework to leverage a special sparse subnetwork (i.e., a winning ticket) instead of a full model for both training and inference, which can lower both costs without sacrificing performance.

FMFCC-A: A Challenging Mandarin Dataset for Synthetic Speech Detection

1 code implementation 18 Oct 2021 Zhenyu Zhang, Yewei Gu, Xiaowei Yi, Xianfeng Zhao

With the rapid development of text-to-speech (TTS) and voice conversion (VC) technologies, detecting synthetic speech has become increasingly challenging.

Speech Synthesis Synthetic Speech Detection +1

Improving Distantly-Supervised Named Entity Recognition with Self-Collaborative Denoising Learning

1 code implementation EMNLP 2021 Xinghua Zhang, Bowen Yu, Tingwen Liu, Zhenyu Zhang, Jiawei Sheng, Mengge Xue, Hongbo Xu

Distantly supervised named entity recognition (DS-NER) efficiently reduces labor costs but meanwhile intrinsically suffers from the label noise due to the strong assumption of distant supervision.

Denoising Named Entity Recognition +1

MediumVC: Any-to-any voice conversion using synthetic specific-speaker speeches as intermedium features

2 code implementations 6 Oct 2021 Yewei Gu, Zhenyu Zhang, Xiaowei Yi, Xianfeng Zhao

To realize any-to-any (A2A) voice conversion (VC), most methods perform symmetric self-supervised reconstruction tasks (Xi to Xi), which usually results in unsatisfactory performance due to inadequate feature decoupling, especially for unseen speakers.

Voice Conversion

DialogueBERT: A Self-Supervised Learning based Dialogue Pre-training Encoder

no code implementations 22 Sep 2021 Zhenyu Zhang, Tao Guo, Meng Chen

DialogueBERT was pre-trained on 70 million dialogues from real-world scenarios, and then fine-tuned on three different downstream dialogue understanding tasks.

Dialogue Understanding Emotion Recognition +4

RigNet: Repetitive Image Guided Network for Depth Completion

no code implementations 29 Jul 2021 Zhiqiang Yan, Kun Wang, Xiang Li, Zhenyu Zhang, Baobei Xu, Jun Li, Jian Yang

Depth completion deals with the problem of recovering dense depth maps from sparse ones, where color images are often used to facilitate this task.

Depth Completion Depth Estimation

Efficient Lottery Ticket Finding: Less Data is More

1 code implementation 6 Jun 2021 Zhenyu Zhang, Xuxi Chen, Tianlong Chen, Zhangyang Wang

We observe that a high-quality winning ticket can be found by training and pruning the dense network on the very compact PrAC set, which can substantially reduce the training iterations needed for the ticket-finding process.
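The pruning step at the heart of such ticket finding is commonly global magnitude pruning. Below is a minimal sketch of that one step (real LTH pipelines rewind the surviving weights to early-training values and retrain between pruning rounds, which is omitted here):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Return a binary mask that zeroes the smallest-magnitude weights.

    A minimal LTH-style global magnitude-pruning step: the `sparsity`
    fraction of weights with smallest |w| is removed. Ties at the
    threshold are also pruned.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return np.ones_like(weights)
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    return (np.abs(weights) > threshold).astype(weights.dtype)
```

The PrAC idea is orthogonal to this step: it shrinks the *data* used during the train-prune rounds, not the pruning rule itself.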

GANs Can Play Lottery Tickets Too

1 code implementation ICLR 2021 Xuxi Chen, Zhenyu Zhang, Yongduo Sui, Tianlong Chen

In this work, we for the first time study the existence of such trainable matching subnetworks in deep GANs.

Image-to-Image Translation

Decentralized Baseband Processing with Gaussian Message Passing Detection for Uplink Massive MU-MIMO Systems

no code implementations 22 May 2021 Zhenyu Zhang, Yuanyuan Dong, Keping Long, Xiyuan Wang, Xiaoming Dai

The decentralized baseband processing (DBP) architecture, which partitions the base station antennas into multiple antenna clusters, has been recently proposed to alleviate the excessively high interconnect bandwidth, chip input/output data rates, and detection complexity of massive multi-user multiple-input multiple-output (MU-MIMO) systems.
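The core DBP idea of letting each antenna cluster compute a local partial result can be illustrated with a matched filter, whose per-cluster outputs fuse exactly by summation. This is illustrative only; the paper's detector is based on Gaussian message passing, not a plain matched filter.

```python
import numpy as np

def decentralized_matched_filter(H, y, n_clusters):
    """Decentralized matched filtering across antenna clusters.

    Partitions the B base-station antennas into `n_clusters` groups,
    computes the local matched-filter output H_c^H y_c in each cluster,
    and fuses the results by summation. For a linear front end this
    fusion is exact: sum_c H_c^H y_c == H^H y.
    """
    clusters = np.array_split(np.arange(H.shape[0]), n_clusters)
    partials = [H[c].conj().T @ y[c] for c in clusters]  # local per-cluster work
    return sum(partials)                                 # cheap fusion step
```

Each cluster only ever touches its own rows of the channel matrix and its own received samples, which is what cuts the interconnect bandwidth relative to shipping all antenna data to one chip.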

"BNN - BN = ?": Training Binary Neural Networks without Batch Normalization

1 code implementation 16 Apr 2021 Tianlong Chen, Zhenyu Zhang, Xu Ouyang, Zechun Liu, Zhiqiang Shen, Zhangyang Wang

However, the BN layer is costly to calculate and is typically implemented with non-binary parameters, leaving a hurdle for the efficient implementation of BNN training.

Image Classification

Hydrogen-assisted layer-by-layer growth and robust nontrivial topology of stanene films on Bi(111)

no code implementations 11 Mar 2021 Liying Zhang, Leiqiang Li, Chenxiao Zhao, Shunfang Li, Jinfeng Jia, Zhenyu Zhang, Yu Jia, Ping Cui

The atomistic growth mechanisms and nontrivial topology of stanene as presented here are also discussed in connection with recent experimental findings.

Materials Science

Robust Overfitting may be mitigated by properly learned smoothening

no code implementations ICLR 2021 Tianlong Chen, Zhenyu Zhang, Sijia Liu, Shiyu Chang, Zhangyang Wang

A recent study (Rice et al., 2020) revealed overfitting to be a dominant phenomenon in adversarially robust training of deep networks, and that appropriate early-stopping of adversarial training (AT) could match the performance gains of most recent algorithmic improvements.

Knowledge Distillation

Document-level Relation Extraction with Dual-tier Heterogeneous Graph

no code implementations COLING 2020 Zhenyu Zhang, Bowen Yu, Xiaobo Shu, Tingwen Liu, Hengzhu Tang, Wang Yubin, Li Guo

Document-level relation extraction (RE) poses new challenges over its sentence-level counterpart since it requires an adequate comprehension of the whole document and the multi-hop reasoning ability across multiple sentences to reach the final result.

Decision Making Relation Extraction

Coarse-to-Fine Pre-training for Named Entity Recognition

1 code implementation EMNLP 2020 Mengge Xue, Bowen Yu, Zhenyu Zhang, Tingwen Liu, Yue Zhang, Bin Wang

More recently, Named Entity Recognition has achieved great advances aided by pre-training approaches such as BERT.

Named Entity Recognition NER
