Search Results for author: Cong Zhou

Found 11 papers, 1 paper with code

High Quality Audio Coding with MDCTNet

no code implementations • 8 Dec 2022 • Grant Davidson, Mark Vinton, Per Ekstrand, Cong Zhou, Lars Villemoes, Lie Lu

We propose a neural audio generative model, MDCTNet, operating in the perceptually weighted domain of an adaptive modified discrete cosine transform (MDCT).

Vocal Bursts Intensity Prediction
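
For context, the MDCT named above is a lapped transform with 50% overlap and time-domain alias cancellation. Below is a minimal NumPy sketch of MDCT analysis, synthesis, and overlap-add reconstruction; the frame size and sine window are illustrative assumptions, not the paper's setup, and the perceptual weighting is omitted:

```python
import numpy as np

def mdct(frame, window):
    """MDCT of one length-2N frame -> N coefficients (direct O(N^2) form;
    real codecs use an FFT-based fast path)."""
    N = len(frame) // 2
    n, k = np.arange(2 * N), np.arange(N)
    basis = np.cos(np.pi / N * (n[None, :] + 0.5 + N / 2) * (k[:, None] + 0.5))
    return basis @ (window * frame)

def imdct(coeffs, window):
    """Inverse MDCT of N coefficients -> one windowed length-2N frame.
    The 2/N scale assumes the window is applied at analysis and synthesis."""
    N = len(coeffs)
    n, k = np.arange(2 * N), np.arange(N)
    basis = np.cos(np.pi / N * (n[:, None] + 0.5 + N / 2) * (k[None, :] + 0.5))
    return (2.0 / N) * window * (basis @ coeffs)

# 50% overlap-add: the sine window satisfies the Princen-Bradley condition
# w[n]^2 + w[n+N]^2 = 1, so time-domain aliasing cancels between frames
# and interior samples reconstruct exactly.
N = 256
window = np.sin(np.pi / (2 * N) * (np.arange(2 * N) + 0.5))
x = np.random.randn(8 * N)
y = np.zeros_like(x)
for start in range(0, len(x) - 2 * N + 1, N):
    y[start:start + 2 * N] += imdct(mdct(x[start:start + 2 * N], window), window)
assert np.allclose(x[N:-N], y[N:-N])  # edge samples lack an overlap partner
```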

Effidit: Your AI Writing Assistant

no code implementations • 3 Aug 2022 • Shuming Shi, Enbo Zhao, Duyu Tang, Yan Wang, Piji Li, Wei Bi, Haiyun Jiang, Guoping Huang, Leyang Cui, Xinting Huang, Cong Zhou, Yong Dai, Dongyang Ma

In Effidit, we significantly expand the capacities of a writing assistant by providing functions in five categories: text completion, error checking, text polishing, keywords to sentences (K2S), and cloud input methods (cloud IME).

Keywords to Sentences • Retrieval • +3

One Model, Multiple Modalities: A Sparsely Activated Approach for Text, Sound, Image, Video and Code

no code implementations • 12 May 2022 • Yong Dai, Duyu Tang, Liangxin Liu, Minghuan Tan, Cong Zhou, Jingquan Wang, Zhangyin Feng, Fan Zhang, Xueyu Hu, Shuming Shi

Moreover, our model supports self-supervised pretraining in the same sparsely activated manner, resulting in better-initialized parameters for different modalities.

Image Retrieval • Retrieval
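
As a rough illustration of the sparsely activated idea (one expert per modality, with only that expert's parameters used for a given input), here is a PyTorch sketch; the class name and layer sizes are hypothetical, not the paper's architecture:

```python
import torch
import torch.nn as nn

class ModalityRoutedFFN(nn.Module):
    """Sparsely activated layer sketch: one expert feed-forward network per
    modality, and each input is processed only by its modality's expert."""
    def __init__(self, d_model, d_hidden,
                 modalities=("text", "sound", "image", "video", "code")):
        super().__init__()
        self.experts = nn.ModuleDict({
            m: nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                             nn.Linear(d_hidden, d_model))
            for m in modalities
        })

    def forward(self, x, modality):
        # Only the selected expert's parameters are activated for this input;
        # the remaining experts contribute no computation.
        return self.experts[modality](x)

layer = ModalityRoutedFFN(d_model=512, d_hidden=2048)
out = layer(torch.randn(8, 16, 512), modality="image")  # -> (8, 16, 512)
```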

Pretraining Chinese BERT for Detecting Word Insertion and Deletion Errors

no code implementations • 26 Apr 2022 • Cong Zhou, Yong Dai, Duyu Tang, Enbo Zhao, Zhangyin Feng, Li Kuang, Shuming Shi

We achieve this by introducing a special token [null], the prediction of which stands for the non-existence of a word.

Language Modelling • Masked Language Modeling • +1
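
To illustrate the [null] mechanism, here is a hypothetical sketch with PyTorch and Hugging Face Transformers: register [null] as a new token, then probe each gap in a sentence with a mask and read off the probability of [null]. The checkpoint choice and the deletion_scores helper are illustrative only; a stock BERT has a freshly initialized [null] embedding, so this shows the mechanics the paper pretrains for, not a working detector:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForMaskedLM.from_pretrained("bert-base-chinese")

# Register [null] so the masked-LM head can predict "no word belongs here".
tokenizer.add_special_tokens({"additional_special_tokens": ["[null]"]})
model.resize_token_embeddings(len(tokenizer))
null_id = tokenizer.convert_tokens_to_ids("[null]")

def deletion_scores(text):
    """Probe each gap between characters: mask an inserted slot and ask the
    model whether [null] (nothing missing) or a real word belongs there."""
    chars = list(text)
    scores = []
    for i in range(len(chars) + 1):
        probe = "".join(chars[:i] + [tokenizer.mask_token] + chars[i:])
        inputs = tokenizer(probe, return_tensors="pt")
        mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero()[0, 0]
        with torch.no_grad():
            logits = model(**inputs).logits[0, mask_pos]
        # Low probability of [null] at a gap suggests a deleted word there
        # (meaningful only after pretraining the model with [null] targets).
        scores.append(logits.softmax(-1)[null_id].item())
    return scores
```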

Voice Conversion with Conditional SampleRNN

no code implementations • 24 Aug 2018 • Cong Zhou, Michael Horgan, Vivek Kumar, Cristina Vasco, Dan Darcy

Here we present a novel approach to conditioning the SampleRNN generative model for voice conversion (VC).

Voice Conversion
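
As a sketch of the conditioning idea (injecting a conditioning vector at every timestep of an autoregressive sample-level model), here is a toy single-tier PyTorch module. The real SampleRNN is multi-tier, and the paper's conditioning features differ, so treat the names and sizes below as assumptions:

```python
import torch
import torch.nn as nn

class ConditionedSampleRNN(nn.Module):
    """Toy single-tier autoregressive sample model conditioned on a
    target-speaker embedding concatenated at every timestep."""
    def __init__(self, n_speakers, quant_levels=256, d_embed=64, d_hidden=512):
        super().__init__()
        self.speaker = nn.Embedding(n_speakers, d_embed)
        self.sample = nn.Embedding(quant_levels, d_embed)
        self.rnn = nn.GRU(2 * d_embed, d_hidden, batch_first=True)
        self.out = nn.Linear(d_hidden, quant_levels)

    def forward(self, samples, speaker_id):
        # samples: (B, T) quantized audio; speaker_id: (B,)
        s = self.sample(samples)                                  # (B, T, d)
        c = self.speaker(speaker_id)[:, None, :].expand(-1, samples.size(1), -1)
        h, _ = self.rnn(torch.cat([s, c], dim=-1))  # condition every step
        return self.out(h)                          # next-sample logits

model = ConditionedSampleRNN(n_speakers=10)
logits = model(torch.randint(0, 256, (2, 1000)), torch.tensor([3, 7]))
print(logits.shape)  # torch.Size([2, 1000, 256])
```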
