no code implementations • 26 Oct 2022 • Vasily Zadorozhnyy, Qiang Ye, Kazuhito Koishida
In recent years, Generative Adversarial Networks (GANs) have produced significantly improved results in speech enhancement (SE) tasks.
Ranked #1 on Speech Enhancement on VoiceBank + DEMAND
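As a rough illustration of adversarial training for speech enhancement (a generic sketch, not this paper's architecture; `enhancer`, `critic`, the waveform shapes, and the L1 weight are placeholders):

```python
import torch
import torch.nn as nn

# Placeholder enhancer and critic operating on raw waveforms (batch, 1, samples).
enhancer = nn.Sequential(nn.Conv1d(1, 16, 31, padding=15), nn.ReLU(),
                         nn.Conv1d(16, 1, 31, padding=15))
critic = nn.Sequential(nn.Conv1d(1, 16, 31, stride=4), nn.ReLU(),
                       nn.Flatten(), nn.LazyLinear(1))

opt_g = torch.optim.Adam(enhancer.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(critic.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

noisy = torch.randn(8, 1, 16000)   # placeholder noisy speech
clean = torch.randn(8, 1, 16000)   # placeholder clean reference

# Discriminator step: clean speech is "real", enhanced speech is "fake".
enhanced = enhancer(noisy).detach()
d_loss = bce(critic(clean), torch.ones(8, 1)) + \
         bce(critic(enhanced), torch.zeros(8, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the critic while staying close to the clean target.
enhanced = enhancer(noisy)
g_loss = bce(critic(enhanced), torch.ones(8, 1)) + \
         100 * nn.functional.l1_loss(enhanced, clean)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```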
1 code implementation • 28 Sep 2022 • Cole Pospisil, Vasily Zadorozhnyy, Qiang Ye
Methods such as Layer Normalization (LN) and Batch Normalization (BN) have proven to be effective in improving the training of Recurrent Neural Networks (RNNs).
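A minimal sketch of where such normalization can sit inside a recurrent cell, here layer normalization applied to the pre-activation of a vanilla RNN (the paper's exact placement and variant may differ):

```python
import torch
import torch.nn as nn

class LNRNNCell(nn.Module):
    """Vanilla RNN cell with layer normalization on the pre-activation."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.Wx = nn.Linear(input_size, hidden_size, bias=False)
        self.Wh = nn.Linear(hidden_size, hidden_size, bias=False)
        self.ln = nn.LayerNorm(hidden_size)  # normalize before the nonlinearity

    def forward(self, x, h):
        return torch.tanh(self.ln(self.Wx(x) + self.Wh(h)))

cell = LNRNNCell(32, 64)
h = torch.zeros(8, 64)
for x_t in torch.randn(10, 8, 32):   # (time, batch, features)
    h = cell(x_t, h)
```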
1 code implementation • 12 Aug 2022 • Edison Mucllari, Vasily Zadorozhnyy, Cole Pospisil, Duc Nguyen, Qiang Ye
In recent years, the use of orthogonal matrices has been shown to be a promising approach to improving the training, stability, and convergence of Recurrent Neural Networks (RNNs), particularly for controlling gradients.
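One common way to keep a recurrent weight matrix exactly orthogonal in PyTorch is the built-in orthogonal parametrization; a minimal sketch (the paper's own construction may differ):

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal

# Constrain the recurrent weight to be orthogonal, so repeated multiplication
# by it preserves norms, one standard way to control gradient growth/decay.
recurrent = orthogonal(nn.Linear(64, 64, bias=False))

W = recurrent.weight
print(torch.allclose(W.T @ W, torch.eye(64), atol=1e-5))  # True
```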
no code implementations • 5 Jun 2022 • Difeng Cai, Yuliang Ji, Huan He, Qiang Ye, Yuanzhe Xi
AUTM offers a versatile and efficient way to design normalizing flows with explicit inverses and unrestricted function classes or parameters.
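To illustrate the "explicit inverse" property in general, here is a RealNVP-style affine coupling layer with a closed-form inverse; this is not the AUTM construction itself, only the property it guarantees:

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Coupling layer whose inverse is available in closed form."""
    def __init__(self, dim):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(nn.Linear(self.half, 64), nn.Tanh(),
                                 nn.Linear(64, 2 * self.half))

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=1)
        y2 = x2 * torch.exp(s) + t          # invertible elementwise map
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(y1).chunk(2, dim=1)
        x2 = (y2 - t) * torch.exp(-s)       # explicit closed-form inverse
        return torch.cat([y1, x2], dim=1)

flow = AffineCoupling(8)
x = torch.randn(4, 8)
print(torch.allclose(flow.inverse(flow(x)), x, atol=1e-5))  # True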
2 code implementations • 3 Mar 2022 • Kehelwala Dewage Gayan Maduranga, Vasily Zadorozhnyy, Qiang Ye
We consider Convolutional Neural Networks (CNNs) with 2D structured features that are symmetric in the spatial dimensions.
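A minimal illustration of the symmetry in question, assuming square spatial feature maps with f[..., i, j] == f[..., j, i]; the paper exploits this structure within the network itself:

```python
import torch

def symmetrize(features):
    """Symmetrize square 2D feature maps of shape (batch, channels, n, n)."""
    return 0.5 * (features + features.transpose(-1, -2))

f = symmetrize(torch.randn(2, 3, 5, 5))
print(torch.allclose(f, f.transpose(-1, -2)))  # True: symmetric in space
```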
no code implementations • 2 Aug 2021 • Susanna Lange, Kyle Helfrich, Qiang Ye
Batch normalization (BN) is a popular and ubiquitous method in deep learning that has been shown to decrease training time and improve generalization performance of neural networks.
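For reference, the standard BN forward pass written out explicitly: normalize each feature over the batch, then rescale and shift with learnable parameters:

```python
import torch

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch normalization over a (batch, features) tensor."""
    mean = x.mean(dim=0)
    var = x.var(dim=0, unbiased=False)
    x_hat = (x - mean) / torch.sqrt(var + eps)   # zero mean, unit variance
    return gamma * x_hat + beta                  # learnable rescale and shift

x = torch.randn(32, 10)
y = batch_norm(x, gamma=torch.ones(10), beta=torch.zeros(10))
print(y.mean(dim=0).abs().max() < 1e-5)  # ~zero mean per feature
```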
1 code implementation • CVPR 2021 • Vasily Zadorozhnyy, Qiang Cheng, Qiang Ye
The generative adversarial network (GAN) has become one of the most important neural network models for classical unsupervised machine learning.
Ranked #4 on Conditional Image Generation on CIFAR-100
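A sketch of the general idea of reweighting the discriminator's real and fake loss terms adaptively; the weighting rule below (weights proportional to each term's current magnitude) is a hypothetical stand-in, not the paper's scheme:

```python
import torch
import torch.nn.functional as F

def weighted_d_loss(real_logits, fake_logits):
    """Discriminator loss whose real/fake terms are reweighted each step."""
    loss_real = F.binary_cross_entropy_with_logits(
        real_logits, torch.ones_like(real_logits))
    loss_fake = F.binary_cross_entropy_with_logits(
        fake_logits, torch.zeros_like(fake_logits))
    with torch.no_grad():  # weights track the terms' current magnitudes
        total = loss_real + loss_fake + 1e-12
        w_real, w_fake = loss_real / total, loss_fake / total
    return w_real * loss_real + w_fake * loss_fake

d_loss = weighted_d_loss(torch.randn(8, 1), torch.randn(8, 1))
```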
no code implementations • 3 Dec 2020 • Bao Wang, Qiang Ye
In this paper, we propose a novel adaptive momentum for improving DNN training; this adaptive momentum, which requires no momentum-related hyperparameter, is motivated by the nonlinear conjugate gradient (NCG) method.
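A minimal sketch of NCG-flavored adaptive momentum, using the Fletcher-Reeves rule as an illustrative choice (not necessarily the paper's exact formula):

```python
import torch

def ncg_momentum_step(params, grads, state, lr=0.01):
    """One update with momentum set adaptively, conjugate-gradient style."""
    g_sq = sum((g * g).sum() for g in grads)
    if "d" not in state:
        state["d"] = [-g.clone() for g in grads]          # first step: steepest descent
    else:
        beta = g_sq / (state["g_sq"] + 1e-12)             # Fletcher-Reeves "momentum"
        state["d"] = [-g + beta * d for g, d in zip(grads, state["d"])]
    state["g_sq"] = g_sq
    with torch.no_grad():
        for p, d in zip(params, state["d"]):
            p.add_(lr * d)

# Usage on a toy quadratic:
x = torch.randn(5, requires_grad=True)
state = {}
for _ in range(100):
    loss = (x ** 2).sum()
    grad, = torch.autograd.grad(loss, x)
    ncg_momentum_step([x], [grad], state)
print(loss.item())  # decreases toward 0
```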
no code implementations • 31 May 2020 • Jiancheng Qin, Jin Yang, Ying Chen, Qiang Ye, Hua Li
To address the overfitting issue, we propose a new moving-window-based algorithm that uses a validation set in the first stage to update the training data in both stages via two different moving-window processes. Experiments were conducted at three wind farms, and the results show that the model with a single-input multiple-output structure achieves better forecasting accuracy than existing models.
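A toy sketch of the moving-window idea, with illustrative window sizes: the training window slides forward through the series and a held-out validation window follows it:

```python
import numpy as np

def moving_windows(series, train_len=96, val_len=24, step=24):
    """Yield successive (train, validation) windows over a time series."""
    for start in range(0, len(series) - train_len - val_len + 1, step):
        train = series[start : start + train_len]
        val = series[start + train_len : start + train_len + val_len]
        yield train, val

wind_speed = np.random.rand(480)          # placeholder measurements
for train, val in moving_windows(wind_speed):
    pass  # fit the first-stage model on `train`, tune it on `val`
```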
1 code implementation • 18 Nov 2019 • Kyle Helfrich, Qiang Ye
Several variants of recurrent neural networks (RNNs) with orthogonal or unitary recurrent matrices have recently been developed to mitigate the vanishing/exploding gradient problem and to model long-term dependencies of sequences.
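A quick numerical check of why orthogonal recurrent matrices help: backpropagating through many linear recurrent steps h -> W h preserves the gradient norm exactly when W is orthogonal, rather than letting it vanish or explode:

```python
import torch

W, _ = torch.linalg.qr(torch.randn(64, 64))   # a random orthogonal matrix
h = torch.randn(64, requires_grad=True)
out = h
for _ in range(100):                          # 100 linear recurrent steps
    out = W @ out
out.sum().backward()
print(h.grad.norm())  # stays O(1); a generic W would blow up or decay
```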
no code implementations • 12 Jun 2019 • Pei-Chang Guo, Qiang Ye
The convolutional neural network is an important model in deep learning.
1 code implementation • 9 Nov 2018 • Kehelwala D. G. Maduranga, Kyle E. Helfrich, Qiang Ye
Recently, several RNN architectures have been developed that try to mitigate the vanishing/exploding gradient problem by maintaining an orthogonal or unitary recurrent weight matrix.
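For the unitary case, one standard construction (a sketch; scaled variants exist in this line of work): the Cayley transform of a skew-Hermitian matrix A (with A^H = -A) is unitary:

```python
import torch

n = 32
B = torch.randn(n, n, dtype=torch.cdouble)
A = 0.5 * (B - B.conj().T)                    # skew-Hermitian
I = torch.eye(n, dtype=torch.cdouble)
W = torch.linalg.solve(I + A, I - A)          # W = (I + A)^{-1}(I - A)
print(torch.allclose(W.conj().T @ W, I))      # True: W^H W = I, so W is unitary
```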
2 code implementations • ICML 2018 • Kyle Helfrich, Devin Willmott, Qiang Ye
Recurrent Neural Networks (RNNs) are designed to handle sequential data but suffer from vanishing or exploding gradients.
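A sketch of a scaled Cayley parametrization of an orthogonal matrix, as used in this line of work: with A skew-symmetric and D a fixed diagonal matrix of +/-1 entries, W = (I + A)^{-1}(I - A)D is orthogonal, so training the unconstrained A keeps the recurrent matrix exactly orthogonal:

```python
import torch

n = 32
M = torch.randn(n, n)
A = 0.5 * (M - M.T)                           # skew-symmetric: A^T = -A
D = torch.diag(torch.sign(torch.randn(n)))    # fixed +/-1 diagonal scaling
I = torch.eye(n)
W = torch.linalg.solve(I + A, I - A) @ D      # scaled Cayley transform
print(torch.allclose(W.T @ W, I, atol=1e-4))  # True: W is orthogonal
```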