Search Results for author: Qiang Ye

Found 13 papers, 7 papers with code

Orthogonal Recurrent Neural Networks with Scaled Cayley Transform

2 code implementations ICML 2018 Kyle Helfrich, Devin Willmott, Qiang Ye

Recurrent Neural Networks (RNNs) are designed to handle sequential data but suffer from vanishing or exploding gradients.
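
As a quick illustration of the idea in the title, here is a minimal numpy sketch (a hedged example of the scaled Cayley transform, not the authors' scoRNN code): a skew-symmetric parameter A and a diagonal +/-1 scaling D yield an exactly orthogonal recurrent matrix W = (I + A)^{-1}(I - A)D.

```python
import numpy as np

def scaled_cayley(A, d):
    """W = (I + A)^{-1} (I - A) D with A skew-symmetric and D = diag(d), d_i = +/-1."""
    n = A.shape[0]
    I = np.eye(n)
    return np.linalg.solve(I + A, (I - A) @ np.diag(d))

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M - M.T                               # skew-symmetric: A^T = -A
d = np.array([1.0, -1.0, 1.0, -1.0])      # scaling signs (a tunable choice)
W = scaled_cayley(A, d)
print(np.allclose(W.T @ W, np.eye(4)))    # True: W is orthogonal
```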

Complex Unitary Recurrent Neural Networks using Scaled Cayley Transform

1 code implementation 9 Nov 2018 Kehelwala D. G. Maduranga, Kyle E. Helfrich, Qiang Ye

Recently, several different RNN architectures have tried to mitigate the vanishing/exploding gradient issue by maintaining an orthogonal or unitary recurrent weight matrix.

Eigenvalue Normalized Recurrent Neural Networks for Short Term Memory

1 code implementation 18 Nov 2019 Kyle Helfrich, Qiang Ye

Several variants of recurrent neural networks (RNNs) with orthogonal or unitary recurrent matrices have recently been developed to mitigate the vanishing/exploding gradient problem and to model long-term dependencies of sequences.
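
The paper's eigenvalue normalization is more involved than this, but as a rough, illustrative sketch of the general idea (a simplification for intuition, not the ENRNN construction), a recurrent matrix can be rescaled so its spectral radius does not exceed one:

```python
import numpy as np

def cap_spectral_radius(W, target=1.0):
    """Rescale W so its largest eigenvalue magnitude is at most `target`."""
    radius = np.max(np.abs(np.linalg.eigvals(W)))
    return W if radius <= target else W * (target / radius)

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 6))
W_hat = cap_spectral_radius(W)
print(np.max(np.abs(np.linalg.eigvals(W_hat))))   # <= 1.0
```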

Symmetry Structured Convolutional Neural Networks

2 code implementations 3 Mar 2022 Kehelwala Dewage Gayan Maduranga, Vasily Zadorozhnyy, Qiang Ye

We consider Convolutional Neural Networks (CNNs) with 2D structured features that are symmetric in the spatial dimensions.

Sequential Recommendation
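
As a toy illustration of spatially symmetric 2D features (a hedged sketch of the general notion, not the paper's architecture), a feature map can be symmetrized by averaging it with its spatial transpose:

```python
import numpy as np

def symmetrize(feature_map):
    """Average a 2D feature map with its spatial transpose to enforce symmetry."""
    return 0.5 * (feature_map + feature_map.T)

X = np.arange(9, dtype=float).reshape(3, 3)
S = symmetrize(X)
print(np.allclose(S, S.T))   # True
```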

On regularization for a convolutional kernel in neural networks

no code implementations 12 Jun 2019 Pei-Chang Guo, Qiang Ye

The convolutional neural network is an important model in deep learning.

Two-stage short-term wind power forecasting algorithm using different feature-learning models

no code implementations 31 May 2020 Jiancheng Qin, Jin Yang, Ying Chen, Qiang Ye, Hua Li

To address the overfitting issue, we propose a new moving-window-based algorithm that uses a validation set in the first stage to update the training data in both stages through two different moving-window processes. Experiments were conducted at three wind farms, and the results demonstrate that the model with a single-input multiple-output structure achieves better forecasting accuracy than existing models.
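
For intuition, here is a minimal sketch of a moving-window train/validation split for time-series data (an illustration of the general mechanism with made-up window sizes, not the two-stage algorithm from the paper):

```python
import numpy as np

def moving_windows(series, train_len, val_len, step):
    """Yield (train, validation) slices that slide forward through the series."""
    start = 0
    while start + train_len + val_len <= len(series):
        train = series[start:start + train_len]
        val = series[start + train_len:start + train_len + val_len]
        yield train, val
        start += step

data = np.arange(20)
for train, val in moving_windows(data, train_len=8, val_len=2, step=4):
    print(train[[0, -1]], val)   # first/last training index and the validation block
```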

Stochastic Gradient Descent with Nonlinear Conjugate Gradient-Style Adaptive Momentum

no code implementations 3 Dec 2020 Bao Wang, Qiang Ye

In this paper, we propose a novel adaptive momentum for improving DNN training; this adaptive momentum, which requires no momentum-related hyperparameter, is motivated by the nonlinear conjugate gradient (NCG) method.

Adversarial Robustness
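
For background on the NCG motivation, here is a hedged toy sketch (not the paper's exact update) in which the momentum-like coefficient is computed from consecutive gradients via the classical Polak-Ribiere formula rather than set as a hyperparameter:

```python
import numpy as np

Q = np.diag([1.0, 10.0])                     # toy quadratic: f(x) = 0.5 * x^T Q x
grad = lambda x: Q @ x

x = np.array([5.0, 3.0])
g = grad(x)
d = -g
for _ in range(20):
    if np.linalg.norm(g) < 1e-10:
        break
    alpha = -(g @ d) / (d @ (Q @ d))                  # exact line search on the quadratic
    x = x + alpha * d
    g_new = grad(x)
    beta = max(0.0, g_new @ (g_new - g) / (g @ g))    # Polak-Ribiere+ coefficient
    d = -g_new + beta * d                             # gradient step plus adaptive momentum
    g = g_new
print(x)                                     # essentially [0, 0], the minimizer
```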

Batch Normalization Preconditioning for Neural Network Training

no code implementations 2 Aug 2021 Susanna Lange, Kyle Helfrich, Qiang Ye

Batch normalization (BN) is a popular and ubiquitous method in deep learning that has been shown to decrease training time and improve generalization performance of neural networks.
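
For context, a minimal numpy sketch of standard batch normalization itself (the operation the paper builds on, not the proposed preconditioning method):

```python
import numpy as np

def batch_norm(X, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch, then rescale and shift."""
    mu = X.mean(axis=0)
    var = X.var(axis=0)
    X_hat = (X - mu) / np.sqrt(var + eps)
    return gamma * X_hat + beta

rng = np.random.default_rng(0)
X = rng.standard_normal((32, 4)) * 5.0 + 2.0   # batch of 32 samples, 4 features
Y = batch_norm(X, gamma=np.ones(4), beta=np.zeros(4))
print(Y.mean(axis=0).round(6), Y.std(axis=0).round(3))   # ~0 mean, ~1 std per feature
```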

AUTM Flow: Atomic Unrestricted Time Machine for Monotonic Normalizing Flows

no code implementations 5 Jun 2022 Difeng Cai, Yuliang Ji, Huan He, Qiang Ye, Yuanzhe Xi

AUTM offers a versatile and efficient way to design normalizing flows with explicit inverses and unrestricted function classes or parameters.

Density Estimation, Image Generation +1
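
As background for the density-estimation task, here is a generic 1-D illustration of a monotonic flow with an explicit inverse and the change-of-variables density it enables (a standard textbook example, not the AUTM construction):

```python
import numpy as np

a, b = 2.0, 1.0                       # a simple monotonic transform T(x) = a*x + b, a > 0
T = lambda x: a * x + b
T_inv = lambda y: (y - b) / a         # explicit inverse
log_det = np.log(a)                   # log T'(x)

def log_px(x):
    """Log-density of x under the flow, assuming T(x) is standard normal."""
    z = T(x)
    return -0.5 * z**2 - 0.5 * np.log(2 * np.pi) + log_det

x = 0.3
print(log_px(x), np.isclose(T_inv(T(x)), x))   # log-density and inverse check
```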

Orthogonal Gated Recurrent Unit with Neumann-Cayley Transformation

1 code implementation 12 Aug 2022 Edison Mucllari, Vasily Zadorozhnyy, Cole Pospisil, Duc Nguyen, Qiang Ye

In recent years, the use of orthogonal matrices has been shown to be a promising approach to improving the training, stability, and convergence of Recurrent Neural Networks (RNNs), particularly by controlling gradients.
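
As a rough sketch of the idea behind the Neumann-Cayley name (an assumption-laden illustration, not the paper's implementation), the inverse in the Cayley transform can be approximated by a truncated Neumann series when the norm of A is small, avoiding an explicit matrix inverse:

```python
import numpy as np

def neumann_cayley(A, terms=8):
    """Approximate the Cayley transform (I + A)^{-1}(I - A) with a Neumann series."""
    n = A.shape[0]
    inv_approx = np.eye(n)            # (I + A)^{-1} ~ I - A + A^2 - A^3 + ...
    power = np.eye(n)
    for _ in range(1, terms):
        power = power @ (-A)
        inv_approx = inv_approx + power
    return inv_approx @ (np.eye(n) - A)

rng = np.random.default_rng(0)
M = 0.1 * rng.standard_normal((4, 4))
A = M - M.T                                            # skew-symmetric with small norm
W_exact = np.linalg.solve(np.eye(4) + A, np.eye(4) - A)
print(np.max(np.abs(neumann_cayley(A) - W_exact)))     # small approximation error
```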

Breaking Time Invariance: Assorted-Time Normalization for RNNs

1 code implementation 28 Sep 2022 Cole Pospisil, Vasily Zadorozhnyy, Qiang Ye

Methods such as Layer Normalization (LN) and Batch Normalization (BN) have proven to be effective in improving the training of Recurrent Neural Networks (RNNs).

Language Modelling
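
For context, a minimal sketch of layer normalization applied to a single RNN hidden state (the standard LN the paper starts from, not the proposed Assorted-Time Normalization, which aggregates statistics across several time steps):

```python
import numpy as np

def layer_norm(h, gamma, beta, eps=1e-5):
    """Normalize a hidden state over its feature dimension, then rescale and shift."""
    mu = h.mean()
    sigma = h.std()
    return gamma * (h - mu) / (sigma + eps) + beta

h = np.array([2.0, -1.0, 0.5, 4.0])        # hidden state at one time step
print(layer_norm(h, gamma=1.0, beta=0.0))  # zero mean, roughly unit variance
```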
