Search Results for author: Dongpo Xu

Found 9 papers, 2 papers with code

Quaternion recurrent neural network with real-time recurrent learning and maximum correntropy criterion

no code implementations22 Feb 2024 Pauline Bourigault, Dongpo Xu, Danilo P. Mandic

We develop a robust quaternion recurrent neural network (QRNN) for real-time processing of 3D and 4D data with outliers.

motion prediction
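As a hedged illustration of the criterion named in the title: the maximum correntropy criterion (MCC) replaces the squared error with a Gaussian-kernel similarity, which bounds the penalty from outliers. The sketch below is illustrative only, not the authors' implementation; the kernel width `sigma` is an assumed parameter.

```python
import numpy as np

def mcc_loss(error, sigma=1.0):
    """Correntropy-based loss of an error vector under a Gaussian kernel.

    Large errors saturate the kernel, so outliers contribute a bounded
    penalty (at most 1 per sample) instead of a quadratic one.
    """
    return np.mean(1.0 - np.exp(-error**2 / (2.0 * sigma**2)))

# One large outlier barely moves the loss, unlike a squared-error loss:
clean = np.array([0.1, -0.2, 0.15])
dirty = np.array([0.1, -0.2, 10.0])
print(mcc_loss(clean), mcc_loss(dirty))
```

This robustness to outliers is what makes MCC attractive for real-time 3D/4D signal processing, where corrupted samples are common.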

The HR-Calculus: Enabling Information Processing with Quaternion Algebra

no code implementations28 Nov 2023 Danilo P. Mandic, Sayed Pouria Talebi, Clive Cheong Took, Yili Xia, Dongpo Xu, Min Xiang, Pauline Bourigault

From their inception, quaternions and their division algebra have proven advantageous for modelling rotation and orientation in three-dimensional spaces, and have seen use from the initial formulation of electromagnetic field theory through to forming the basis of quantum field theory.

Convex Quaternion Optimization for Signal Processing: Theory and Applications

no code implementations9 May 2023 Shuning Sun, Qiankun Diao, Dongpo Xu, Pauline Bourigault, Danilo P. Mandic

Convex optimization methods have been extensively used in the fields of communications and signal processing.

UAdam: Unified Adam-Type Algorithmic Framework for Non-Convex Stochastic Optimization

no code implementations9 May 2023 Yiming Jiang, Jinlan Liu, Dongpo Xu, Danilo P. Mandic

Adam-type algorithms have become a preferred choice for optimisation in the deep learning setting; however, despite their success, their convergence is still not well understood.

Stochastic Optimization · Vocal Bursts Type Prediction

Last-iterate convergence analysis of stochastic momentum methods for neural networks

no code implementations30 May 2022 Dongpo Xu, Jinlan Liu, Yinghua Lu, Jun Kong, Danilo Mandic

The stochastic momentum method is a commonly used acceleration technique for solving large-scale stochastic optimization problems in artificial neural networks.

Stochastic Optimization

Scaling transition from momentum stochastic gradient descent to plain stochastic gradient descent

1 code implementation12 Jun 2021 Kun Zeng, Jinlan Liu, Zhixia Jiang, Dongpo Xu

Momentum stochastic gradient descent uses the accumulated gradient as the update direction for the current parameters, which yields faster training.
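The accumulated-gradient update described above can be sketched as a generic heavy-ball step. This is not the paper's specific scaling-transition scheme; the names `lr` and `beta` are illustrative defaults.

```python
import numpy as np

def momentum_sgd_step(w, v, grad, lr=0.01, beta=0.9):
    """One heavy-ball step: the accumulated gradient v, not the raw
    gradient, serves as the update direction for the parameters w."""
    v = beta * v + grad   # accumulate past gradients
    w = w - lr * v        # move along the accumulated direction
    return w, v

# Minimising f(w) = w^2, whose gradient is 2w:
w, v = 5.0, 0.0
for _ in range(200):
    w, v = momentum_sgd_step(w, v, 2.0 * w)
print(w)  # approaches the minimum at 0
```

The scaling-transition idea in the paper is to gradually shrink the momentum contribution so the iteration reverts to plain SGD; the exact schedule is given in the paper.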

A decreasing scaling transition scheme from Adam to SGD

2 code implementations12 Jun 2021 Kun Zeng, Jinlan Liu, Zhixia Jiang, Dongpo Xu

The adaptive gradient algorithm (AdaGrad) and its variants, such as RMSProp, Adam, and AMSGrad, have been widely used in deep learning.
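For reference, the Adam family mentioned above rescales the step by running estimates of the gradient's first and second moments. The sketch below is the standard Adam update with bias correction, not the paper's decreasing scaling-transition scheme; hyperparameter values are the common defaults, chosen here for illustration.

```python
import numpy as np

def adam_step(w, m, v, grad, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam step: exponential moving averages of the gradient (m)
    and squared gradient (v), with bias correction via the step count t."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)          # bias-corrected first moment
    v_hat = v / (1 - b2**t)          # bias-corrected second moment
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Minimising f(w) = w^2, whose gradient is 2w:
w, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    w, m, v = adam_step(w, m, v, 2.0 * w, t)
print(w)
```

The paper's scheme interpolates from this adaptive update toward plain SGD as training proceeds; the schedule itself is defined in the paper.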

Quaternion Gradient and Hessian

no code implementations13 Jun 2014 Dongpo Xu, Danilo P. Mandic

The optimization of real scalar functions of quaternion variables, such as the mean square error or array output power, underpins many practical applications.
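A hedged note on the machinery involved: for a real-valued function $f$ of a quaternion variable $q = q_a + iq_b + jq_c + kq_d$, the HR-calculus (developed in this line of work) builds the quaternion derivative from the four real partial derivatives; a commonly stated form is

```latex
\frac{\partial f}{\partial q}
  = \frac{1}{4}\left(
      \frac{\partial f}{\partial q_a}
      - i\,\frac{\partial f}{\partial q_b}
      - j\,\frac{\partial f}{\partial q_c}
      - k\,\frac{\partial f}{\partial q_d}
    \right)
```

The exact sign and ordering conventions vary between treatments, so the paper itself should be consulted for the definitions used there.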
