Search Results for author: Xenophon Papademetris

Found 4 papers, 2 papers with code

AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients

8 code implementations • NeurIPS 2020 • Juntang Zhuang, Tommy Tang, Yifan Ding, Sekhar Tatikonda, Nicha Dvornek, Xenophon Papademetris, James S. Duncan

We view the exponential moving average (EMA) of the noisy gradient as a prediction of the gradient at the next time step: if the observed gradient greatly deviates from the prediction, we distrust the current observation and take a small step; if the observed gradient is close to the prediction, we trust it and take a large step (this rule is sketched in code after this entry).

Image Classification • Language Modelling
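The update rule described in the snippet above can be written down compactly. Below is a minimal NumPy sketch following the algorithm in the paper; the helper name adabelief_step and its defaults are ours for illustration, and the linked code implementations are the authoritative versions.

```python
import numpy as np

def adabelief_step(theta, grad, m, s, t, lr=1e-3,
                   beta1=0.9, beta2=0.999, eps=1e-8):
    """One AdaBelief update (illustrative sketch, not the official code).

    m is the EMA of gradients (the "prediction"); s is the EMA of the
    squared deviation (g - m)^2 (the "belief"). A large deviation inflates
    s and shrinks the step; a small deviation shrinks s and enlarges it.
    """
    m = beta1 * m + (1 - beta1) * grad
    s = beta2 * s + (1 - beta2) * (grad - m) ** 2 + eps
    m_hat = m / (1 - beta1 ** t)   # Adam-style bias correction
    s_hat = s / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(s_hat) + eps)
    return theta, m, s
```

Compared with Adam, the only change is tracking (g - m)^2 instead of g^2 in the second-moment buffer, which is what makes the stepsize reflect trust in the prediction rather than raw gradient magnitude.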

Adaptive Checkpoint Adjoint Method for Gradient Estimation in Neural ODE

2 code implementations • ICML 2020 • Juntang Zhuang, Nicha Dvornek, Xiaoxiao Li, Sekhar Tatikonda, Xenophon Papademetris, James Duncan

Neural ordinary differential equations (NODEs) have recently attracted increasing attention; however, their empirical performance on benchmark tasks (e.g., image classification) is significantly inferior to that of discrete-layer models (see the sketch after this entry).

General Classification • Image Classification • +2
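As background for the continuous-depth view contrasted with discrete-layer models above, here is a toy fixed-step Euler forward pass for a neural-ODE block in NumPy. This is our illustrative sketch, not the authors' adaptive checkpoint adjoint (ACA) solver; the cached states only hint at the forward trajectory that a checkpoint-based adjoint method differentiates through.

```python
import numpy as np

def node_forward(z0, W, T=1.0, n_steps=10):
    """Toy neural-ODE block dz/dt = tanh(W @ z), integrated by fixed-step Euler.

    Each Euler step looks like one layer of a weight-tied residual network,
    which is the sense in which NODEs are continuous-depth counterparts of
    discrete-layer models.
    """
    dt = T / n_steps
    z, checkpoints = z0, [z0]
    for _ in range(n_steps):
        z = z + dt * np.tanh(W @ z)    # one Euler step ~ one residual layer
        checkpoints.append(z)          # states a checkpoint-based adjoint reuses
    return z, checkpoints
```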

Multiple-shooting adjoint method for whole-brain dynamic causal modeling

no code implementations • 14 Feb 2021 • Juntang Zhuang, Nicha Dvornek, Sekhar Tatikonda, Xenophon Papademetris, Pamela Ventola, James Duncan

Furthermore, MSA uses the adjoint method for accurate gradient estimation in the ODE; since the adjoint method is generic, MSA is a generic method for both linear and non-linear systems, and does not require re-derivation of the algorithm as in EM.
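To make the adjoint idea concrete (without the multiple-shooting machinery), here is a minimal NumPy sketch for a toy linear system dz/dt = A z with loss L = 0.5 * ||z(T)||^2. The function name, the Euler discretization, and the toy loss are ours; for this linear case the adjoint satisfies da/dt = -A.T @ a and dL/dA is the time integral of outer(a(t), z(t)).

```python
import numpy as np

def adjoint_gradient(z0, A, T=1.0, n_steps=100):
    """dL/dA for L = 0.5 * ||z(T)||^2 via the continuous adjoint method.

    Forward: Euler-integrate dz/dt = A @ z, storing the trajectory.
    Backward: integrate the adjoint a(t), da/dt = -A.T @ a, from
    a(T) = dL/dz(T), accumulating dL/dA = integral of outer(a, z) dt.
    """
    dt = T / n_steps
    zs = [z0]
    z = z0.copy()
    for _ in range(n_steps):              # forward pass
        z = z + dt * (A @ z)
        zs.append(z)

    a = zs[-1].copy()                     # a(T) = z(T) for this loss
    grad_A = np.zeros_like(A)
    for k in range(n_steps, 0, -1):       # backward (adjoint) pass
        grad_A += dt * np.outer(a, zs[k])
        a = a + dt * (A.T @ a)            # one reverse-time Euler step
    return grad_A
```

Because the adjoint pass only needs vector-Jacobian products of the dynamics, the same recipe applies unchanged when the dynamics are non-linear, which is the genericity the abstract refers to; the Euler discretization here makes the gradient approximate rather than exact.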
