Search Results for author: Nicha Dvornek

Found 9 papers, 6 papers with code

Surrogate Gap Minimization Improves Sharpness-Aware Training

1 code implementation ICLR 2022 Juntang Zhuang, Boqing Gong, Liangzhe Yuan, Yin Cui, Hartwig Adam, Nicha Dvornek, Sekhar Tatikonda, James Duncan, Ting Liu

Instead, we define a "surrogate gap", a measure equivalent to the dominant eigenvalue of the Hessian at a local minimum when the radius of the neighborhood (used to derive the perturbed loss) is small.
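For intuition, here is a minimal sketch of how such a surrogate gap could be estimated in PyTorch with a single SAM-style ascent step; the function name, `rho`, and the one-step approximation are illustrative assumptions, not the paper's implementation.

```python
import torch

def surrogate_gap(model, loss_fn, inputs, targets, rho=0.05):
    """Estimate h(theta) = max_{||d|| <= rho} L(theta + d) - L(theta),
    approximating the worst-case perturbation with one gradient-ascent step."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(inputs), targets)
    grads = torch.autograd.grad(loss, params)
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads)).item() + 1e-12
    scale = rho / grad_norm
    with torch.no_grad():
        for p, g in zip(params, grads):      # move to the approximate worst point
            p.add_(g, alpha=scale)
        perturbed = loss_fn(model(inputs), targets)
        for p, g in zip(params, grads):      # restore the original weights
            p.sub_(g, alpha=scale)
    return (perturbed - loss).item()
```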

Momentum Centering and Asynchronous Update for Adaptive Gradient Methods

2 code implementations NeurIPS 2021 Juntang Zhuang, Yifan Ding, Tommy Tang, Nicha Dvornek, Sekhar Tatikonda, James S. Duncan

We demonstrate that ACProp has a convergence rate of $O(\frac{1}{\sqrt{T}})$ for the stochastic non-convex case, which matches the oracle rate and outperforms the $O(\frac{\log T}{\sqrt{T}})$ rate of RMSProp and Adam.

Image Classification
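A minimal sketch of the update the title describes, written from the abstract rather than the official code: the second moment is centered on the gradient mean ("centering") and the denominator uses the estimate from before the current gradient is observed ("asynchronous"). The raw-gradient numerator and the `state` layout are assumptions.

```python
import torch

# state = {"m": torch.zeros_like(param), "s": torch.zeros_like(param)}
def acprop_step(param, grad, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One ACProp-style step (sketch). s tracks (g - m)^2, the gradient
    variance, rather than g^2; the stale s decorrelates numerator and
    denominator."""
    m, s = state["m"], state["s"]
    param.data.add_(grad / (s.sqrt() + eps), alpha=-lr)  # stale s in denominator
    m.mul_(beta1).add_(grad, alpha=1 - beta1)            # EMA of the gradient
    diff = grad - m
    s.mul_(beta2).addcmul_(diff, diff, value=1 - beta2)  # centered 2nd moment
```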

Multiple-shooting adjoint method for whole-brain dynamic causal modeling

no code implementations 14 Feb 2021 Juntang Zhuang, Nicha Dvornek, Sekhar Tatikonda, Xenophon Papademetris, Pamela Ventola, James Duncan

Furthermore, MSA uses the adjoint method for accurate gradient estimation in the ODE; since the adjoint method is generic, MSA applies to both linear and non-linear systems and does not require re-deriving the algorithm as EM does.
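A sketch of a multiple-shooting objective under stated assumptions: segments are integrated with `torchdiffeq.odeint` (whose adjoint mode supplies the gradients in the full method), and `z_guess`, `t_knots`, and the quadratic penalties are illustrative, not the authors' MSA code.

```python
import torch
from torchdiffeq import odeint  # assumed dependency: pip install torchdiffeq

def multiple_shooting_loss(func, z_guess, t_knots, x_obs, lam=1.0):
    """Multiple-shooting objective (sketch). z_guess: (K, d) guessed states
    at the K knot times; x_obs: (K, d) observations at the same knots.
    Each segment is integrated independently, and a continuity penalty ties
    each segment's end to the next guessed start, which keeps gradients
    well-behaved on long time series."""
    data_loss = continuity = 0.0
    for k in range(len(t_knots) - 1):
        t_seg = torch.tensor([t_knots[k], t_knots[k + 1]])
        z_end = odeint(func, z_guess[k], t_seg)[-1]   # integrate one segment
        data_loss = data_loss + ((z_end - x_obs[k + 1]) ** 2).sum()
        continuity = continuity + ((z_end - z_guess[k + 1]) ** 2).sum()
    return data_loss + lam * continuity
```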

AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients

8 code implementations NeurIPS 2020 Juntang Zhuang, Tommy Tang, Yifan Ding, Sekhar Tatikonda, Nicha Dvornek, Xenophon Papademetris, James S. Duncan

Viewing the exponential moving average (EMA) of the noisy gradient as the prediction of the gradient at the next time step, if the observed gradient greatly deviates from the prediction, we distrust the current observation and take a small step; if the observed gradient is close to the prediction, we trust it and take a large step.

Image Classification Language Modelling
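The belief mechanism above maps to a short update rule; this is a simplified sketch of the published AdaBelief step (omitting decoupled weight decay and the other options in the official optimizer).

```python
import torch

# state = {"m": torch.zeros_like(param), "s": torch.zeros_like(param)}
def adabelief_step(param, grad, state, step, lr=1e-3,
                   beta1=0.9, beta2=0.999, eps=1e-16):
    """Simplified AdaBelief step: m is the EMA prediction of the gradient;
    s tracks the squared prediction error (g - m)^2, i.e. the 'belief'."""
    m, s = state["m"], state["s"]
    m.mul_(beta1).add_(grad, alpha=1 - beta1)           # prediction (EMA)
    diff = grad - m                                     # surprise
    s.mul_(beta2).addcmul_(diff, diff, value=1 - beta2).add_(eps)
    m_hat = m / (1 - beta1 ** step)                     # bias correction
    s_hat = s / (1 - beta2 ** step)
    # Large surprise -> large s -> small step; small surprise -> large step.
    param.data.addcdiv_(m_hat, s_hat.sqrt() + eps, value=-lr)
```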

Adaptive Checkpoint Adjoint Method for Gradient Estimation in Neural ODE

2 code implementations ICML 2020 Juntang Zhuang, Nicha Dvornek, Xiaoxiao Li, Sekhar Tatikonda, Xenophon Papademetris, James Duncan

Neural ordinary differential equations (NODEs) have recently attracted increasing attention; however, their empirical performance on benchmark tasks (e.g., image classification) is significantly inferior to that of discrete-layer models.

General Classification Image Classification +2
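For context, a minimal continuous-depth block, assuming the `torchdiffeq` package; note this uses the standard adjoint for gradients, whereas the paper's adaptive checkpoint adjoint (ACA) replaces it to improve gradient accuracy.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint_adjoint  # adjoint-based gradients

class ODEFunc(nn.Module):
    """Vector field dx/dt = f(t, x) parameterized by a small MLP."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(),
                                 nn.Linear(dim, dim))

    def forward(self, t, x):
        return self.net(x)

class NODEBlock(nn.Module):
    """Continuous-depth stand-in for a stack of residual layers."""
    def __init__(self, dim):
        super().__init__()
        self.func = ODEFunc(dim)
        self.register_buffer("t", torch.tensor([0.0, 1.0]))

    def forward(self, x):
        # Backprop goes through the adjoint ODE, not the solver's steps.
        return odeint_adjoint(self.func, x, self.t)[-1]
```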

Multi-site fMRI Analysis Using Privacy-preserving Federated Learning and Domain Adaptation: ABIDE Results

1 code implementation 16 Jan 2020 Xiaoxiao Li, Yufeng Gu, Nicha Dvornek, Lawrence Staib, Pamela Ventola, James S. Duncan

However, to effectively train a high-quality deep learning model, the aggregation of a significant amount of patient information is required.

Domain Adaptation Federated Learning +2
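As a rough illustration of the federated setting this paper builds on (plain FedAvg-style weight averaging, not the paper's privacy-preserving variant with domain adaptation):

```python
import copy
import torch

def federated_average(site_models, site_sizes):
    """FedAvg-style aggregation (sketch): each site trains locally on its
    own fMRI data and only model weights leave the site; the server returns
    a sample-size-weighted average of the parameters."""
    total = float(sum(site_sizes))
    avg_state = copy.deepcopy(site_models[0].state_dict())
    for key in avg_state:
        avg_state[key] = sum(
            m.state_dict()[key].float() * (n / total)
            for m, n in zip(site_models, site_sizes)
        )
    return avg_state
```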

Ordinary differential equations on graph networks

no code implementations 25 Sep 2019 Juntang Zhuang, Nicha Dvornek, Xiaoxiao Li, James S. Duncan

Inspired by neural ordinary differential equation (NODE) for data in the Euclidean domain, we extend the idea of continuous-depth models to graph data, and propose graph ordinary differential equation (GODE).

Graph Classification Node Classification
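A minimal sketch of the continuous-depth idea on graphs, assuming `torchdiffeq` and a pre-normalized adjacency matrix; `GraphODEFunc` and the tanh vector field are illustrative choices, not the paper's GODE architecture.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint

class GraphODEFunc(nn.Module):
    """dX/dt = GNN(X): one graph-convolution layer as the vector field."""
    def __init__(self, adj_norm, dim):
        super().__init__()
        self.adj = adj_norm              # normalized adjacency, (n, n)
        self.lin = nn.Linear(dim, dim)

    def forward(self, t, x):             # x: (n_nodes, dim)
        return torch.tanh(self.adj @ self.lin(x))

def gode_forward(func, x0, t_end=1.0):
    """Propagate node features continuously from depth 0 to t_end."""
    t = torch.tensor([0.0, t_end])
    return odeint(func, x0, t)[-1]
```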

ShelfNet for Fast Semantic Segmentation

6 code implementations 27 Nov 2018 Juntang Zhuang, Junlin Yang, Lin Gu, Nicha Dvornek

Compared with real-time segmentation models such as BiSeNet, our model achieves higher accuracy at comparable speed on the Cityscapes Dataset, enabling the application in speed-demanding tasks such as street-scene understanding for autonomous driving.

Autonomous Driving Real-Time Semantic Segmentation +2
