Search Results for author: Param Budhraja

Found 2 papers, 0 papers with code

Generalized Gradient Flows with Provable Fixed-Time Convergence and Fast Evasion of Non-Degenerate Saddle Points

no code implementations • 7 Dec 2022 • Mayank Baranwal, Param Budhraja, Vishal Raj, Ashish R. Hota

Gradient-based first-order convex optimization algorithms find widespread applicability in a variety of domains, including machine learning tasks.

Breaking the Convergence Barrier: Optimization via Fixed-Time Convergent Flows

no code implementations • 2 Dec 2021 • Param Budhraja, Mayank Baranwal, Kunal Garg, Ashish Hota

We achieve this by first leveraging a continuous-time framework for designing fixed-time stable dynamical systems, and then providing a consistent discretization strategy, so that the equivalent discrete-time algorithm tracks the optimizer in a practically fixed number of iterations.
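No official code accompanies this paper, but the idea in the abstract can be illustrated with a minimal sketch. Fixed-time stable gradient flows are commonly built from two rescaled gradient terms, one dominating far from the optimum and one near it; the specific form below (exponents `a1`, `a2`, gains `c1`, `c2`, step size `eta`) is a generic template from the fixed-time stability literature, not the authors' exact dynamics or their consistent discretization scheme, and is discretized here with plain forward Euler:

```python
import numpy as np

def fixed_time_flow_step(x, grad_f, c1=1.0, c2=1.0, a1=0.5, a2=-0.5,
                         eta=0.01, eps=1e-12):
    """One forward-Euler step of a fixed-time-style rescaled gradient flow:
        x' = -c1 * g / ||g||^a1 - c2 * g / ||g||^a2,  g = grad_f(x).
    Illustrative template only; the paper's dynamics may differ."""
    g = grad_f(x)
    n = np.linalg.norm(g)
    if n < eps:  # numerically at a stationary point
        return x
    # The n**a1 term speeds convergence near the optimum (small gradients),
    # the n**a2 term far from it (large gradients).
    return x - eta * (c1 * g / n**a1 + c2 * g / n**a2)

# Usage: minimize f(x) = ||x||^2 / 2, whose gradient is x itself.
grad = lambda x: x
x = np.array([3.0, -4.0])
for _ in range(2000):
    x = fixed_time_flow_step(x, grad)
```

On this toy quadratic the iterates contract toward the origin in a number of steps that is insensitive to the starting point, which is the qualitative behavior the abstract describes; a naive Euler step does not inherit the exact fixed-time guarantee, which is why the paper develops a consistent discretization.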
