Search Results for author: Navid Azizan

Found 14 papers, 5 papers with code

One-Pass Learning via Bridging Orthogonal Gradient Descent and Recursive Least-Squares

no code implementations • 28 Jul 2022 • Youngjae Min, Kwangjun Ahn, Navid Azizan

While deep neural networks are capable of achieving state-of-the-art performance in various domains, their training typically requires iterating for many passes over the dataset.
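The title pairs orthogonal gradient descent with recursive least-squares (RLS). As a point of reference for what a one-pass learner looks like, here is a minimal sketch of the textbook RLS recursion for a linear model; it is illustrative only, not the paper's bridging algorithm, and the function name and `reg` prior are invented for this sketch.

```python
import numpy as np

def rls_one_pass(X, y, reg=1.0):
    """Textbook recursive least-squares: a single pass over (X, y) for a linear model.

    Illustrative sketch only -- not the paper's OGD/RLS bridging method.
    `reg` sets the ridge-like prior P = I / reg.
    """
    n, d = X.shape
    w = np.zeros(d)                  # running weight estimate
    P = np.eye(d) / reg              # inverse (regularized) covariance
    for x_t, y_t in zip(X, y):       # each sample is visited exactly once
        Px = P @ x_t
        k = Px / (1.0 + x_t @ Px)    # gain vector
        w = w + k * (y_t - x_t @ w)  # correct by the prediction error
        P = P - np.outer(k, Px)      # rank-one update of the covariance
    return w
```

After the single pass, `w` coincides with the ridge-regression solution with regularization `reg`, which is what makes RLS a natural reference point for learning without revisiting the dataset.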

Uncertainty in Contrastive Learning: On the Predictability of Downstream Performance

no code implementations • 19 Jul 2022 • Shervin Ardeshir, Navid Azizan

In this work, we study whether the uncertainty of such a contrastively learned representation can be quantified for a single datapoint in a meaningful way.

Contrastive Learning • Decision Making

Mirror Descent Maximizes Generalized Margin and Can Be Implemented Efficiently

no code implementations • 25 May 2022 • Haoyuan Sun, Kwangjun Ahn, Christos Thrampoulidis, Navid Azizan

Driven by the empirical success and wide use of deep neural networks, understanding the generalization performance of overparameterized models has become an increasingly popular question.

Control-oriented meta-learning

1 code implementation • 14 Apr 2022 • Spencer M. Richards, Navid Azizan, Jean-Jacques Slotine, Marco Pavone

Real-time adaptation is imperative to the control of robots operating in complex, dynamic environments.

Meta-Learning

Online Learning for Traffic Routing under Unknown Preferences

1 code implementation • 31 Mar 2022 • Devansh Jalota, Karthik Gopalakrishnan, Navid Azizan, Ramesh Johari, Marco Pavone

We show that our approach obtains an expected regret and road capacity violation of $O(\sqrt{T})$, where $T$ is the number of periods over which tolls are updated.

Online Learning
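A hedged reading of this guarantee, stated with a generic notion of static regret from online optimization (the paper's precise definitions of regret and capacity violation may differ): if $\ell_t$ denotes the cost incurred under the tolls posted at period $t$ and $\ell_t^\star$ the cost of the best fixed tolls in hindsight, then

$$\mathbb{E}\left[\sum_{t=1}^{T} \left(\ell_t - \ell_t^\star\right)\right] = O(\sqrt{T}),$$

so the average per-period optimality gap shrinks at rate $O(1/\sqrt{T})$, while the cumulative flow in excess of road capacities likewise grows no faster than $O(\sqrt{T})$.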

A Unified View of SDP-based Neural Network Verification through Completely Positive Programming

no code implementations • 6 Mar 2022 • Robin Brown, Edward Schmerling, Navid Azizan, Marco Pavone

Verifying that input-output relationships of a neural network conform to prescribed operational specifications is a key enabler towards deploying these networks in safety-critical applications.

Explicit Regularization via Regularizer Mirror Descent

no code implementations • 22 Feb 2022 • Navid Azizan, Sahin Lale, Babak Hassibi

Regularizer Mirror Descent (RMD) starts with a standard cost, which is the sum of the training loss and a convex regularizer of the weights.

Continual Learning
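In symbols, with notation invented for this sketch ($w$ the weights, $L_i$ the loss on the $i$-th training example, $R$ a convex regularizer, and $\lambda > 0$ a trade-off parameter), the cost that RMD starts from has the familiar form

$$\sum_{i=1}^{n} L_i(w) + \lambda R(w),$$

i.e., training fit plus an explicit, convex penalty on the weights; how this cost is then minimized via a mirror-descent-style update is what the paper develops.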

Adaptive-Control-Oriented Meta-Learning for Nonlinear Systems

1 code implementation • 7 Mar 2021 • Spencer M. Richards, Navid Azizan, Jean-Jacques Slotine, Marco Pavone

Real-time adaptation is imperative to the control of robots operating in complex, dynamic environments.

Meta-Learning

Sketching Curvature for Efficient Out-of-Distribution Detection for Deep Neural Networks

1 code implementation • 24 Feb 2021 • Apoorva Sharma, Navid Azizan, Marco Pavone

In order to safely deploy Deep Neural Networks (DNNs) within the perception pipelines of real-time decision making systems, there is a need for safeguards that can detect out-of-training-distribution (OoD) inputs both efficiently and accurately.

Decision Making • OOD Detection • +1

Orthogonal Gradient Descent for Continual Learning

no code implementations • 15 Oct 2019 • Mehrdad Farajtabar, Navid Azizan, Alex Mott, Ang Li

In this paper, we propose to address this issue from a parameter space perspective and study an approach to restrict the direction of the gradient updates to avoid forgetting previously-learned data.

Continual Learning
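As a rough illustration of the parameter-space idea described above, here is a minimal sketch of projecting a new gradient onto the subspace orthogonal to gradient directions stored from previously learned data; it is a simplification for illustration, not the paper's exact procedure, and the function name is hypothetical.

```python
import numpy as np

def project_orthogonal(grad, stored_grads, eps=1e-10):
    """Remove from `grad` its components along previously stored gradient
    directions, so the update (ideally) leaves earlier predictions intact.

    Simplified sketch of the orthogonal-projection idea, not the paper's
    exact algorithm. `stored_grads` holds flattened gradients collected on
    previously learned data.
    """
    basis = []
    for v in stored_grads:               # orthonormalize the stored directions
        v = v.astype(float)
        for b in basis:
            v -= (v @ b) * b
        norm = np.linalg.norm(v)
        if norm > eps:
            basis.append(v / norm)
    g = grad.astype(float)
    for b in basis:                      # subtract the projection onto their span
        g -= (g @ b) * b
    return g

# Usage: step along the projected direction instead of the raw gradient,
# e.g. w = w - lr * project_orthogonal(raw_grad, stored_grads)
```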

Stochastic Mirror Descent on Overparameterized Nonlinear Models

no code implementations • 25 Sep 2019 • Navid Azizan, Sahin Lale, Babak Hassibi

On the theory side, we show that in the overparameterized nonlinear setting, if the initialization is close enough to the manifold of global optima, SMD with sufficiently small step size converges to a global minimum that is approximately the closest global minimum in Bregman divergence, thus attaining approximate implicit regularization.
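To make the statement concrete, here is one standard way to write the stochastic mirror descent (SMD) update and the Bregman divergence it refers to; the notation is generic rather than the paper's. With a strictly convex potential $\psi$, step size $\eta$, and $L_{i_t}$ the loss of the data point sampled at step $t$, the update is

$$\nabla \psi(w_{t+1}) = \nabla \psi(w_t) - \eta \, \nabla L_{i_t}(w_t),$$

and the Bregman divergence measuring closeness between parameter vectors is

$$D_\psi(w, w') = \psi(w) - \psi(w') - \nabla \psi(w')^\top (w - w').$$

Choosing $\psi(w) = \tfrac{1}{2}\lVert w \rVert^2$ recovers plain SGD, in which case $D_\psi$ is half the squared Euclidean distance.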

Stochastic Mirror Descent on Overparameterized Nonlinear Models: Convergence, Implicit Regularization, and Generalization

1 code implementation • 10 Jun 2019 • Navid Azizan, Sahin Lale, Babak Hassibi

Most modern learning problems are highly overparameterized, meaning that there are many more parameters than the number of training data points, and as a result, the training loss may have infinitely many global minima (parameter vectors that perfectly interpolate the training data).

A Stochastic Interpretation of Stochastic Mirror Descent: Risk-Sensitive Optimality

no code implementations • 3 Apr 2019 • Navid Azizan, Babak Hassibi

Stochastic mirror descent (SMD) is a fairly new family of algorithms that has recently found a wide range of applications in optimization, machine learning, and control.

Stochastic Gradient/Mirror Descent: Minimax Optimality and Implicit Regularization

no code implementations • ICLR 2019 • Navid Azizan, Babak Hassibi

In an attempt to shed some light on why this is the case, we revisit some minimax properties of stochastic gradient descent (SGD) for the square loss of linear models, originally developed in the 1990s, and extend them to general stochastic mirror descent (SMD) algorithms for general loss functions and nonlinear models.
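For concreteness, the classical setting being extended can be written down directly: for a linear model with square loss, the SGD update on the sampled pair $(x_t, y_t)$ is

$$w_{t+1} = w_t - \eta \, (w_t^\top x_t - y_t) \, x_t,$$

and it is this recursion whose minimax (worst-case) optimality properties from the 1990s are revisited and then extended to general SMD algorithms, loss functions, and nonlinear models.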
