Search Results for author: Navid Azizan

Found 27 papers, 10 papers with code

Learning Chaotic Dynamics with Embedded Dissipativity

no code implementations • 1 Oct 2024 • Sunbochen Tang, Themistoklis Sapsis, Navid Azizan

More specifically, by leveraging control-theoretic ideas, we derive algebraic conditions based on the learned energy-like function that ensure asymptotic convergence to an invariant level set.
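For intuition, such a condition can resemble a Lyapunov-style inequality of the form $\nabla V_\theta(x)^\top f_\theta(x) \le -\alpha\,(V_\theta(x) - c)$ whenever $V_\theta(x) > c$, where $V_\theta$ is the learned energy-like function, $f_\theta$ the learned dynamics, and $\{x : V_\theta(x) \le c\}$ the invariant sublevel set that trajectories converge to. This is an illustrative form rather than the paper's exact condition, and the notation ($\alpha$, $c$) is ours.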

Meta-Learning for Adaptive Control with Automated Mirror Descent

no code implementations • 29 Jul 2024 • Sunbochen Tang, Haoyuan Sun, Navid Azizan

In this paper, we propose a novel method that combines meta-learning and adaptation laws based on mirror descent, a popular generalization of gradient descent, which takes advantage of the potentially non-Euclidean geometry of the parameter space.
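For readers unfamiliar with mirror descent, the sketch below shows a generic mirror-descent parameter update with a $q$-norm potential. It is an illustrative stand-in for the adaptation laws the paper meta-learns, not the authors' implementation; the choice of potential, step size, and toy least-squares objective are our assumptions.

```python
import numpy as np

# Minimal mirror-descent sketch with a q-norm potential psi(w) = (1/q) * sum |w_i|^q.
# Illustrative only; the paper's meta-learned adaptation laws are not reproduced here.

def grad_psi(w, q):
    """Mirror map: gradient of the q-norm potential."""
    return np.sign(w) * np.abs(w) ** (q - 1)

def grad_psi_inv(z, q):
    """Inverse mirror map, taking dual variables back to parameters."""
    return np.sign(z) * np.abs(z) ** (1.0 / (q - 1))

def mirror_descent_step(w, loss_grad, lr, q):
    """One mirror-descent update: step in the dual space, then map back."""
    z = grad_psi(w, q) - lr * loss_grad(w)
    return grad_psi_inv(z, q)

# Toy usage: least squares on random data (purely illustrative).
rng = np.random.default_rng(0)
A, b = rng.normal(size=(20, 5)), rng.normal(size=20)
loss_grad = lambda w: A.T @ (A @ w - b) / len(b)
w = np.zeros(5)
for _ in range(200):
    w = mirror_descent_step(w, loss_grad, lr=0.05, q=3.0)
```

With $q = 2$ the update reduces to ordinary gradient descent; other potentials give the non-Euclidean geometries the abstract refers to.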

Meta-Learning

Adapting Differentially Private Synthetic Data to Relational Databases

no code implementations • 29 May 2024 • Kaveh Alimohammadi, Hao Wang, Ojas Gulati, Akash Srivastava, Navid Azizan

Existing differentially private (DP) synthetic data generation mechanisms typically assume a single-source table.

Synthetic Data Generation

Private Synthetic Data Meets Ensemble Learning

no code implementations • 15 Oct 2023 • Haoyuan Sun, Navid Azizan, Akash Srivastava, Hao Wang

When machine learning models are trained on synthetic data and then deployed on real data, there is often a performance drop due to the distribution shift between synthetic and real data.

Diversity, Ensemble Learning

A Unified Approach to Controlling Implicit Regularization via Mirror Descent

no code implementations • 24 Jun 2023 • Haoyuan Sun, Khashayar Gatmiry, Kwangjun Ahn, Navid Azizan

However, existing characterizations of the implicit regularization of different algorithms are confined to either a specific geometry or a particular class of learning problems, indicating the lack of a general approach for controlling implicit regularization.

Classification, Regression

Quantifying Representation Reliability in Self-Supervised Learning Models

1 code implementation • 31 May 2023 • Young-Jin Park, Hao Wang, Shervin Ardeshir, Navid Azizan

Quantifying the reliability of these representations is crucial, as many downstream models rely on them as input for their own tasks.

Self-Supervised Learning, Uncertainty Quantification

SketchOGD: Memory-Efficient Continual Learning

1 code implementation • 25 May 2023 • Benjamin Wright, Youngjae Min, Jeremy Bernstein, Navid Azizan

This paper proposes a memory-efficient solution to catastrophic forgetting, improving upon an established algorithm known as orthogonal gradient descent (OGD).
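To make the memory constraint concrete, the snippet below shows frequent directions, a standard matrix-sketching primitive that summarizes a stream of past gradients with a fixed-size sketch whose row space approximates theirs. It is a minimal illustration of the kind of summary a sketched OGD variant can maintain; whether it matches the paper's exact sketch, and the sketch size, are assumptions on our part.

```python
import numpy as np

# Frequent-directions sketching: keep a small (ell x d) matrix whose row space
# approximates that of all past gradient vectors, using fixed memory.
# Illustrative only; not necessarily the sketch used in SketchOGD.

def frequent_directions_update(sketch, g):
    """Insert gradient g (shape (d,)) into the sketch, shrinking via SVD when full."""
    zero_rows = np.where(~sketch.any(axis=1))[0]
    if len(zero_rows) == 0:
        # Sketch is full: shrink all singular values so the smallest becomes zero.
        _, s, vt = np.linalg.svd(sketch, full_matrices=False)
        s = np.sqrt(np.maximum(s ** 2 - s[-1] ** 2, 0.0))
        sketch = s[:, None] * vt
        zero_rows = np.where(~sketch.any(axis=1))[0]
    sketch[zero_rows[0]] = g
    return sketch

d, ell = 100, 8                       # parameter dimension and sketch size (assumed)
sketch = np.zeros((ell, d))
for _ in range(50):                   # stream of past-task gradients (random here)
    sketch = frequent_directions_update(sketch, np.random.randn(d))
```

New updates can then be projected against the rows of the sketch rather than against every stored gradient, which is what makes the approach memory-efficient.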

Continual Learning

On the Effects of Data Heterogeneity on the Convergence Rates of Distributed Linear System Solvers

no code implementations • 20 Apr 2023 • Boris Velasevic, Rohit Parasnis, Christopher G. Brinton, Navid Azizan

We consider the problem of solving a large-scale system of linear equations in a distributed or federated manner by a taskmaster and a set of machines, each possessing a subset of the equations.
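As a concrete (and deliberately generic) baseline for this setting, the sketch below has each machine project the taskmaster's current iterate onto the affine solution set of its own block of equations, after which the taskmaster averages the results. This is an illustrative scheme for the problem described above, not necessarily the solver whose convergence rates the paper analyzes.

```python
import numpy as np

# Taskmaster/machines scheme for Ax = b when machine m holds the block (A_m, b_m):
# each machine projects onto {z : A_m z = b_m}, and the taskmaster averages.
# Illustrative baseline only.

def project_onto_equations(x, A_m, b_m):
    """Orthogonal projection of x onto the solution set of A_m z = b_m (full row rank)."""
    r = A_m @ x - b_m
    return x - A_m.T @ np.linalg.solve(A_m @ A_m.T, r)

rng = np.random.default_rng(0)
n, machines, rows = 30, 5, 6
A = rng.normal(size=(machines * rows, n))
b = A @ rng.normal(size=n)            # consistent system by construction
blocks = [(A[m*rows:(m+1)*rows], b[m*rows:(m+1)*rows]) for m in range(machines)]

x = np.zeros(n)
for _ in range(500):                  # taskmaster iterations
    x = np.mean([project_onto_equations(x, Am, bm) for Am, bm in blocks], axis=0)
```

How fast such iterations converge depends on how the equations are split across machines, which is the data-heterogeneity question the paper studies.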

Automatic Gradient Descent: Deep Learning without Hyperparameters

1 code implementation • 11 Apr 2023 • Jeremy Bernstein, Chris Mingard, Kevin Huang, Navid Azizan, Yisong Yue

Automatic gradient descent trains both fully-connected and convolutional networks out-of-the-box and at ImageNet scale.

Second-order methods

Online Learning for Equilibrium Pricing in Markets under Incomplete Information

no code implementations • 21 Mar 2023 • Devansh Jalota, Haoyuan Sun, Navid Azizan

In this incomplete information setting, we consider the online learning problem of learning equilibrium prices over time while jointly optimizing three performance metrics -- unmet demand, cost regret, and payment regret -- pertinent in the context of equilibrium pricing over a horizon of $T$ periods.

Data-Driven Control with Inherent Lyapunov Stability

no code implementations • 6 Mar 2023 • Youngjae Min, Spencer M. Richards, Navid Azizan

Recent advances in learning-based control leverage deep function approximators, such as neural networks, to model the evolution of controlled dynamical systems over time.

Learning Control-Oriented Dynamical Structure from Data

1 code implementation • 6 Feb 2023 • Spencer M. Richards, Jean-Jacques Slotine, Navid Azizan, Marco Pavone

Even for known nonlinear dynamical systems, feedback controller synthesis is a difficult problem that often requires leveraging the particular structure of the dynamics to induce a stable closed-loop system.

Uncertainty-Aware Meta-Learning for Multimodal Task Distributions

1 code implementation • 4 Oct 2022 • Cesar Almecija, Apoorva Sharma, Navid Azizan

In this work, we present UnLiMiTD (uncertainty-aware meta-learning for multimodal task distributions), a novel method for meta-learning that (1) makes probabilistic predictions on in-distribution tasks efficiently, (2) is capable of detecting OoD context data at test time, and (3) performs on heterogeneous, multimodal task distributions.

Bayesian Inference, Few-Shot Learning

One-Pass Learning via Bridging Orthogonal Gradient Descent and Recursive Least-Squares

no code implementations • 28 Jul 2022 • Youngjae Min, Kwangjun Ahn, Navid Azizan

While deep neural networks are capable of achieving state-of-the-art performance in various domains, their training typically requires iterating for many passes over the dataset.

Uncertainty in Contrastive Learning: On the Predictability of Downstream Performance

no code implementations • 19 Jul 2022 • Shervin Ardeshir, Navid Azizan

In this work, we study whether the uncertainty of such a representation can be quantified for a single datapoint in a meaningful way.

Contrastive Learning, Decision Making

Mirror Descent Maximizes Generalized Margin and Can Be Implemented Efficiently

no code implementations • 25 May 2022 • Haoyuan Sun, Kwangjun Ahn, Christos Thrampoulidis, Navid Azizan

Driven by the empirical success and wide use of deep neural networks, understanding the generalization performance of overparameterized models has become an increasingly popular question.

Open-Ended Question Answering

Control-oriented meta-learning

1 code implementation • 14 Apr 2022 • Spencer M. Richards, Navid Azizan, Jean-Jacques Slotine, Marco Pavone

Real-time adaptation is imperative to the control of robots operating in complex, dynamic environments.

Meta-Learning, Regression

Online Learning for Traffic Routing under Unknown Preferences

1 code implementation • 31 Mar 2022 • Devansh Jalota, Karthik Gopalakrishnan, Navid Azizan, Ramesh Johari, Marco Pavone

By updating tolls at each period, we show that our approach obtains an expected regret and road capacity violation of $O(\sqrt{T})$, where $T$ is the number of periods over which tolls are updated.

A Unified View of SDP-based Neural Network Verification through Completely Positive Programming

no code implementations • 6 Mar 2022 • Robin Brown, Edward Schmerling, Navid Azizan, Marco Pavone

Verifying that input-output relationships of a neural network conform to prescribed operational specifications is a key enabler towards deploying these networks in safety-critical applications.

Explicit Regularization via Regularizer Mirror Descent

no code implementations • 22 Feb 2022 • Navid Azizan, Sahin Lale, Babak Hassibi

RMD starts with a standard cost, which is the sum of the training loss and a convex regularizer of the weights.
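In symbols (notation ours, for illustration), this cost is $C(w) = \sum_i \ell(f(x_i; w), y_i) + R(w)$, where $\ell$ is the per-example training loss and $R$ is the convex regularizer of the weights $w$.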

Continual Learning

Adaptive-Control-Oriented Meta-Learning for Nonlinear Systems

1 code implementation • 7 Mar 2021 • Spencer M. Richards, Navid Azizan, Jean-Jacques Slotine, Marco Pavone

Real-time adaptation is imperative to the control of robots operating in complex, dynamic environments.

Meta-Learning, Regression

Sketching Curvature for Efficient Out-of-Distribution Detection for Deep Neural Networks

2 code implementations • 24 Feb 2021 • Apoorva Sharma, Navid Azizan, Marco Pavone

In order to safely deploy Deep Neural Networks (DNNs) within the perception pipelines of real-time decision making systems, there is a need for safeguards that can detect out-of-training-distribution (OoD) inputs both efficiently and accurately.

Decision Making, Out-of-Distribution Detection +1

Orthogonal Gradient Descent for Continual Learning

no code implementations • 15 Oct 2019 • Mehrdad Farajtabar, Navid Azizan, Alex Mott, Ang Li

In this paper, we propose to address this issue from a parameter space perspective and study an approach to restrict the direction of the gradient updates to avoid forgetting previously-learned data.
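As a minimal sketch of this idea (our own illustrative implementation, not the authors' code), new gradients can be projected onto the orthogonal complement of directions spanned by gradients stored from previous tasks:

```python
import numpy as np

# Orthogonal-gradient-descent (OGD) sketch: keep an orthonormal basis of
# past-task gradient directions and remove those components from new gradients,
# so updates on the current task avoid interfering with earlier tasks.

class OGDProjector:
    def __init__(self):
        self.basis = []  # orthonormal directions spanning stored past-task gradients

    def add_direction(self, g):
        """Orthonormalize g against the stored basis and keep it (Gram-Schmidt)."""
        v = g.astype(float).copy()
        for u in self.basis:
            v -= (u @ v) * u
        n = np.linalg.norm(v)
        if n > 1e-10:
            self.basis.append(v / n)

    def project(self, g):
        """Remove the components of g lying in the span of past-task gradients."""
        v = g.astype(float).copy()
        for u in self.basis:
            v -= (u @ v) * u
        return v

# Usage: after finishing a task, store its gradients; during the next task,
# update parameters with the projected gradient instead of the raw one.
proj = OGDProjector()
proj.add_direction(np.array([1.0, 0.0, 0.0]))
update = proj.project(np.array([0.5, 0.2, -0.1]))   # -> array([0. ,  0.2, -0.1])
```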

Continual Learning

Stochastic Mirror Descent on Overparameterized Nonlinear Models

no code implementations • 25 Sep 2019 • Navid Azizan, Sahin Lale, Babak Hassibi

On the theory side, we show that in the overparameterized nonlinear setting, if the initialization is close enough to the manifold of global optima, SMD with sufficiently small step size converges to a global minimum that is approximately the closest global minimum in Bregman divergence, thus attaining approximate implicit regularization.
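For context (standard definitions, stated here rather than quoted from the paper): SMD with a strictly convex potential $\psi$ updates $\nabla\psi(w_{t+1}) = \nabla\psi(w_t) - \eta\,\nabla L_{i_t}(w_t)$, and the Bregman divergence is $D_\psi(w, w') = \psi(w) - \psi(w') - \nabla\psi(w')^\top (w - w')$. The statement above says the limit point approximately solves $\min_{w \in \mathcal{W}} D_\psi(w, w_0)$ over the set $\mathcal{W}$ of global minima, with $w_0$ the initialization.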

Stochastic Mirror Descent on Overparameterized Nonlinear Models: Convergence, Implicit Regularization, and Generalization

1 code implementation • 10 Jun 2019 • Navid Azizan, Sahin Lale, Babak Hassibi

Most modern learning problems are highly overparameterized, meaning that there are many more parameters than the number of training data points, and as a result, the training loss may have infinitely many global minima (parameter vectors that perfectly interpolate the training data).

A Stochastic Interpretation of Stochastic Mirror Descent: Risk-Sensitive Optimality

no code implementations • 3 Apr 2019 • Navid Azizan, Babak Hassibi

Stochastic mirror descent (SMD) is a fairly new family of algorithms that has recently found a wide range of applications in optimization, machine learning, and control.

Stochastic Gradient/Mirror Descent: Minimax Optimality and Implicit Regularization

no code implementations • ICLR 2019 • Navid Azizan, Babak Hassibi

In an attempt to shed some light on why this is the case, we revisit some minimax properties of stochastic gradient descent (SGD) for the square loss of linear models (originally developed in the 1990s) and extend them to general stochastic mirror descent (SMD) algorithms for general loss functions and nonlinear models.
