no code implementations • 1 Oct 2024 • Sunbochen Tang, Themistoklis Sapsis, Navid Azizan
More specifically, by leveraging control-theoretic ideas, we derive algebraic conditions based on the learned energy-like function that ensure asymptotic convergence to an invariant level set.
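The following is a minimal illustrative sketch, not the paper's exact conditions: it numerically checks a Lyapunov-style decrease requirement for a learned energy-like function V, which would certify convergence to the sublevel set {x : V(x) <= c}. The callables V, grad_V, and the dynamics f are hypothetical placeholders.

```python
# Illustrative sketch (not the paper's exact conditions): check that a learned
# energy-like function V decreases along the dynamics outside the target
# sublevel set {V <= c}, a standard Lyapunov-type certificate of convergence.
import numpy as np

def certifies_convergence(V, grad_V, f, samples, c=1.0, alpha=0.1):
    """Check a decrease condition on V for sampled states outside {V <= c}."""
    for x in samples:
        if V(x) > c:  # only states outside the target level set must decrease
            v_dot = np.dot(grad_V(x), f(x))      # time derivative of V along the flow
            if v_dot > -alpha * (V(x) - c):      # decrease condition violated
                return False
    return True

# Toy example: quadratic energy with stable linear dynamics.
A = np.array([[-1.0, 0.5], [-0.5, -1.0]])
V = lambda x: float(x @ x)
grad_V = lambda x: 2.0 * x
f = lambda x: A @ x
samples = [np.random.randn(2) * 3 for _ in range(1000)]
print(certifies_convergence(V, grad_V, f, samples))
```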
no code implementations • 29 Jul 2024 • Sunbochen Tang, Haoyuan Sun, Navid Azizan
In this paper, we propose a novel method that combines meta-learning and adaptation laws based on mirror descent, a popular generalization of gradient descent, which takes advantage of the potentially non-Euclidean geometry of the parameter space.
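For intuition only, here is a minimal sketch of the mirror descent primitive the abstract refers to, using the negative-entropy potential on the simplex (the exponentiated-gradient update). It is not the paper's meta-learned adaptation law.

```python
# Mirror descent step with a non-Euclidean potential (negative entropy),
# i.e., the exponentiated-gradient / multiplicative update on the simplex.
import numpy as np

def mirror_descent_step_entropy(theta, grad, lr=0.1):
    """Map to the dual via log, take a gradient step, map back, renormalize."""
    logits = np.log(theta) - lr * grad
    theta_new = np.exp(logits)
    return theta_new / theta_new.sum()

theta = np.ones(4) / 4                       # parameters on the probability simplex
grad = np.array([0.5, -0.2, 0.1, 0.0])
print(mirror_descent_step_entropy(theta, grad))
```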
no code implementations • 29 May 2024 • Kaveh Alimohammadi, Hao Wang, Ojas Gulati, Akash Srivastava, Navid Azizan
Existing differentially private (DP) synthetic data generation mechanisms typically assume a single-source table.
no code implementations • 15 Oct 2023 • Haoyuan Sun, Navid Azizan, Akash Srivastava, Hao Wang
When machine learning models are trained on synthetic data and then deployed on real data, there is often a performance drop due to the distribution shift between synthetic and real data.
no code implementations • 24 Jun 2023 • Haoyuan Sun, Khashayar Gatmiry, Kwangjun Ahn, Navid Azizan
However, the implicit regularization of different algorithms is confined to either a specific geometry or a particular class of learning problems, indicating the lack of a general approach for controlling implicit regularization.
1 code implementation • 31 May 2023 • Young-Jin Park, Hao Wang, Shervin Ardeshir, Navid Azizan
Quantifying the reliability of these representations is crucial, as many downstream models rely on them as input for their own tasks.
1 code implementation • 25 May 2023 • Benjamin Wright, Youngjae Min, Jeremy Bernstein, Navid Azizan
This paper proposes a memory-efficient solution to catastrophic forgetting, improving upon an established algorithm known as orthogonal gradient descent (OGD).
no code implementations • 20 Apr 2023 • Boris Velasevic, Rohit Parasnis, Christopher G. Brinton, Navid Azizan
We consider the problem of solving a large-scale system of linear equations in a distributed or federated manner by a taskmaster and a set of machines, each possessing a subset of the equations.
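As a hedged baseline sketch of this setting (a generic projection-and-averaging scheme, not necessarily the method proposed in the paper): each machine holds a block (A_i, b_i) of the equations, projects the current iterate onto the affine solution set of its block, and the taskmaster averages the results.

```python
# Generic averaged-projections baseline for distributed linear equations.
import numpy as np

def local_projection(x, A_i, b_i):
    """Project x onto {z : A_i z = b_i} (A_i assumed full row rank)."""
    return x - np.linalg.pinv(A_i) @ (A_i @ x - b_i)

def distributed_solve(blocks, dim, iters=200):
    x = np.zeros(dim)
    for _ in range(iters):
        # Taskmaster averages the machines' local projections.
        x = np.mean([local_projection(x, A_i, b_i) for A_i, b_i in blocks], axis=0)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))
x_true = rng.standard_normal(4)
b = A @ x_true
blocks = [(A[i:i + 2], b[i:i + 2]) for i in range(0, 8, 2)]  # 4 machines, 2 equations each
x_hat = distributed_solve(blocks, dim=4, iters=2000)
print(np.linalg.norm(A @ x_hat - b))   # residual shrinks toward zero
```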
1 code implementation • 11 Apr 2023 • Jeremy Bernstein, Chris Mingard, Kevin Huang, Navid Azizan, Yisong Yue
Automatic gradient descent trains both fully-connected and convolutional networks out-of-the-box and at ImageNet scale.
no code implementations • 21 Mar 2023 • Devansh Jalota, Haoyuan Sun, Navid Azizan
In this incomplete-information setting, we consider the online problem of learning equilibrium prices over time while jointly optimizing three performance metrics -- unmet demand, cost regret, and payment regret -- that are pertinent to equilibrium pricing over a horizon of $T$ periods.
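Purely for intuition, the sketch below shows a generic tatonnement-style online price update (raise a price when observed demand exceeds capacity, lower it otherwise, projected to stay nonnegative). It is not the paper's algorithm and carries none of its regret guarantees; the demand function and capacities are made up for illustration.

```python
# Generic online price adjustment driven by observed excess demand.
import numpy as np

def online_price_updates(demand_fn, capacity, T=100, lr=0.05):
    prices = np.full_like(capacity, 0.5)
    for t in range(T):
        demand = demand_fn(prices, t)                                  # demand at posted prices
        prices = np.maximum(prices + lr * (demand - capacity), 0.0)    # projected price step
    return prices

capacity = np.array([1.0, 1.0, 1.0])
demand_fn = lambda p, t: np.maximum(2.0 - p, 0.0)   # simple downward-sloping demand
print(online_price_updates(demand_fn, capacity))    # prices approach the market-clearing level
```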
no code implementations • 6 Mar 2023 • Youngjae Min, Spencer M. Richards, Navid Azizan
Recent advances in learning-based control leverage deep function approximators, such as neural networks, to model the evolution of controlled dynamical systems over time.
1 code implementation • 6 Feb 2023 • Spencer M. Richards, Jean-Jacques Slotine, Navid Azizan, Marco Pavone
Even for known nonlinear dynamical systems, feedback controller synthesis is a difficult problem that often requires leveraging the particular structure of the dynamics to induce a stable closed-loop system.
1 code implementation • 4 Oct 2022 • Cesar Almecija, Apoorva Sharma, Navid Azizan
In this work, we present UnLiMiTD (uncertainty-aware meta-learning for multimodal task distributions), a novel method for meta-learning that (1) makes probabilistic predictions on in-distribution tasks efficiently, (2) is capable of detecting OoD context data at test time, and (3) performs well on heterogeneous, multimodal task distributions.
no code implementations • 28 Jul 2022 • Youngjae Min, Kwangjun Ahn, Navid Azizan
While deep neural networks are capable of achieving state-of-the-art performance in various domains, their training typically requires iterating for many passes over the dataset.
no code implementations • 19 Jul 2022 • Shervin Ardeshir, Navid Azizan
In this work, we study whether the uncertainty of such a representation can be quantified for a single datapoint in a meaningful way.
no code implementations • 25 May 2022 • Haoyuan Sun, Kwangjun Ahn, Christos Thrampoulidis, Navid Azizan
Driven by the empirical success and wide use of deep neural networks, understanding the generalization performance of overparameterized models has become an increasingly popular question.
1 code implementation • 14 Apr 2022 • Spencer M. Richards, Navid Azizan, Jean-Jacques Slotine, Marco Pavone
Real-time adaptation is imperative to the control of robots operating in complex, dynamic environments.
1 code implementation • 31 Mar 2022 • Devansh Jalota, Karthik Gopalakrishnan, Navid Azizan, Ramesh Johari, Marco Pavone
We show that our approach obtains an expected regret and road capacity violation of $O(\sqrt{T})$, where $T$ is the number of periods over which tolls are updated.
no code implementations • 6 Mar 2022 • Robin Brown, Edward Schmerling, Navid Azizan, Marco Pavone
Verifying that input-output relationships of a neural network conform to prescribed operational specifications is a key enabler towards deploying these networks in safety-critical applications.
no code implementations • 22 Feb 2022 • Navid Azizan, Sahin Lale, Babak Hassibi
Regularizer Mirror Descent (RMD) starts with a standard cost, which is the sum of the training loss and a convex regularizer of the weights.
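The sketch below only illustrates the regularized-cost setup just described, training on loss plus a convex (here ridge) regularizer with the Euclidean potential, so the mirror step reduces to plain SGD. It is not the RMD update itself.

```python
# Stochastic-gradient-style training on a regularized cost L(w) + lam * R(w),
# shown with R(w) = 0.5 * ||w||^2 and the Euclidean mirror map (i.e., SGD).
import numpy as np

def regularized_sgd_step(w, grad_loss, lam=1e-2, lr=0.1):
    """One step on the regularized cost: gradient of loss plus ridge term."""
    grad_total = grad_loss + lam * w
    return w - lr * grad_total

# Toy least-squares example.
rng = np.random.default_rng(0)
X, y = rng.standard_normal((50, 5)), rng.standard_normal(50)
w = np.zeros(5)
for _ in range(500):
    grad_loss = X.T @ (X @ w - y) / len(y)
    w = regularized_sgd_step(w, grad_loss)
print(w)
```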
1 code implementation • 7 Mar 2021 • Spencer M. Richards, Navid Azizan, Jean-Jacques Slotine, Marco Pavone
Real-time adaptation is imperative to the control of robots operating in complex, dynamic environments.
2 code implementations • 24 Feb 2021 • Apoorva Sharma, Navid Azizan, Marco Pavone
In order to safely deploy Deep Neural Networks (DNNs) within the perception pipelines of real-time decision making systems, there is a need for safeguards that can detect out-of-training-distribution (OoD) inputs both efficiently and accurately.
no code implementations • 15 Oct 2019 • Mehrdad Farajtabar, Navid Azizan, Alex Mott, Ang Li
In this paper, we propose to address this issue from a parameter space perspective and study an approach to restrict the direction of the gradient updates to avoid forgetting previously learned data.
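A minimal sketch of the orthogonal-gradient idea described above: project the current task's gradient onto the subspace orthogonal to gradient directions stored from previous tasks, so updates approximately preserve earlier predictions. This is a simplified illustration, not the paper's full method.

```python
# Project a new gradient orthogonally to stored old-task gradient directions.
import numpy as np

def project_orthogonal(grad, stored_dirs):
    """Remove components of `grad` along the span of the stored directions."""
    if not stored_dirs:
        return grad
    basis, _ = np.linalg.qr(np.stack(stored_dirs, axis=1))   # orthonormal basis
    return grad - basis @ (basis.T @ grad)

stored = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]  # old-task directions
new_grad = np.array([0.3, -0.2, 0.7])
print(project_orthogonal(new_grad, stored))   # -> [0., 0., 0.7]
```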
no code implementations • 25 Sep 2019 • Navid Azizan, Sahin Lale, Babak Hassibi
On the theory side, we show that in the overparameterized nonlinear setting, if the initialization is close enough to the manifold of global optima, SMD with sufficiently small step size converges to a global minimum that is approximately the closest global minimum in Bregman divergence, thus attaining approximate implicit regularization.
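For reference, the standard SMD update with a strictly convex potential $\psi$ and the Bregman divergence the statement refers to are recapped below; this is textbook material, not new content from the paper.

```latex
% SMD update on a stochastically sampled loss L_{i_t}, and the Bregman divergence.
\[
  \nabla \psi(w_{t+1}) = \nabla \psi(w_t) - \eta \,\nabla L_{i_t}(w_t),
  \qquad
  D_\psi(w, w') = \psi(w) - \psi(w') - \nabla \psi(w')^\top (w - w').
\]
% The result above says that, from an initialization w_0 close to the manifold of
% global optima W*, SMD converges to a global minimum w_inf with
% D_psi(w_inf, w_0) approximately equal to min_{w in W*} D_psi(w, w_0).
```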
1 code implementation • 10 Jun 2019 • Navid Azizan, Sahin Lale, Babak Hassibi
Most modern learning problems are highly overparameterized, meaning that there are many more parameters than the number of training data points, and as a result, the training loss may have infinitely many global minima (parameter vectors that perfectly interpolate the training data).
no code implementations • 3 Apr 2019 • Navid Azizan, Babak Hassibi
Stochastic mirror descent (SMD) is a fairly new family of algorithms that has recently found a wide range of applications in optimization, machine learning, and control.
no code implementations • ICLR 2019 • Navid Azizan, Babak Hassibi
In an attempt to shed some light on why this is the case, we revisit some minimax properties of stochastic gradient descent (SGD) for the square loss of linear models, originally developed in the 1990s, and extend them to general stochastic mirror descent (SMD) algorithms for general loss functions and nonlinear models.