Search Results for author: Kamyar Azizzadenesheli

Found 70 papers, 34 papers with code

Universal Functional Regression with Neural Operator Flows

no code implementations • 3 Apr 2024 • Yaozhong Shi, Angela F. Gao, Zachary E. Ross, Kamyar Azizzadenesheli

We empirically study the performance of OpFlow on regression and generation tasks, with data generated from Gaussian processes with known posterior forms and from non-Gaussian processes, as well as on real-world earthquake seismograms, whose distribution has no known closed form.

Gaussian Processes regression +1

Pretraining Codomain Attention Neural Operators for Solving Multiphysics PDEs

1 code implementation • 19 Mar 2024 • Md Ashiqur Rahman, Robert Joseph George, Mogab Elleithy, Daniel Leibovici, Zongyi Li, Boris Bonev, Colin White, Julius Berner, Raymond A. Yeh, Jean Kossaifi, Kamyar Azizzadenesheli, Anima Anandkumar

On complex downstream tasks with limited data, such as fluid flow simulations and fluid-structure interactions, we found CoDA-NO to outperform existing methods on the few-shot learning task by over $36\%$.

Few-Shot Learning Self-Supervised Learning

Neural Operators with Localized Integral and Differential Kernels

no code implementations • 26 Feb 2024 • Miguel Liu-Schiaffini, Julius Berner, Boris Bonev, Thorsten Kurth, Kamyar Azizzadenesheli, Anima Anandkumar

In this work, we present a principled approach to operator learning that can capture local features under two frameworks by learning differential operators and integral operators with locally supported kernels.

Operator learning
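
The excerpt above describes capturing local features with locally supported kernels. As a purely illustrative sketch (not the paper's implementation), the snippet below shows how a small, locally supported stencil applied as a convolution acts like a discretized differential operator; in the paper such local kernels are learned rather than fixed.

    # Illustrative only: a locally supported 3x3 stencil applied as a convolution
    # behaves like a discretized differential operator (here a fixed Laplacian).
    # The paper learns such local kernels; this sketch only shows the mechanism.
    import torch
    import torch.nn.functional as F

    def local_stencil_layer(u, h=1.0):
        """u: (batch, 1, H, W) functions sampled on a uniform grid with spacing h."""
        stencil = torch.tensor([[0., 1., 0.],
                                [1., -4., 1.],
                                [0., 1., 0.]]) / h ** 2
        return F.conv2d(u, stencil.view(1, 1, 3, 3), padding=1)

    u = torch.randn(4, 1, 64, 64)          # four input functions on a 64x64 grid
    print(local_stencil_layer(u).shape)    # torch.Size([4, 1, 64, 64])

In practice the stencil entries would be learnable (e.g. a small nn.Conv2d) and scaled with the grid spacing so the layer remains meaningful across resolutions.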

Calibrated Uncertainty Quantification for Operator Learning via Conformal Prediction

no code implementations • 2 Feb 2024 • Ziqi Ma, Kamyar Azizzadenesheli, Anima Anandkumar

Operator learning has been increasingly adopted in scientific and engineering applications, many of which require calibrated uncertainty quantification.

Conformal Prediction Operator learning +1

Equivariant Graph Neural Operator for Modeling 3D Dynamics

no code implementations • 19 Jan 2024 • Minkai Xu, Jiaqi Han, Aaron Lou, Jean Kossaifi, Arvind Ramanathan, Kamyar Azizzadenesheli, Jure Leskovec, Stefano Ermon, Anima Anandkumar

Modeling the complex three-dimensional (3D) dynamics of relational systems is an important problem in the natural sciences, with applications ranging from molecular simulations to particle mechanics.

Operator learning

Multi-Grid Tensorized Fourier Neural Operator for High-Resolution PDEs

no code implementations • 29 Sep 2023 • Jean Kossaifi, Nikola Kovachki, Kamyar Azizzadenesheli, Anima Anandkumar

Our contributions are threefold: i) we enable parallelization over input samples with a novel multi-grid-based domain decomposition, ii) we represent the parameters of the model in a high-order latent subspace of the Fourier domain, through a global tensor factorization, resulting in an extreme reduction in the number of parameters and improved generalization, and iii) we propose architectural improvements to the backbone FNO.

Operator learning
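
Contribution ii) stores the Fourier-domain weights in a factorized form. The sketch below is a minimal illustration of that parameter-sharing idea, assuming a per-layer CP-style factorization of the spectral weight tensor; it is not the paper's implementation, which uses a global tensor factorization across the model.

    # Minimal sketch: a 2D Fourier layer whose spectral weights are stored as
    # low-rank CP factors instead of a dense (in_ch, out_ch, modes1, modes2)
    # tensor, reducing the parameter count. Illustrative, not the paper's code.
    import torch
    import torch.nn as nn

    class FactorizedSpectralConv2d(nn.Module):
        def __init__(self, in_ch, out_ch, modes1, modes2, rank=4):
            super().__init__()
            s = 0.02
            self.a = nn.Parameter(s * torch.randn(rank, in_ch, dtype=torch.cfloat))
            self.b = nn.Parameter(s * torch.randn(rank, out_ch, dtype=torch.cfloat))
            self.c = nn.Parameter(s * torch.randn(rank, modes1, dtype=torch.cfloat))
            self.d = nn.Parameter(s * torch.randn(rank, modes2, dtype=torch.cfloat))
            self.modes1, self.modes2 = modes1, modes2

        def forward(self, x):                              # x: (batch, in_ch, H, W)
            x_ft = torch.fft.rfft2(x)
            # Reconstruct the full spectral weight as a sum of rank-1 terms.
            w = torch.einsum("ri,ro,rm,rn->iomn", self.a, self.b, self.c, self.d)
            out_ft = torch.zeros(x.shape[0], w.shape[1], x_ft.shape[-2],
                                 x_ft.shape[-1], dtype=torch.cfloat, device=x.device)
            out_ft[:, :, :self.modes1, :self.modes2] = torch.einsum(
                "bimn,iomn->bomn", x_ft[:, :, :self.modes1, :self.modes2], w)
            return torch.fft.irfft2(out_ft, s=x.shape[-2:])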

Neural Operators for Accelerating Scientific Simulations and Design

no code implementations • 27 Sep 2023 • Kamyar Azizzadenesheli, Nikola Kovachki, Zongyi Li, Miguel Liu-Schiaffini, Jean Kossaifi, Anima Anandkumar

Scientific discovery and engineering design are currently limited by the time and cost of physical experiments, which are selected mostly through trial and error and intuition, both of which require deep domain expertise.

Super-Resolution Weather Forecasting

Broadband Ground Motion Synthesis via Generative Adversarial Neural Operators: Development and Validation

1 code implementation • 7 Sep 2023 • Yaozhong Shi, Grigorios Lavrentiadis, Domniki Asimaki, Zachary E. Ross, Kamyar Azizzadenesheli

Lastly, cGM-GANO produces similar median scaling to traditional GMMs for frequencies greater than 1 Hz for both PSA and EAS but underestimates the aleatory variability of EAS.

Motion Synthesis

Tipping Point Forecasting in Non-Stationary Dynamics on Function Spaces

no code implementations • 17 Aug 2023 • Miguel Liu-Schiaffini, Clare E. Singer, Nikola Kovachki, Tapio Schneider, Kamyar Azizzadenesheli, Anima Anandkumar

Tipping points are abrupt, drastic, and often irreversible changes in the evolution of non-stationary and chaotic dynamical systems.

Conformal Prediction

Speeding up Fourier Neural Operators via Mixed Precision

1 code implementation • 27 Jul 2023 • Colin White, Renbo Tu, Jean Kossaifi, Gennady Pekhimenko, Kamyar Azizzadenesheli, Anima Anandkumar

In this work, we (i) profile memory and runtime for FNO with full and mixed-precision training, (ii) conduct a study on the numerical stability of mixed-precision training of FNO, and (iii) devise a training routine which substantially decreases training time and memory usage (up to 34%), with little or no reduction in accuracy, on the Navier-Stokes and Darcy flow equations.
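
Point (iii) amounts to running part of the forward pass in reduced precision. The following is a minimal sketch of mixed-precision training with PyTorch's automatic mixed precision; `model`, `loader`, and the loss are placeholders, and the paper's actual routine (which additionally handles the precision of the FFT path with care) is not reproduced here.

    # Generic mixed-precision training loop with PyTorch AMP. `model` and
    # `loader` are placeholders; this is a sketch, not the paper's routine.
    import torch

    def train_mixed_precision(model, loader, epochs=1, lr=1e-3, device="cuda"):
        model.to(device)
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        scaler = torch.cuda.amp.GradScaler()      # rescales the loss to avoid fp16 underflow
        loss_fn = torch.nn.MSELoss()
        for _ in range(epochs):
            for a, u in loader:                   # input/target function pairs on a grid
                a, u = a.to(device), u.to(device)
                opt.zero_grad(set_to_none=True)
                with torch.autocast(device_type="cuda", dtype=torch.float16):
                    loss = loss_fn(model(a), u)   # forward pass runs in reduced precision
                scaler.scale(loss).backward()
                scaler.step(opt)
                scaler.update()
        return model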

Provable and Practical: Efficient Exploration in Reinforcement Learning via Langevin Monte Carlo

1 code implementation • 29 May 2023 • Haque Ishfaq, Qingfeng Lan, Pan Xu, A. Rupam Mahmood, Doina Precup, Anima Anandkumar, Kamyar Azizzadenesheli

One of the key shortcomings of existing Thompson sampling algorithms is the need to perform a Gaussian approximation of the posterior distribution, which is not a good surrogate in most practical settings.

Efficient Exploration reinforcement-learning +2

Score-based Diffusion Models in Function Space

no code implementations • 14 Feb 2023 • Jae Hyun Lim, Nikola B. Kovachki, Ricardo Baptista, Christopher Beckham, Kamyar Azizzadenesheli, Jean Kossaifi, Vikram Voleti, Jiaming Song, Karsten Kreis, Jan Kautz, Christopher Pal, Arash Vahdat, Anima Anandkumar

They consist of a forward process that perturbs input data with Gaussian white noise and a reverse process that learns a score function to generate samples by denoising.

Denoising
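
The sentence above describes the standard forward-noising / reverse-denoising construction. Below is a minimal denoising score matching loss at a single noise level, written in the familiar finite-dimensional setting as a stand-in; the paper's contribution is the function-space formulation, which this sketch does not capture, and `score_net` is a placeholder.

    # Denoising score matching at one noise level `sigma`: the network is
    # trained to predict the score of the Gaussian perturbation kernel.
    import torch

    def dsm_loss(score_net, x, sigma):
        """score_net(x_noisy, sigma) estimates grad_x log p_sigma(x_noisy)."""
        eps = torch.randn_like(x)
        x_noisy = x + sigma * eps
        target = -eps / sigma                # score of N(x, sigma^2 I) at x_noisy
        return ((score_net(x_noisy, sigma) - target) ** 2).mean()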

PaCMO: Partner Dependent Human Motion Generation in Dyadic Human Activity using Neural Operators

no code implementations • 25 Nov 2022 • Md Ashiqur Rahman, Jasorsi Ghosh, Hrishikesh Viswanath, Kamyar Azizzadenesheli, Aniket Bera

In contrast to concurrent works, which mainly focus on generating a single actor's motion from a textual description, we generate the motion of one actor from the motion of the other participating actor.

Off-Policy Risk Assessment in Markov Decision Processes

no code implementations • 21 Sep 2022 • Audrey Huang, Liu Leqi, Zachary Chase Lipton, Kamyar Azizzadenesheli

To mitigate these problems, we incorporate model-based estimation to develop the first doubly robust (DR) estimator for the CDF of returns in MDPs.

Multi-Armed Bandits

Compactly Restrictable Metric Policy Optimization Problems

no code implementations • 12 Jul 2022 • Victor D. Dorobantu, Kamyar Azizzadenesheli, Yisong Yue

We study policy optimization problems for deterministic Markov decision processes (MDPs) with metric state and action spaces, which we refer to as Metric Policy Optimization Problems (MPOPs).

Continuous Control

Supervised Learning with General Risk Functionals

no code implementations • 27 Jun 2022 • Liu Leqi, Audrey Huang, Zachary C. Lipton, Kamyar Azizzadenesheli

Standard uniform convergence results bound the generalization gap of the expected loss over a hypothesis class.

Langevin Monte Carlo for Contextual Bandits

1 code implementation • 22 Jun 2022 • Pan Xu, Hongkai Zheng, Eric Mazumdar, Kamyar Azizzadenesheli, Anima Anandkumar

Existing Thompson sampling-based algorithms need to construct a Laplace approximation (i.e., a Gaussian distribution) of the posterior distribution, which is inefficient to sample from in high-dimensional applications with general covariance matrices.

Multi-Armed Bandits Thompson Sampling
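
To illustrate the alternative to the Laplace approximation, the sketch below draws an approximate posterior sample of the reward parameter with unadjusted Langevin dynamics and then acts greedily, Thompson-sampling style, for a linear contextual bandit. Step size, prior, and iteration count are illustrative choices, not the paper's exact algorithm.

    # Approximate Thompson sampling via Langevin Monte Carlo for a linear bandit.
    import numpy as np

    def langevin_posterior_sample(X, y, n_steps=200, step=1e-3,
                                  prior_prec=1.0, noise_var=1.0):
        """X: (t, d) contexts of past pulls, y: (t,) observed rewards."""
        d = X.shape[1]
        theta = np.zeros(d)
        for _ in range(n_steps):
            # Gradient of the negative log posterior (Gaussian likelihood + prior).
            grad = X.T @ (X @ theta - y) / noise_var + prior_prec * theta
            theta = theta - step * grad + np.sqrt(2 * step) * np.random.randn(d)
        return theta

    def select_arm(arm_features, X, y):
        theta = langevin_posterior_sample(X, y)
        return int(np.argmax(arm_features @ theta))   # greedy w.r.t. the sampled parameter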

Thompson Sampling Achieves $\tilde O(\sqrt{T})$ Regret in Linear Quadratic Control

no code implementations • 17 Jun 2022 • Taylan Kargin, Sahin Lale, Kamyar Azizzadenesheli, Anima Anandkumar, Babak Hassibi

By carefully prescribing an early exploration strategy and a policy update rule, we show that TS achieves order-optimal regret in adaptive control of multidimensional stabilizable LQRs.

Decision Making Decision Making Under Uncertainty +1

KCRL: Krasovskii-Constrained Reinforcement Learning with Guaranteed Stability in Nonlinear Dynamical Systems

no code implementations • 3 Jun 2022 • Sahin Lale, Yuanyuan Shi, Guannan Qu, Kamyar Azizzadenesheli, Adam Wierman, Anima Anandkumar

However, current reinforcement learning (RL) methods lack stabilization guarantees, which limits their applicability for the control of safety-critical systems.

reinforcement-learning Reinforcement Learning (RL)

Functional Linear Regression of Cumulative Distribution Functions

1 code implementation • 28 May 2022 • Qian Zhang, Anuran Makur, Kamyar Azizzadenesheli

In particular, given $n$ samples with $d$ basis functions, we show estimation error upper bounds of $\widetilde O(\sqrt{d/n})$ for fixed design, random design, and adversarial context cases.

Decision Making regression

Competitive Gradient Optimization

1 code implementation • 27 May 2022 • Abhijeet Vyas, Kamyar Azizzadenesheli

We provide a rate of convergence to stationary points and further propose a generalized class of $\alpha$-coherent functions for which we provide a convergence analysis.

Neural-Fly Enables Rapid Learning for Agile Flight in Strong Winds

1 code implementation • 13 May 2022 • Michael O'Connell, Guanya Shi, Xichen Shi, Kamyar Azizzadenesheli, Anima Anandkumar, Yisong Yue, Soon-Jo Chung

Last, our control design extrapolates to unseen wind conditions, is shown to be effective for outdoor flights with only onboard sensors, and can transfer across drones with minimal performance degradation.

Meta-Learning

Generative Adversarial Neural Operators

2 code implementations • 6 May 2022 • Md Ashiqur Rahman, Manuel A. Florez, Anima Anandkumar, Zachary E. Ross, Kamyar Azizzadenesheli

The inputs to the generator are samples of functions from a user-specified probability measure, e.g., a Gaussian random field (GRF), and the generator outputs are synthetic data functions.

Hyperparameter Optimization
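
Since the generator consumes samples of a Gaussian random field, the sketch below shows one common way to draw such samples on a periodic 2D grid by coloring white noise in Fourier space; the power-law spectrum (alpha, tau) is an illustrative choice, not necessarily the measure used in the paper's experiments.

    # Draw Gaussian random field samples on a periodic grid via spectral coloring.
    import numpy as np

    def sample_grf(n=64, alpha=2.0, tau=3.0, n_samples=1):
        k = np.fft.fftfreq(n, d=1.0 / n)                 # integer wavenumbers
        k2 = k[:, None] ** 2 + k[None, :] ** 2
        spectrum = (k2 + tau ** 2) ** (-alpha / 2.0)     # decay controls smoothness
        noise = np.random.randn(n_samples, n, n) + 1j * np.random.randn(n_samples, n, n)
        field = np.fft.ifft2(noise * spectrum).real
        return field / field.std()

    samples = sample_grf(n_samples=4)                    # (4, 64, 64) input functions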

U-NO: U-shaped Neural Operators

1 code implementation • 23 Apr 2022 • Md Ashiqur Rahman, Zachary E. Ross, Kamyar Azizzadenesheli

We show that U-NO results in an average of 26% and 44% prediction improvement on Darcy's flow and turbulent Navier-Stokes equations, respectively, over the state of the art.

Operator learning

U-FNO -- An enhanced Fourier neural operator-based deep-learning model for multiphase flow

1 code implementation • 3 Sep 2021 • Gege Wen, Zongyi Li, Kamyar Azizzadenesheli, Anima Anandkumar, Sally M. Benson

Here we present U-FNO, a novel neural network architecture for solving multiphase flow problems with superior accuracy, speed, and data efficiency.

Decision Making

Finite-time System Identification and Adaptive Control in Autoregressive Exogenous Systems

no code implementations • 26 Aug 2021 • Sahin Lale, Kamyar Azizzadenesheli, Babak Hassibi, Anima Anandkumar

Using these guarantees, we design adaptive control algorithms for unknown ARX systems with arbitrary strongly convex or convex quadratic regulating costs.

Neural Operator: Learning Maps Between Function Spaces

1 code implementation • 19 Aug 2021 • Nikola Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew Stuart, Anima Anandkumar

The classical development of neural networks has primarily focused on learning mappings between finite dimensional Euclidean spaces or finite sets.

Operator learning

Seismic wave propagation and inversion with Neural Operators

no code implementations • 11 Aug 2021 • Yan Yang, Angela F. Gao, Jorge C. Castellanos, Zachary E. Ross, Kamyar Azizzadenesheli, Robert W. Clayton

We develop a scheme to train Neural Operators on an ensemble of simulations performed with random velocity models and source locations.

Computational Efficiency

Learning Dissipative Dynamics in Chaotic Systems

2 code implementations • 13 Jun 2021 • Zongyi Li, Miguel Liu-Schiaffini, Nikola Kovachki, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew Stuart, Anima Anandkumar

Chaotic systems are notoriously challenging to predict because of their sensitivity to perturbations and errors due to time stepping.

Meta-Adaptive Nonlinear Control: Theory and Algorithms

1 code implementation • NeurIPS 2021 • Guanya Shi, Kamyar Azizzadenesheli, Michael O'Connell, Soon-Jo Chung, Yisong Yue

We provide instantiations of our approach under varying conditions, leading to the first non-asymptotic end-to-end convergence guarantee for multi-task nonlinear control.

Multi-Task Learning Representation Learning

Joint Stabilization and Regret Minimization through Switching in Over-Actuated Systems (extended version)

no code implementations • 31 May 2021 • Jafar Abbaszadeh Chekan, Kamyar Azizzadenesheli, Cedric Langbort

Adaptively controlling and minimizing regret in unknown dynamical systems while controlling the growth of the system state is crucial in real-world applications.

On the Convergence and Optimality of Policy Gradient for Markov Coherent Risk

no code implementations • 4 Mar 2021 • Audrey Huang, Liu Leqi, Zachary C. Lipton, Kamyar Azizzadenesheli

Because optimizing the coherent risk is difficult in Markov decision processes, recent work tends to focus on the Markov coherent risk (MCR), a time-consistent surrogate.

Multi-Agent Multi-Armed Bandits with Limited Communication

no code implementations • 10 Feb 2021 • Mridul Agarwal, Vaneet Aggarwal, Kamyar Azizzadenesheli

With our algorithm, LCC-UCB, each agent enjoys a regret of $\tilde{O}\left(\sqrt{({K/N}+ N)T}\right)$, communicates for $O(\log T)$ steps and broadcasts $O(\log K)$ bits in each communication step.

Multi-Armed Bandits

Deep Bayesian Quadrature Policy Optimization

1 code implementation • 28 Jun 2020 • Akella Ravi Tej, Kamyar Azizzadenesheli, Mohammad Ghavamzadeh, Anima Anandkumar, Yisong Yue

On the other hand, more sample efficient alternatives like Bayesian quadrature methods have received little attention due to their high computational complexity.

Continuous Control Policy Gradient Methods

Competitive Policy Optimization

4 code implementations • 18 Jun 2020 • Manish Prajapat, Kamyar Azizzadenesheli, Alexander Liniger, Yisong Yue, Anima Anandkumar

A core challenge in policy optimization in competitive Markov decision processes is the design of efficient optimization methods with desirable convergence and stability properties.

Policy Gradient Methods

Multipole Graph Neural Operator for Parametric Partial Differential Equations

4 code implementations • NeurIPS 2020 • Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, Anima Anandkumar

One of the main challenges in using deep learning-based methods for simulating physical systems and solving partial differential equations (PDEs) is formulating physics-based data in the desired structure for neural networks.

MeshfreeFlowNet: A Physics-Constrained Deep Continuous Space-Time Super-Resolution Framework

1 code implementation • 1 May 2020 • Chiyu Max Jiang, Soheil Esmaeilzadeh, Kamyar Azizzadenesheli, Karthik Kashinath, Mustafa Mustafa, Hamdi A. Tchelepi, Philip Marcus, Prabhat, Anima Anandkumar

We propose MeshfreeFlowNet, a novel deep learning-based super-resolution framework to generate continuous (grid-free) spatio-temporal solutions from the low-resolution inputs.

Super-Resolution

EikoNet: Solving the Eikonal equation with Deep Neural Networks

1 code implementation • 25 Mar 2020 • Jonathan D. Smith, Kamyar Azizzadenesheli, Zachary E. Ross

Here, we propose EikoNet, a deep learning approach to solving the Eikonal equation, which characterizes the first-arrival-time field in heterogeneous 3D velocity structures.
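
EikoNet fits a travel-time network to the Eikonal equation, whose residual can be enforced with automatic differentiation: the gradient norm of the predicted travel time with respect to the receiver location should equal the local slowness 1/v. The sketch below illustrates that loss; `net` and `velocity` are placeholders, and the factored travel-time form used in the paper is omitted.

    # Physics-informed Eikonal residual: |grad_x T(x_s, x_r)| = 1 / v(x_r).
    import torch

    def eikonal_residual(net, velocity, x_src, x_rec):
        """x_src, x_rec: (batch, 3); velocity maps (batch, 3) -> (batch,)."""
        x_rec = x_rec.clone().requires_grad_(True)
        T = net(torch.cat([x_src, x_rec], dim=-1))            # (batch, 1) travel times
        grad_T = torch.autograd.grad(T.sum(), x_rec, create_graph=True)[0]
        slowness_pred = grad_T.norm(dim=-1)                   # should equal 1 / v
        return ((slowness_pred - 1.0 / velocity(x_rec)) ** 2).mean()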

Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting

no code implementations • 12 Mar 2020 • Sahin Lale, Kamyar Azizzadenesheli, Babak Hassibi, Anima Anandkumar

We study the problem of adaptive control in partially observable linear quadratic Gaussian control systems, where the model dynamics are unknown a priori.

Neural Operator: Graph Kernel Network for Partial Differential Equations

6 code implementations • ICLR Workshop DeepDiffEq 2019 • Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, Anima Anandkumar

The classical development of neural networks has been primarily for mappings between a finite-dimensional Euclidean space and a set of classes, or between two finite-dimensional Euclidean spaces.

Regret Minimization in Partially Observable Linear Quadratic Control

no code implementations • 31 Jan 2020 • Sahin Lale, Kamyar Azizzadenesheli, Babak Hassibi, Anima Anandkumar

We propose a novel way to decompose the regret and provide an end-to-end sublinear regret upper bound for partially observable linear quadratic control.

Directivity Modes of Earthquake Populations with Unsupervised Learning

no code implementations • 30 Jun 2019 • Zachary E. Ross, Daniel T. Trugman, Kamyar Azizzadenesheli, Anima Anandkumar

A seismic spectral decomposition technique is used to first produce relative measurements of radiated energy for earthquakes in a spatially-compact cluster.

Learning Causal State Representations of Partially Observable Environments

no code implementations • 25 Jun 2019 • Amy Zhang, Zachary C. Lipton, Luis Pineda, Kamyar Azizzadenesheli, Anima Anandkumar, Laurent Itti, Joelle Pineau, Tommaso Furlanello

In this paper, we propose an algorithm to approximate causal states, which are the coarsest partition of the joint history of actions and observations in partially-observable Markov decision processes (POMDP).

Causal Inference

Regularized Learning for Domain Adaptation under Label Shifts

2 code implementations • ICLR 2019 • Kamyar Azizzadenesheli, Anqi Liu, Fanny Yang, Animashree Anandkumar

We derive a generalization bound for the classifier on the target domain which is independent of the (ambient) data dimensions, and instead only depends on the complexity of the function class.

Domain Adaptation
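
For context, the label-shift setting admits importance weights w(y) = p_target(y) / p_source(y) that can be estimated by moment matching with a source confusion matrix; the sketch below shows that basic estimator only, while the paper's contribution, a regularized estimator with the stated generalization bound, is omitted.

    # Confusion-matrix (moment-matching) estimate of label-shift weights.
    import numpy as np

    def estimate_label_shift_weights(source_preds, source_labels, target_preds, n_classes):
        """Estimate w(y) = p_target(y) / p_source(y) from hard predictions."""
        C = np.zeros((n_classes, n_classes))          # C[i, j] = P_src(pred = i, label = j)
        for p, y in zip(source_preds, source_labels):
            C[p, y] += 1.0
        C /= len(source_labels)
        mu = np.bincount(target_preds, minlength=n_classes) / len(target_preds)
        w, *_ = np.linalg.lstsq(C, mu, rcond=None)    # solve C w = mu
        return np.clip(w, 0.0, None)

The estimated weights are then used to reweight the source-domain training loss.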

Stochastic Linear Bandits with Hidden Low Rank Structure

no code implementations • 28 Jan 2019 • Sahin Lale, Kamyar Azizzadenesheli, Anima Anandkumar, Babak Hassibi

We adapt the image classification task to the SLB setting and empirically show that, when a pre-trained DNN provides the high-dimensional feature representations, deploying PSLB results in a significant reduction in regret and faster convergence to an accurate model compared to the state-of-the-art algorithm.

Decision Making Dimensionality Reduction +2

Neural Lander: Stable Drone Landing Control using Learned Dynamics

2 code implementations • 19 Nov 2018 • Guanya Shi, Xichen Shi, Michael O'Connell, Rose Yu, Kamyar Azizzadenesheli, Animashree Anandkumar, Yisong Yue, Soon-Jo Chung

To the best of our knowledge, this is the first DNN-based nonlinear feedback controller with stability guarantees that can utilize arbitrarily large neural nets.

Policy Gradient in Partially Observable Environments: Approximation and Convergence

no code implementations • 18 Oct 2018 • Kamyar Azizzadenesheli, Yisong Yue, Animashree Anandkumar

Deploying these tools, we generalize a variety of existing theoretical guarantees, such as policy gradient and convergence theorems, to partially observable domains; these results can also be carried over to further settings of interest.

Decision Making Policy Gradient Methods

signSGD with Majority Vote is Communication Efficient And Fault Tolerant

3 code implementations • ICLR 2019 • Jeremy Bernstein, Jia-Wei Zhao, Kamyar Azizzadenesheli, Anima Anandkumar

Workers transmit only the sign of their gradient vector to a server, and the overall update is decided by a majority vote.

Benchmarking
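
The protocol in the excerpt is simple enough to state in a few lines: each worker sends only the sign of its stochastic gradient, the server takes a coordinate-wise majority vote, and every worker applies the same sign update. A minimal sketch of one synchronous round (learning rate and shapes are illustrative):

    # One round of signSGD with majority vote.
    import numpy as np

    def majority_vote_step(params, worker_grads, lr=0.01):
        """worker_grads: list of per-worker gradient arrays, same shape as params."""
        worker_signs = [np.sign(g) for g in worker_grads]   # 1 bit per coordinate per worker
        vote = np.sign(np.sum(worker_signs, axis=0))        # coordinate-wise majority
        return params - lr * vote

    params = np.zeros(5)
    grads = [np.random.randn(5) for _ in range(7)]          # e.g. 7 workers
    params = majority_vote_step(params, grads)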

Surprising Negative Results for Generative Adversarial Tree Search

3 code implementations • ICLR 2019 • Kamyar Azizzadenesheli, Brandon Yang, Weitang Liu, Zachary C. Lipton, Animashree Anandkumar

We deploy this model and propose generative adversarial tree search (GATS), a deep RL algorithm that learns the environment model and implements Monte Carlo tree search (MCTS) on the learned model for planning.

Atari Games Reinforcement Learning (RL)

signSGD: Compressed Optimisation for Non-Convex Problems

3 code implementations • ICML 2018 • Jeremy Bernstein, Yu-Xiang Wang, Kamyar Azizzadenesheli, Anima Anandkumar

Using a theorem by Gauss, we prove that majority vote can achieve the same reduction in variance as full-precision distributed SGD.

Efficient Exploration through Bayesian Deep Q-Networks

1 code implementation • ICLR 2018 • Kamyar Azizzadenesheli, Animashree Anandkumar

This allows us to directly incorporate the uncertainty over the Q-function and deploy Thompson sampling on the learned posterior distribution resulting in efficient exploration/exploitation trade-off.

Atari Games Efficient Exploration +3
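
The excerpt describes placing a posterior over the Q-function and Thompson sampling from it. A minimal sketch of that idea, assuming a Bayesian linear-regression head over the Q-network's learned features (shapes, priors, and function names are illustrative, not the paper's code):

    # Thompson sampling over a Bayesian linear-regression head for Q-values.
    import numpy as np

    def posterior_sample(Phi, targets, noise_var=1.0, prior_var=1.0):
        """Phi: (n, d) features from the Q-network's penultimate layer for one action,
        targets: (n,) regression targets (e.g. TD targets) for that action."""
        d = Phi.shape[1]
        precision = Phi.T @ Phi / noise_var + np.eye(d) / prior_var
        cov = np.linalg.inv(precision)
        mean = cov @ Phi.T @ targets / noise_var
        return np.random.multivariate_normal(mean, cov)    # one posterior draw of the weights

    def act(feature, sampled_weights):
        """feature: (d,) state features; sampled_weights: one (d,) draw per action."""
        return int(np.argmax([feature @ w for w in sampled_weights]))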

Experimental results: Reinforcement Learning of POMDPs using Spectral Methods

no code implementations • 7 May 2017 • Kamyar Azizzadenesheli, Alessandro Lazaric, Animashree Anandkumar

We propose a new reinforcement learning algorithm for partially observable Markov decision processes (POMDP) based on spectral decomposition methods.

reinforcement-learning Reinforcement Learning (RL)

Open Problem: Approximate Planning of POMDPs in the class of Memoryless Policies

no code implementations • 17 Aug 2016 • Kamyar Azizzadenesheli, Alessandro Lazaric, Animashree Anandkumar

Generally in RL, one can assume a generative model, e.g., a graphical model, for the environment; the task for the RL agent is then to learn the model parameters and find the optimal strategy based on these learned parameters.

Decision Making Reinforcement Learning (RL)

Reinforcement Learning of POMDPs using Spectral Methods

no code implementations • 25 Feb 2016 • Kamyar Azizzadenesheli, Alessandro Lazaric, Animashree Anandkumar

We propose a new reinforcement learning algorithm for partially observable Markov decision processes (POMDP) based on spectral decomposition methods.

reinforcement-learning Reinforcement Learning (RL)
