1 code implementation • ICML 2020 • Tianju Xue, Alex Beatson, Sigrid Adriaenssens, Ryan Adams
Optimizing the parameters of partial differential equations (PDEs), i.e., PDE-constrained optimization (PDE-CO), allows us to model natural systems from observations or perform rational design of structures with complicated mechanical, thermal, or electromagnetic properties.
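As a hedged illustration of the PDE-CO setup (a toy sketch, not this paper's method or solver; the grid size, source term, and step size are all illustrative assumptions), the snippet below tunes a scalar source parameter of a 1D Poisson problem by differentiating through a linear solve with JAX:

```python
import jax
import jax.numpy as jnp

n = 64                       # interior grid points (illustrative)
h = 1.0 / (n + 1)

# Finite-difference Laplacian with Dirichlet boundaries u(0) = u(1) = 0.
A = (-2.0 * jnp.eye(n) + jnp.eye(n, k=1) + jnp.eye(n, k=-1)) / h**2

def solve_pde(theta):
    # PDE constraint: -u'' = theta * sin(pi x). jnp.linalg.solve is
    # differentiable, so gradients flow through the solver.
    x = jnp.linspace(h, 1.0 - h, n)
    return jnp.linalg.solve(-A, theta * jnp.sin(jnp.pi * x))

u_obs = solve_pde(2.5)       # pretend these are observations

def loss(theta):
    return jnp.mean((solve_pde(theta) - u_obs) ** 2)

theta = 1.0
for _ in range(200):         # plain gradient descent on the PDE parameter
    theta -= 50.0 * jax.grad(loss)(theta)
print(theta)                 # recovers roughly 2.5
```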
no code implementations • 3 Nov 2022 • Tian Qin, Alex Beatson, Deniz Oktay, Nick McGreivy, Ryan P. Adams
Partial differential equations (PDEs) are often computationally challenging to solve, and in many settings many related PDEs must be solved either at every timestep or for a variety of candidate boundary conditions, parameters, or geometric domains.
1 code implementation • ICLR 2021 • Deniz Oktay, Nick McGreivy, Joshua Aduol, Alex Beatson, Ryan P. Adams
The successes of deep learning, variational inference, and many other fields have been aided by specialized implementations of reverse-mode automatic differentiation (AD) to compute gradients of mega-dimensional objectives.
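For context, here is a minimal sketch of standard reverse-mode AD in JAX (not the randomized variant this work proposes): one backward pass yields the full gradient of a scalar objective over roughly a million parameters, at the price of storing intermediate activations, which is the memory cost that randomization trades for variance. The toy network and sizes are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def objective(params, x):
    # A toy high-dimensional objective: a two-layer net's squared output.
    h = jnp.tanh(x @ params["W1"])
    return jnp.mean((h @ params["W2"]) ** 2)

k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
params = {"W1": 0.03 * jax.random.normal(k1, (1000, 1000)),
          "W2": jax.random.normal(k2, (1000, 1))}
x = jax.random.normal(k3, (64, 1000))

# One reverse pass computes the gradient for all ~10^6 parameters, but
# the intermediate activations must be kept for the backward sweep.
grads = jax.grad(objective)(params, x)
print(jax.tree_util.tree_map(jnp.shape, grads))
```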
no code implementations • NeurIPS 2020 • Alex Beatson, Jordan T. Ash, Geoffrey Roeder, Tianju Xue, Ryan P. Adams
We use a neural network to model the stored potential energy in a component given boundary conditions.
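A hedged sketch of that idea follows (the architecture, sizes, and boundary-condition encoding are illustrative assumptions, not the paper's model): a small MLP maps boundary-condition parameters to a scalar stored energy, and because the output is a potential, differentiating it with respect to the boundary conditions yields generalized forces for free.

```python
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    params = []
    for k, (m, n) in zip(jax.random.split(key, len(sizes) - 1),
                         zip(sizes[:-1], sizes[1:])):
        params.append((jax.random.normal(k, (m, n)) / jnp.sqrt(m),
                       jnp.zeros(n)))
    return params

def energy(params, bc):
    # bc: vector parameterizing the boundary displacement; the output
    # is a scalar potential energy.
    h = bc
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return jnp.squeeze(h @ W + b)

params = init_mlp(jax.random.PRNGKey(0), [8, 64, 64, 1])
bc = 0.1 * jnp.ones(8)
print(energy(params, bc))                       # surrogate energy
print(jax.grad(energy, argnums=1)(params, bc))  # forces dE/d(bc)
```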
no code implementations • ICLR 2020 • Yucen Luo, Alex Beatson, Mohammad Norouzi, Jun Zhu, David Duvenaud, Ryan P. Adams, Ricky T. Q. Chen
Standard variational lower bounds used to train latent variable models produce biased estimates of most quantities of interest.
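To see the bias concretely (a toy conjugate example, not the estimator this paper develops): for a Gaussian model where log p(x) is known in closed form, a Monte Carlo ELBO under a mismatched variational posterior sits strictly below the true value.

```python
import jax
import jax.numpy as jnp
import jax.scipy.stats as st

x = 1.3                     # one observation; prior z ~ N(0,1), x|z ~ N(z,1)
logp_x = st.norm.logpdf(x, 0.0, jnp.sqrt(2.0))   # exact log p(x)

def elbo(key, q_mu, q_sigma, n=100_000):
    z = q_mu + q_sigma * jax.random.normal(key, (n,))
    logp = st.norm.logpdf(z, 0.0, 1.0) + st.norm.logpdf(x, z, 1.0)
    logq = st.norm.logpdf(z, q_mu, q_sigma)
    return jnp.mean(logp - logq)    # E_q[log p(x, z) - log q(z)]

print(logp_x)                                 # exact value
print(elbo(jax.random.PRNGKey(0), 0.4, 1.0))  # mismatched q: ELBO < log p(x)
```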
1 code implementation • 16 May 2019 • Alex Beatson, Ryan P. Adams
We consider optimization problems in which the objective requires an inner loop with many steps or is the limit of a sequence of increasingly costly approximations.
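A hedged sketch of the single-sample randomized-telescope (Russian roulette) idea in this family (the toy sequence and geometric truncation distribution are illustrative choices): truncate the telescoping sum at a random level N and reweight each increment by 1/P(N >= n), which keeps the estimator unbiased for the limit while only paying for a random, usually small, number of terms.

```python
import jax
import jax.numpy as jnp

def f(n):
    # Increasingly accurate approximations: partial geometric sums
    # converging to 0.6 / (1 - 0.6) = 1.5.
    return jnp.sum(0.6 ** jnp.arange(1, n + 1))

def rt_estimate(key, p=0.5):
    # N ~ Geometric(p) via inverse CDF, so P(N >= n) = (1 - p)**(n - 1).
    u = 1.0 - jax.random.uniform(key)        # u in (0, 1]
    N = int(jnp.floor(jnp.log(u) / jnp.log(1.0 - p))) + 1
    est = f(1)
    for n in range(2, N + 1):                # reweighted increments
        est += (f(n) - f(n - 1)) / (1.0 - p) ** (n - 1)
    return est

keys = jax.random.split(jax.random.PRNGKey(0), 2000)
ests = jnp.array([rt_estimate(k) for k in keys])
print(ests.mean())           # close to the limit 1.5: the estimator is unbiased
```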
1 code implementation • ICLR 2019 • Sachin Ravi, Alex Beatson
Meta-learning, or learning-to-learn, has proven to be a successful strategy in attacking problems in supervised learning and reinforcement learning that involve small amounts of data.
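As a generic illustration of the learning-to-learn loop (a MAML-style sketch, not the amortized Bayesian method this paper proposes; the tasks and step sizes are made up): adapt to each small task with a few inner gradient steps, then differentiate through that adaptation to improve the shared initialization.

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    return jnp.mean((x @ w - y) ** 2)       # toy linear regression task

def adapt(w, x, y, lr=0.1, steps=3):
    # Inner loop: task-specific adaptation from the shared init w.
    for _ in range(steps):
        w = w - lr * jax.grad(loss)(w, x, y)
    return w

def meta_loss(w, task):
    x_tr, y_tr, x_te, y_te = task
    return loss(adapt(w, x_tr, y_tr), x_te, y_te)

key = jax.random.PRNGKey(0)
w = jnp.zeros(5)
for step in range(100):
    k1, k2 = jax.random.split(jax.random.fold_in(key, step))
    true_w = jax.random.normal(k1, (5,))    # sample a fresh task
    x = jax.random.normal(k2, (20, 5))
    task = (x[:10], x[:10] @ true_w, x[10:], x[10:] @ true_w)
    # Outer loop: differentiate through the inner adaptation.
    w = w - 0.05 * jax.grad(meta_loss)(w, task)
```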
no code implementations • 23 May 2017 • Ari Seff, Alex Beatson, Daniel Suo, Han Liu
Developments in deep generative models have allowed for tractable learning of high-dimensional data distributions.
no code implementations • NeurIPS 2016 • Alex Beatson, Zhaoran Wang, Han Liu
We study the potential of a “blind attacker” to provably limit a learner’s performance by data injection attack without observing the learner’s training set or any parameter of the distribution from which it is drawn.