no code implementations • 29 Mar 2023 • Michael Poli, Stefano Massaroli, Stefano Ermon, Bryan Wilder, Eric Horvitz
We present a methodology for formulating simplifying abstractions in machine learning systems by identifying and harnessing the utility structure of decisions.
3 code implementations • 21 Feb 2023 • Michael Poli, Stefano Massaroli, Eric Nguyen, Daniel Y. Fu, Tri Dao, Stephen Baccus, Yoshua Bengio, Stefano Ermon, Christopher Ré
Recent advances in deep learning have relied heavily on the use of large Transformers due to their ability to learn at scale.
Ranked #26 on Language Modelling on WikiText-103
no code implementations • 24 Dec 2022 • Linqi Zhou, Michael Poli, Winnie Xu, Stefano Massaroli, Stefano Ermon
Methods based on ordinary differential equations (ODEs) are widely used to build generative models of time-series.
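To make the ODE-based recipe concrete, here is a minimal sketch (hypothetical architecture, not the paper's model): sample a latent initial state, integrate a learned vector field with Euler steps, and decode each latent state into an observation.

```python
import torch
import torch.nn as nn

class LatentODE(nn.Module):
    """Toy ODE-based time-series generator: z' = f(z), x_t = decode(z_t)."""
    def __init__(self, latent_dim=8, obs_dim=2):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(),
                               nn.Linear(64, latent_dim))
        self.decode = nn.Linear(latent_dim, obs_dim)

    def forward(self, z0, n_steps=50, dt=0.1):
        z, xs = z0, []
        for _ in range(n_steps):
            z = z + dt * self.f(z)        # explicit Euler step of the latent ODE
            xs.append(self.decode(z))     # map the latent state to an observation
        return torch.stack(xs, dim=1)     # (batch, n_steps, obs_dim)

model = LatentODE()
traj = model(torch.randn(4, 8))           # sample 4 series from random z0
print(traj.shape)                         # torch.Size([4, 50, 2])
```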
1 code implementation • 26 Nov 2022 • Michael Poli, Stefano Massaroli, Federico Berto, Jinkyoo Park, Tri Dao, Christopher Ré, Stefano Ermon
Instead, this work introduces a blueprint for frequency domain learning through a single transform: transform once (T1).
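A hedged sketch of the transform-once idea: apply one forward FFT at the input, learn pointwise filters directly in the frequency domain, and apply one inverse FFT at the output, avoiding a transform pair per layer. Shapes and filter count are illustrative, and the nonlinear mixing the paper interleaves between filters is omitted for brevity.

```python
import torch
import torch.nn as nn

class TransformOnce(nn.Module):
    """One forward rfft, a stack of learned frequency-domain filters, one irfft."""
    def __init__(self, seq_len, n_layers=3):
        super().__init__()
        n_freq = seq_len // 2 + 1                 # rfft bins for a real signal
        self.filters = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(n_freq, dtype=torch.cfloat))
             for _ in range(n_layers)])

    def forward(self, x):                         # x: (batch, seq_len), real
        X = torch.fft.rfft(x, dim=-1)             # transform once at the input
        for h in self.filters:
            X = h * X                             # pointwise filtering per layer
        return torch.fft.irfft(X, n=x.shape[-1], dim=-1)  # transform back once

x = torch.randn(4, 128)
print(TransformOnce(128)(x).shape)                # torch.Size([4, 128])
```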
no code implementations • 15 Apr 2022 • Michael Poli, Winnie Xu, Stefano Massaroli, Chenlin Meng, Kuno Kim, Stefano Ermon
We investigate how to leverage the representations produced by Neural Collages in various tasks, including data compression and generation.
1 code implementation • NeurIPS Workshop DLDE 2021 • Federico Berto, Stefano Massaroli, Michael Poli, Jinkyoo Park
Synthesizing optimal controllers for dynamical systems often involves solving optimization problems with hard real-time constraints.
no code implementations • 22 Jun 2021 • Michael Poli, Stefano Massaroli, Clayton M. Rabideau, Junyoung Park, Atsushi Yamashita, Hajime Asama, Jinkyoo Park
We introduce the framework of continuous-depth graph neural networks (GNNs).
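A minimal sketch of a continuous-depth GNN, assuming a plain row-normalized adjacency and a single graph-convolution layer as the vector field; node features evolve along a continuous depth variable, integrated here with Euler steps.

```python
import torch
import torch.nn as nn

class GraphODE(nn.Module):
    """Graph neural ODE sketch: node features follow h' = GNN(h, A)."""
    def __init__(self, n_feats):
        super().__init__()
        self.lin = nn.Linear(n_feats, n_feats)

    def vector_field(self, h, A_hat):
        return torch.tanh(A_hat @ self.lin(h))     # one graph convolution as f

    def forward(self, h, A_hat, n_steps=20, dt=0.05):
        for _ in range(n_steps):
            h = h + dt * self.vector_field(h, A_hat)  # Euler step in "depth"
        return h

# A_hat: row-normalized adjacency with self-loops; h: node feature matrix
A = torch.eye(5) + torch.rand(5, 5).round()
A_hat = A / A.sum(-1, keepdim=True)
h = torch.randn(5, 16)
print(GraphODE(16)(h, A_hat).shape)                # torch.Size([5, 16])
```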
no code implementations • NeurIPS 2021 • Michael Poli, Stefano Massaroli, Luca Scimeca, Seong Joon Oh, Sanghyuk Chun, Atsushi Yamashita, Hajime Asama, Jinkyoo Park, Animesh Garg
Effective control and prediction of dynamical systems often require appropriate handling of continuous-time and discrete, event-triggered processes.
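A hypothetical sketch of the hybrid pattern this points at: a learned flow governs the state between events, and a learned jump map is applied whenever an event time is crossed. Event times are given here; detecting and modeling them is the harder problem the paper addresses.

```python
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))  # continuous flow
g = nn.Linear(2, 2)                                               # discrete jump map

def simulate(x, t_end=1.0, dt=0.01, event_times=(0.3, 0.7)):
    t, events = 0.0, sorted(event_times)
    while t < t_end:
        x = x + dt * f(x)                 # continuous-time flow (Euler step)
        t += dt
        if events and t >= events[0]:     # event triggered: apply the jump map
            x = g(x)
            events.pop(0)
    return x

print(simulate(torch.randn(1, 2)).shape)  # torch.Size([1, 2])
```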
no code implementations • NeurIPS 2021 • Stefano Massaroli, Michael Poli, Sho Sonoda, Taiji Suzuki, Jinkyoo Park, Atsushi Yamashita, Hajime Asama
We detail a novel class of implicit neural models.
no code implementations • 7 Jun 2021 • Stefano Massaroli, Michael Poli, Stefano Peluchetti, Jinkyoo Park, Atsushi Yamashita, Hajime Asama
We systematically develop a learning-based treatment of stochastic optimal control (SOC), relying on direct optimization of parametric control policies.
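A hedged sketch of the direct-optimization recipe: roll out the controlled SDE with Euler-Maruyama, accumulate a running cost, and backpropagate through the rollout into the policy parameters. Dynamics, cost, and sizes are toy choices, not the paper's setup.

```python
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
sigma, dt, n_steps = 0.1, 0.02, 50

def drift(x):                              # toy uncontrolled drift
    return -0.5 * x

for it in range(200):
    x = torch.randn(64, 2)                 # batch of initial states
    cost = 0.0
    for _ in range(n_steps):
        u = policy(x)
        noise = sigma * torch.randn_like(x) * dt ** 0.5
        x = x + (drift(x) + u) * dt + noise            # Euler-Maruyama step
        cost = cost + ((x ** 2).sum(-1) + 0.1 * (u ** 2).sum(-1)).mean() * dt
    opt.zero_grad()
    cost.backward()                        # differentiate through the rollout
    opt.step()
```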
no code implementations • 14 Jan 2021 • Stefano Massaroli, Michael Poli, Federico Califano, Jinkyoo Park, Atsushi Yamashita, Hajime Asama
We introduce optimal energy shaping as an enhancement of classical passivity-based control methods.
1 code implementation • 16 Oct 2020 • Daehoon Gwak, Gyuhyeon Sim, Michael Poli, Stefano Massaroli, Jaegul Choo, Edward Choi
By interpreting the forward dynamics of a neural network's latent representation as an ordinary differential equation, the Neural Ordinary Differential Equation (Neural ODE) has emerged as an effective framework for modeling system dynamics in the continuous-time domain.
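A minimal Neural ODE forward pass, assuming the torchdiffeq package for the solver: a network parametrizes the right-hand side f_theta, and the input is treated as the initial condition of a continuous-time latent state.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint     # assumes the torchdiffeq package is installed

class VectorField(nn.Module):
    """Learned right-hand side f_theta(t, h) of dh/dt = f_theta(t, h)."""
    def __init__(self, dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, h):
        return self.net(h)

f = VectorField()
h0 = torch.randn(16, 2)               # initial latent state (the "input")
t = torch.linspace(0.0, 1.0, 10)      # continuous "depth"/time axis
traj = odeint(f, h0, t)               # solve the ODE; traj: (10, 16, 2)
h1 = traj[-1]                         # the final state plays the role of the output
```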
no code implementations • 20 Sep 2020 • Michael Poli, Stefano Massaroli, Atsushi Yamashita, Hajime Asama, Jinkyoo Park
Continuous-depth learning has recently emerged as a novel perspective on deep learning, improving performance in tasks related to dynamical systems and density estimation.
1 code implementation • NeurIPS 2020 • Michael Poli, Stefano Massaroli, Atsushi Yamashita, Hajime Asama, Jinkyoo Park
The infinite-depth paradigm pioneered by Neural ODEs has launched a renaissance in the search for novel dynamical system-inspired deep learning primitives; however, their utilization in problems of non-trivial size has often proved impossible due to poor computational scalability.
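A hedged sketch of the hypersolver idea behind this paper: take a cheap base step (explicit Euler) and add a small learned correction trained to absorb the local truncation error, so large step sizes remain accurate. Network shapes here are illustrative.

```python
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))  # ODE field
g = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))  # hypersolver net

def hypereuler_step(x, dt):
    # Euler step plus a learned O(dt^2) residual correction
    return x + dt * f(x) + dt ** 2 * g(x)

# g is typically trained so that hypereuler_step matches a high-accuracy
# reference solution (e.g. from an adaptive solver) at the same step size.
x = torch.randn(8, 2)
print(hypereuler_step(x, dt=0.1).shape)   # torch.Size([8, 2])
```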
no code implementations • 18 Mar 2020 • Stefano Massaroli, Michael Poli, Michelangelo Bin, Jinkyoo Park, Atsushi Yamashita, Hajime Asama
We introduce a provably stable variant of neural ordinary differential equations (neural ODEs) whose trajectories evolve on an energy functional parametrised by a neural network.
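One way to realize such a stable variant, sketched with illustrative architecture choices: parametrize an energy E_theta with a network and define the dynamics as its negative gradient, so the energy decreases along every trajectory and serves as a Lyapunov function by construction.

```python
import torch
import torch.nn as nn

E = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))  # learned energy

def stable_field(x):
    x = x.requires_grad_(True)
    e = E(x).sum()
    grad_E, = torch.autograd.grad(e, x, create_graph=True)  # keep graph to train E
    return -grad_E                       # dx/dt = -grad E  =>  dE/dt <= 0

x = torch.randn(8, 2)
for _ in range(100):
    x = (x + 0.05 * stable_field(x)).detach()   # Euler rollout; energy decreases
print(E(x).mean().item())
```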
no code implementations • ICLR Workshop DeepDiffEq 2019 • Michael Poli, Stefano Massaroli, Atsushi Yamashita, Hajime Asama, Jinkyoo Park
In this paper we present a general framework for continuous-time gradient descent, often referred to as gradient flow.
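Concretely, gradient flow is the ODE x' = -∇L(x), and one explicit Euler step with step size η recovers the familiar discrete update x_{k+1} = x_k - η ∇L(x_k). A toy quadratic example:

```python
import torch

def L(x):
    return 0.5 * (x ** 2).sum()           # toy quadratic loss

x = torch.tensor([3.0, -2.0], requires_grad=True)
eta = 0.1
for _ in range(50):
    loss = L(x)
    grad, = torch.autograd.grad(loss, x)
    with torch.no_grad():
        x -= eta * grad                   # one Euler step of the gradient flow
print(x)                                  # approaches the minimizer at the origin
```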
no code implementations • ICLR Workshop DeepDiffEq 2019 • Stefano Massaroli, Michael Poli, Sanzhar Bakhtiyarov, Atsushi Yamashita, Hajime Asama, Jinkyoo Park
Action spaces equipped with parameter sets are a common occurrence in reinforcement learning applications.
Hierarchical Reinforcement Learning • reinforcement-learning • +1
1 code implementation • NeurIPS 2020 • Stefano Massaroli, Michael Poli, Jinkyoo Park, Atsushi Yamashita, Hajime Asama
Continuous deep learning architectures have recently re-emerged as Neural Ordinary Differential Equations (Neural ODEs).
1 code implementation • 18 Nov 2019 • Michael Poli, Stefano Massaroli, Junyoung Park, Atsushi Yamashita, Hajime Asama, Jinkyoo Park
We introduce the framework of continuous-depth graph neural networks (GNNs).
2 code implementations • 6 Sep 2019 • Stefano Massaroli, Michael Poli, Federico Califano, Angela Faragasso, Jinkyoo Park, Atsushi Yamashita, Hajime Asama
Neural networks are discrete entities: subdivided into discrete layers and parametrized by weights, which are iteratively optimized via difference equations.
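The discrete/continuous correspondence behind this framing: a residual update x_{k+1} = x_k + f(x_k) is exactly one explicit Euler step (with unit step size) of the ODE dx/dt = f(x), so shrinking the step turns the layer stack into a continuous flow. A toy comparison:

```python
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))

def resnet_forward(x, n_layers=4):
    for _ in range(n_layers):
        x = x + f(x)                      # difference equation: one residual layer
    return x

def ode_forward(x, t_end=4.0, dt=0.1):
    for _ in range(int(t_end / dt)):
        x = x + dt * f(x)                 # differential equation, Euler-discretized
    return x

x = torch.randn(8, 2)
print(resnet_forward(x).shape, ode_forward(x).shape)
```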