Search Results for author: Stefano Massaroli

Found 24 papers, 12 papers with code

State-Free Inference of State-Space Models: The Transfer Function Approach

1 code implementation 10 May 2024 Rom N. Parnichkun, Stefano Massaroli, Alessandro Moro, Jimmy T. H. Smith, Ramin Hasani, Mathias Lechner, Qi An, Christopher Ré, Hajime Asama, Stefano Ermon, Taiji Suzuki, Atsushi Yamashita, Michael Poli

We approach the design of state-space models for deep learning through their dual representation, the transfer function, and uncover a highly efficient, sequence-parallel inference algorithm that is state-free: unlike previously proposed algorithms, state-free inference incurs no significant memory or computational cost as the state size increases.
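The duality above can be illustrated with a toy diagonal SSM: the convolution kernel obtained by unrolling the state recurrence can equally be recovered by evaluating the transfer function at roots of unity and inverting with an FFT, without ever materializing the state. This is a minimal sketch of the general idea, not the paper's algorithm; all sizes and variable names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, L = 8, 64                        # state size, sequence length

# Diagonal SSM: x[t] = A x[t-1] + B u[t], y[t] = C x[t]
a = rng.uniform(-0.5, 0.5, n)       # stable diagonal of A
b = rng.standard_normal(n)
c = rng.standard_normal(n)

# (1) Materialized-state view: kernel k[t] = C A^t B
k_state = np.array([(c * a**t * b).sum() for t in range(L)])

# (2) Transfer-function view: evaluate H(z) = sum_j c_j b_j / (1 - a_j z^-1)
# at the L-th roots of unity, then invert with an FFT; no state is formed.
z = np.exp(2j * np.pi * np.arange(L) / L)
H = (c * b / (1 - a / z[:, None])).sum(axis=1)
k_tf = np.fft.ifft(H).real

assert np.allclose(k_state, k_tf, atol=1e-8)
```

The aliasing error of the FFT inversion is of order max|a|^L, which is negligible here since |a| ≤ 0.5.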

Language Modelling

Mechanistic Design and Scaling of Hybrid Architectures

no code implementations 26 Mar 2024 Michael Poli, Armin W Thomas, Eric Nguyen, Pragaash Ponnusamy, Björn Deiseroth, Kristian Kersting, Taiji Suzuki, Brian Hie, Stefano Ermon, Christopher Ré, Ce Zhang, Stefano Massaroli

The development of deep learning architectures is a resource-demanding process, due to a vast design space, long prototyping times, and high compute costs associated with at-scale model training and evaluation.

HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution

3 code implementations NeurIPS 2023 Eric Nguyen, Michael Poli, Marjan Faizi, Armin Thomas, Callum Birch-Sykes, Michael Wornow, Aman Patel, Clayton Rabideau, Stefano Massaroli, Yoshua Bengio, Stefano Ermon, Stephen A. Baccus, Chris Ré

Leveraging Hyena's new long-range capabilities, we present HyenaDNA, a genomic foundation model pretrained on the human reference genome with context lengths of up to 1 million tokens at the single-nucleotide level, an up to 500x increase over previous dense attention-based models.

In-Context Learning +2

Ideal Abstractions for Decision-Focused Learning

no code implementations 29 Mar 2023 Michael Poli, Stefano Massaroli, Stefano Ermon, Bryan Wilder, Eric Horvitz

We present a methodology for formulating simplifying abstractions in machine learning systems by identifying and harnessing the utility structure of decisions.

Decision Making • Management

Hyena Hierarchy: Towards Larger Convolutional Language Models

6 code implementations 21 Feb 2023 Michael Poli, Stefano Massaroli, Eric Nguyen, Daniel Y. Fu, Tri Dao, Stephen Baccus, Yoshua Bengio, Stefano Ermon, Christopher Ré

Recent advances in deep learning have relied heavily on the use of large Transformers due to their ability to learn at scale.

Deep Latent State Space Models for Time-Series Generation

1 code implementation 24 Dec 2022 Linqi Zhou, Michael Poli, Winnie Xu, Stefano Massaroli, Stefano Ermon

Methods based on ordinary differential equations (ODEs) are widely used to build generative models of time-series.

Time Series • Time Series Analysis +1

Self-Similarity Priors: Neural Collages as Differentiable Fractal Representations

no code implementations 15 Apr 2022 Michael Poli, Winnie Xu, Stefano Massaroli, Chenlin Meng, Kuno Kim, Stefano Ermon

We investigate how to leverage the representations produced by Neural Collages in various tasks, including data compression and generation.

Data Compression

Neural Solvers for Fast and Accurate Numerical Optimal Control

1 code implementation NeurIPS Workshop DLDE 2021 Federico Berto, Stefano Massaroli, Michael Poli, Jinkyoo Park

Synthesizing optimal controllers for dynamical systems often involves solving optimization problems with hard real-time constraints.

Neural Hybrid Automata: Learning Dynamics with Multiple Modes and Stochastic Transitions

no code implementations NeurIPS 2021 Michael Poli, Stefano Massaroli, Luca Scimeca, Seong Joon Oh, Sanghyuk Chun, Atsushi Yamashita, Hajime Asama, Jinkyoo Park, Animesh Garg

Effective control and prediction of dynamical systems often require appropriate handling of continuous-time and discrete, event-triggered processes.

Learning Stochastic Optimal Policies via Gradient Descent

no code implementations 7 Jun 2021 Stefano Massaroli, Michael Poli, Stefano Peluchetti, Jinkyoo Park, Atsushi Yamashita, Hajime Asama

We systematically develop a learning-based treatment of stochastic optimal control (SOC), relying on direct optimization of parametric control policies.

Portfolio Optimization

Optimal Energy Shaping via Neural Approximators

no code implementations 14 Jan 2021 Stefano Massaroli, Michael Poli, Federico Califano, Jinkyoo Park, Atsushi Yamashita, Hajime Asama

We introduce optimal energy shaping as an enhancement of classical passivity-based control methods.

Neural Ordinary Differential Equations for Intervention Modeling

1 code implementation 16 Oct 2020 Daehoon Gwak, Gyuhyeon Sim, Michael Poli, Stefano Massaroli, Jaegul Choo, Edward Choi

By interpreting the forward dynamics of a neural network's latent representation as an ordinary differential equation, the Neural Ordinary Differential Equation (Neural ODE) emerged as an effective framework for modeling system dynamics in the continuous-time domain.
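The continuous-time interpretation amounts to replacing a stack of residual layers with numerical integration of a learned vector field. A minimal conceptual sketch (fixed-step Euler integration of a small MLP vector field; not the paper's intervention model, and all weights here are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny MLP vector field f(h) defining dh/dt = f(h); in a Neural ODE,
# the "depth" of the network is the integration time of this flow.
W1, b1 = rng.standard_normal((16, 2)) * 0.1, np.zeros(16)
W2, b2 = rng.standard_normal((2, 16)) * 0.1, np.zeros(2)

def f(h):
    return W2 @ np.tanh(W1 @ h + b1) + b2

def odeint_euler(h0, t0=0.0, t1=1.0, steps=100):
    """Fixed-step Euler integration of dh/dt = f(h) from t0 to t1."""
    h, dt = h0, (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * f(h)
    return h

h1 = odeint_euler(np.array([1.0, -1.0]))   # h(1): the "output" of the network
```

In practice an adaptive solver (and the adjoint method for gradients) replaces the fixed-step loop shown here.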

Time Series • Time Series Analysis

TorchDyn: A Neural Differential Equations Library

no code implementations 20 Sep 2020 Michael Poli, Stefano Massaroli, Atsushi Yamashita, Hajime Asama, Jinkyoo Park

Continuous-depth learning has recently emerged as a novel perspective on deep learning, improving performance in tasks related to dynamical systems and density estimation.

Density Estimation

Hypersolvers: Toward Fast Continuous-Depth Models

1 code implementation NeurIPS 2020 Michael Poli, Stefano Massaroli, Atsushi Yamashita, Hajime Asama, Jinkyoo Park

The infinite-depth paradigm pioneered by Neural ODEs has launched a renaissance in the search for novel dynamical system-inspired deep learning primitives; however, their utilization in problems of non-trivial size has often proved impossible due to poor computational scalability.
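The hypersolver idea is to augment a cheap base solver with a learned correction of the local truncation error, scaled by the step size. A toy sketch under simplifying assumptions: for the linear test dynamics dh/dt = -h, the "learned" network is replaced by the exact second-order Taylor residual, which is what such a network would ideally approximate.

```python
import numpy as np

def f(h):                 # test dynamics dh/dt = -h; exact solution e^(-t) h0
    return -h

def g(h):                 # stand-in for the learned correction network:
    return 0.5 * h        # the exact Taylor residual (1/2) d2h/dt2 = h/2 here

def solve(h0, T=1.0, steps=10, correct=False):
    h, dt = h0, T / steps
    for _ in range(steps):
        step = dt * f(h)
        if correct:       # hypersolver step: base Euler step + dt^2 correction
            step += dt**2 * g(h)
        h = h + step
    return h

exact = np.exp(-1.0)
err_euler = abs(solve(1.0) - exact)
err_hyper = abs(solve(1.0, correct=True) - exact)
assert err_hyper < err_euler / 10   # correction sharply reduces the error
```

With the same number of (cheap) steps, the corrected solver is an order of accuracy better than plain Euler, which is the source of the speedups the paper pursues with an actual trained network.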

Stable Neural Flows

no code implementations 18 Mar 2020 Stefano Massaroli, Michael Poli, Michelangelo Bin, Jinkyoo Park, Atsushi Yamashita, Hajime Asama

We introduce a provably stable variant of neural ordinary differential equations (neural ODEs) whose trajectories evolve on an energy functional parametrised by a neural network.

Port-Hamiltonian Gradient Flows

no code implementations ICLR Workshop DeepDiffEq 2019 Michael Poli, Stefano Massaroli, Atsushi Yamashita, Hajime Asama, Jinkyoo Park

In this paper, we present a general framework for continuous-time gradient descent, often referred to as gradient flow.
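The basic object is the flow dx/dt = -∇f(x), whose forward-Euler discretization with step dt recovers plain gradient descent. A minimal sketch on a quadratic objective (illustrative only; the paper's port-Hamiltonian formulation generalizes this):

```python
import numpy as np

# Gradient flow dx/dt = -grad f(x) for f(x) = 0.5 * x^T Q x.
Q = np.array([[3.0, 0.0],
              [0.0, 1.0]])

def grad_f(x):
    return Q @ x

x = np.array([1.0, 1.0])
dt = 0.1
for _ in range(200):
    x = x - dt * grad_f(x)   # Euler step of the flow = a gradient-descent step

assert np.linalg.norm(x) < 1e-6   # converges to the minimizer x* = 0
```

Different discretizations of the same flow yield different optimizers, which is what makes the continuous-time viewpoint useful.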

Dissecting Neural ODEs

1 code implementation NeurIPS 2020 Stefano Massaroli, Michael Poli, Jinkyoo Park, Atsushi Yamashita, Hajime Asama

Continuous deep learning architectures have recently re-emerged as Neural Ordinary Differential Equations (Neural ODEs).

Graph Neural Ordinary Differential Equations

1 code implementation 18 Nov 2019 Michael Poli, Stefano Massaroli, Junyoung Park, Atsushi Yamashita, Hajime Asama, Jinkyoo Park

We introduce the framework of continuous-depth graph neural networks (GNNs).
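In a continuous-depth GNN the vector field of the ODE is itself a graph layer: node features evolve as dH/dt = f(H, A), with f a graph convolution. A toy sketch (random graph, random weights, Euler integration; illustrative, not the paper's exact parametrization):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 5, 3                                  # nodes, feature dimension

# Symmetrically normalized adjacency of a small random undirected graph
A = (rng.random((N, N)) < 0.4).astype(float)
A = np.maximum(A, A.T) + np.eye(N)           # symmetrize, add self-loops
Dinv = np.diag(1.0 / np.sqrt(A.sum(1)))
A_hat = Dinv @ A @ Dinv

W = rng.standard_normal((d, d)) * 0.1

def f(H):
    # Vector field dH/dt = tanh(A_hat H W): one graph-convolution layer
    return np.tanh(A_hat @ H @ W)

H = rng.standard_normal((N, d))
dt, steps = 0.05, 20                         # integrate node features to t = 1
for _ in range(steps):
    H = H + dt * f(H)
```

Integration time plays the role of network depth, so message passing is diffused continuously over the graph rather than in discrete layers.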

Port-Hamiltonian Approach to Neural Network Training

2 code implementations 6 Sep 2019 Stefano Massaroli, Michael Poli, Federico Califano, Angela Faragasso, Jinkyoo Park, Atsushi Yamashita, Hajime Asama

Neural networks are discrete entities: subdivided into discrete layers and parametrized by weights, which are iteratively optimized via difference equations.

Time Series Forecasting
