Search Results for author: Aaron Lou

Found 12 papers, 10 papers with code

Geometric Trajectory Diffusion Models

1 code implementation • 16 Oct 2024 • Jiaqi Han, Minkai Xu, Aaron Lou, Haotian Ye, Stefano Ermon

In this work, we propose geometric trajectory diffusion models (GeoTDM), the first diffusion model for modeling the temporal distribution of 3D geometric trajectories.

Protein Design
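
For intuition, here is a minimal sketch of the forward (noising) process that a trajectory diffusion model of this kind is trained to reverse. The linear schedule and tensor shapes are illustrative assumptions, not GeoTDM's exact parameterization.

```python
# Forward (noising) process over a 3D trajectory tensor; a sketch only.
import numpy as np

def forward_noise(traj, t, num_steps=1000):
    """traj: (T, N, 3) array of N particles over T frames; t: diffusion step."""
    betas = np.linspace(1e-4, 0.02, num_steps)   # linear variance schedule (assumption)
    alpha_bar = np.cumprod(1.0 - betas)[t]       # cumulative signal fraction at step t
    eps = np.random.randn(*traj.shape)           # Gaussian noise on coordinates
    return np.sqrt(alpha_bar) * traj + np.sqrt(1.0 - alpha_bar) * eps, eps

noisy, eps = forward_noise(np.random.randn(20, 5, 3), t=500)
```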

Equivariant Graph Neural Operator for Modeling 3D Dynamics

1 code implementation • 19 Jan 2024 • Minkai Xu, Jiaqi Han, Aaron Lou, Jean Kossaifi, Arvind Ramanathan, Kamyar Azizzadenesheli, Jure Leskovec, Stefano Ermon, Anima Anandkumar

Comprehensive experiments in multiple domains, including particle simulations, human motion capture, and molecular dynamics, demonstrate the significantly superior performance of EGNO against existing methods, thanks to the equivariant temporal modeling.

Operator learning
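
The equivariance property the abstract credits is easy to check numerically. The toy map below (displacement toward the center of mass, scaled by a rotation-invariant weight) is an illustrative stand-in for the model, not EGNO itself.

```python
# Rotation-equivariance check: rotating the inputs should rotate the outputs.
import numpy as np
from scipy.stats import special_ortho_group

def toy_equivariant(X):
    c = X.mean(axis=0)
    w = 1.0 / (1.0 + np.linalg.norm(X - c, axis=1, keepdims=True))  # invariant weight
    return w * (X - c)                                              # equivariant direction

X = np.random.randn(8, 3)
R = special_ortho_group.rvs(3)                                      # random 3D rotation
assert np.allclose(toy_equivariant(X @ R), toy_equivariant(X) @ R, atol=1e-8)
```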

Diffusion Model Alignment Using Direct Preference Optimization

1 code implementation • CVPR 2024 • Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam, Stefano Ermon, Caiming Xiong, Shafiq Joty, Nikhil Naik

Large language models (LLMs) are fine-tuned using human comparison data with Reinforcement Learning from Human Feedback (RLHF) methods to make them better aligned with users' preferences.
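
For reference, a hedged sketch of the standard DPO objective the paper adapts to diffusion models: given log-probabilities of a preferred (w) and dispreferred (l) sample under the policy and a frozen reference, maximize the preference margin. Variable names are illustrative; the paper's diffusion variant substitutes denoising-loss surrogates for exact log-likelihoods.

```python
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    # Implicit-reward margin between preferred and dispreferred samples.
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return -F.logsigmoid(beta * margin).mean()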

Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution

2 code implementations • 25 Oct 2023 • Aaron Lou, Chenlin Meng, Stefano Ermon

Experimentally, we test our Score Entropy Discrete Diffusion models (SEDD) on standard language modeling tasks.

Denoising • Language Modelling
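
As a rough illustration of the discrete setting, here is an absorbing-state forward corruption over tokens; the linear schedule and mask id are assumptions. SEDD's contribution is learning ratios of data probabilities to reverse corruptions like this, not the corruption itself.

```python
# Each position is independently replaced by a mask token with
# probability 1 - alpha_bar(t); a sketch of discrete forward diffusion.
import torch

def mask_tokens(tokens, t, num_steps=1000, mask_id=0):
    alpha_bar = 1.0 - t / num_steps                     # linear schedule (assumption)
    keep = torch.rand_like(tokens, dtype=torch.float) < alpha_bar
    return torch.where(keep, tokens, torch.full_like(tokens, mask_id))

corrupted = mask_tokens(torch.randint(1, 50257, (4, 128)), t=600)
```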

Denoising Diffusion Bridge Models

2 code implementations • 29 Sep 2023 • Linqi Zhou, Aaron Lou, Samar Khanna, Stefano Ermon

However, for many applications such as image editing, the model input comes from a distribution that is not random noise.

Denoising • Image Generation
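
A minimal sketch of the underlying idea: sample from a stochastic process pinned at two paired endpoints rather than starting from pure noise. This is a plain Brownian bridge, not the paper's exact bridge parameterization.

```python
# Marginal sample of a Brownian bridge pinned at x0 (t=0) and x1 (t=1).
import numpy as np

def brownian_bridge(x0, x1, t, sigma=1.0):
    mean = (1.0 - t) * x0 + t * x1               # linear interpolation of endpoints
    std = sigma * np.sqrt(t * (1.0 - t))         # variance vanishes at both pins
    return mean + std * np.random.randn(*x0.shape)

xt = brownian_bridge(np.zeros((32, 32)), np.ones((32, 32)), t=0.3)
```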

Reflected Diffusion Models

1 code implementation • 10 Apr 2023 • Aaron Lou, Stefano Ermon

To incorporate data constraints in a principled manner, we present Reflected Diffusion Models, which instead reverse a reflected stochastic differential equation evolving on the support of the data.

 Ranked #1 on Image Generation on CIFAR-10 (Inception score metric)

Image Generation
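
The reflection mechanism can be sketched with an Euler-Maruyama loop that folds each iterate back into the domain. This illustrates simulating a reflected SDE on [0, 1]^d under an assumed drift and noise scale, not the paper's trained sampler.

```python
# Euler-Maruyama with reflecting boundaries on [0, 1]^d.
import numpy as np

def reflect_into_unit(x):
    """Fold x into [0, 1] by repeated reflection at 0 and 1."""
    x = np.abs(x)                            # reflect at 0
    x = np.mod(x, 2.0)                       # period of the triangle wave
    return np.where(x > 1.0, 2.0 - x, x)     # reflect at 1

def reflected_em(x, drift, sigma, dt=1e-3, steps=1000):
    for _ in range(steps):
        x = x + drift(x) * dt + sigma * np.sqrt(dt) * np.random.randn(*x.shape)
        x = reflect_into_unit(x)             # enforce the reflecting boundary
    return x

x = reflected_em(np.full((4, 4), 0.5), drift=lambda x: -x, sigma=0.5)
```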

Intrinsic Dimension, Persistent Homology and Generalization in Neural Networks

2 code implementations • NeurIPS 2021 • Tolga Birdal, Aaron Lou, Leonidas Guibas, Umut Şimşekli

Disobeying the classical wisdom of statistical learning theory, modern deep neural networks generalize well even though they typically contain millions of parameters.

Learning Theory • Topological Data Analysis
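
Persistence diagrams of the kind the paper builds on can be computed with the ripser package. The random point cloud and the total-persistence statistic below are illustrative assumptions, not the paper's exact generalization measure.

```python
# Persistent homology of a point cloud via Vietoris-Rips filtration.
import numpy as np
from ripser import ripser

X = np.random.randn(100, 5)               # e.g., points sampled along training iterates
dgms = ripser(X, maxdim=1)['dgms']        # persistence diagrams for H0 and H1
h0 = dgms[0][np.isfinite(dgms[0][:, 1])]  # drop the one infinite H0 bar
total_persistence = float(np.sum(h0[:, 1] - h0[:, 0]))
print(total_persistence)
```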

Learning Complex Geometric Structures from Data with Deep Riemannian Manifolds

no code implementations • 29 Sep 2021 • Aaron Lou, Maximilian Nickel, Mustafa Mukadam, Brandon Amos

We present Deep Riemannian Manifolds, a new class of neural network parameterized Riemannian manifolds that can represent and learn complex geometric structures.
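
One common way to parameterize a Riemannian metric with a network, shown purely for illustration, is to predict a factor L(x) and form g(x) = L Lᵀ + εI, which is positive definite by construction; the paper's actual construction may differ.

```python
# A neural-network-parameterized SPD metric tensor; an assumed design, not the paper's.
import torch
import torch.nn as nn

class LearnedMetric(nn.Module):
    def __init__(self, dim, hidden=64, eps=1e-3):
        super().__init__()
        self.dim, self.eps = dim, eps
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.SiLU(),
                                 nn.Linear(hidden, dim * dim))

    def forward(self, x):
        L = self.net(x).view(-1, self.dim, self.dim)
        I = torch.eye(self.dim, device=x.device)
        return L @ L.transpose(-1, -2) + self.eps * I   # g(x) is SPD by construction
```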

Equivariant Manifold Flows

1 code implementation • NeurIPS 2021 • Isay Katsman, Aaron Lou, Derek Lim, Qingxuan Jiang, Ser-Nam Lim, Christopher De Sa

Tractably modelling distributions over manifolds has long been an important goal in the natural sciences.

Neural Manifold Ordinary Differential Equations

3 code implementations • NeurIPS 2020 • Aaron Lou, Derek Lim, Isay Katsman, Leo Huang, Qingxuan Jiang, Ser-Nam Lim, Christopher De Sa

To better conform to data geometry, recent deep generative modelling techniques adapt Euclidean constructions to non-Euclidean spaces.

Density Estimation
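
A naive way to flow on a manifold, for intuition only: project each Euler step of a vector field onto the sphere's tangent space and retract back. The paper develops principled manifold ODE solvers and their adjoints, not this scheme.

```python
# Projected-Euler integration of a vector field on the unit sphere.
import numpy as np

def sphere_flow(x, field, dt=1e-2, steps=100):
    for _ in range(steps):
        v = field(x)
        v_tan = v - (x @ v) * x        # project onto the tangent space at x
        x = x + dt * v_tan
        x = x / np.linalg.norm(x)      # retract back onto the sphere
    return x

x = sphere_flow(np.array([1.0, 0.0, 0.0]), field=lambda x: np.array([0.0, 1.0, 0.0]))
```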

Differentiating through the Fréchet Mean

2 code implementations • ICML 2020 • Aaron Lou, Isay Katsman, Qingxuan Jiang, Serge Belongie, Ser-Nam Lim, Christopher De Sa

Recent advances in deep representation learning on Riemannian manifolds extend classical deep learning operations to better capture the geometry of the manifold.

Representation Learning
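
For context, the Fréchet (Karcher) mean on the unit sphere can be computed by Riemannian gradient descent with closed-form exp/log maps, as sketched below; the paper's contribution is differentiating through this kind of fixed-point computation.

```python
# Fréchet mean on the unit sphere: minimize mean squared geodesic distance.
import numpy as np

def sphere_log(p, q):
    """Tangent vector at p pointing toward q along the geodesic."""
    theta = np.arccos(np.clip(p @ q, -1.0, 1.0))
    if theta < 1e-10:
        return np.zeros_like(p)
    return theta / np.sin(theta) * (q - np.cos(theta) * p)

def sphere_exp(p, v):
    n = np.linalg.norm(v)
    if n < 1e-10:
        return p
    return np.cos(n) * p + np.sin(n) * v / n

def frechet_mean(points, iters=50):
    m = points[0]
    for _ in range(iters):
        grad = np.mean([sphere_log(m, q) for q in points], axis=0)
        m = sphere_exp(m, grad)       # step along the Riemannian gradient
    return m

pts = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
print(frechet_mean(pts))              # converges toward (1,1,1)/sqrt(3)
```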

Adversarial Example Decomposition

no code implementations • 4 Dec 2018 • Horace He, Aaron Lou, Qingxuan Jiang, Isay Katsman, Serge Belongie, Ser-Nam Lim

Research has shown that widely used deep neural networks are vulnerable to carefully crafted adversarial perturbations.
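
For context, the canonical crafted perturbation is FGSM, a single signed-gradient step on the loss; this is a standard baseline attack, not this paper's decomposition method.

```python
# Fast Gradient Sign Method on an image classifier.
import torch

def fgsm(model, x, y, eps=8 / 255):
    x = x.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # One signed step on the input gradient, clamped to valid pixel range.
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```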
