no code implementations • 11 Jan 2025 • Meihua Dang, Anikait Singh, Linqi Zhou, Stefano Ermon, Jiaming Song
With PPD, a diffusion model learns the individual preferences of a population of users in a few-shot way, enabling generalization to unseen users.
1 code implementation • 24 Oct 2024 • Hansheng Chen, Bokui Shen, Yulin Liu, Ruoxi Shi, Linqi Zhou, Connor Z. Lin, Jiayuan Gu, Hao Su, Gordon Wetzstein, Leonidas Guibas
Multi-view image diffusion models have significantly advanced open-domain 3D object generation.
no code implementations • 13 May 2024 • Aaditya Prasad, Kevin Lin, Jimmy Wu, Linqi Zhou, Jeannette Bohg
Many robotic systems, such as mobile manipulators or quadrotors, cannot be equipped with high-end GPUs due to space, weight, and power constraints.
no code implementations • 6 Dec 2023 • Samar Khanna, Patrick Liu, Linqi Zhou, Chenlin Meng, Robin Rombach, Marshall Burke, David Lobell, Stefano Ermon
Our method outperforms previous state-of-the-art methods for satellite image generation and is the first large-scale generative foundation model for satellite imagery.
1 code implementation • CVPR 2024 • Linqi Zhou, Andy Shih, Chenlin Meng, Stefano Ermon
Recent methods such as Score Distillation Sampling (SDS) and Variational Score Distillation (VSD) using 2D diffusion models for text-to-3D generation have demonstrated impressive generation quality.
1 code implementation • CVPR 2024 • Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam, Stefano Ermon, Caiming Xiong, Shafiq Joty, Nikhil Naik
Large language models (LLMs) are fine-tuned using human comparison data with Reinforcement Learning from Human Feedback (RLHF) methods to make them better aligned with users' preferences.
4 code implementations • 29 Sep 2023 • Linqi Zhou, Aaron Lou, Samar Khanna, Stefano Ermon
However, for many applications such as image editing, the model input comes from a distribution that is not random noise.
1 code implementation • 28 Jun 2023 • Isaac Kauvar, Chris Doyle, Linqi Zhou, Nick Haber
Agents must be able to adapt quickly as an environment changes.
1 code implementation • 24 Dec 2022 • Linqi Zhou, Michael Poli, Winnie Xu, Stefano Massaroli, Stefano Ermon
Methods based on ordinary differential equations (ODEs) are widely used to build generative models of time-series.
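The core idea referenced here — using an ODE solver to evolve a state under a learned vector field — can be illustrated with a minimal sketch. This is not the paper's method; the vector field below is a hypothetical stand-in (a fixed rotation field) where a trained network would normally go, and the integrator is plain explicit Euler.

```python
import numpy as np

def euler_integrate(f, z0, t_grid):
    """Integrate dz/dt = f(z, t) with explicit Euler steps, returning the trajectory."""
    z = np.array(z0, dtype=float)
    traj = [z.copy()]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        z = z + (t1 - t0) * f(z, t0)
        traj.append(z.copy())
    return np.stack(traj)

def vector_field(z, t):
    """Hypothetical dynamics; in an ODE-based generative model this is a neural net."""
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])  # rotation field: z1' = z2, z2' = -z1
    return A @ z

t_grid = np.linspace(0.0, np.pi / 2, 100)
traj = euler_integrate(vector_field, [1.0, 0.0], t_grid)
# A quarter-turn rotation carries (1, 0) approximately to (0, -1)
```

In an actual ODE-based time-series model, `vector_field` would be parameterized and trained so that trajectories of the ODE match observed sequences.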
no code implementations • 1 Dec 2022 • Gimin Nam, Mariem Khlifi, Andrew Rodriguez, Alberto Tono, Linqi Zhou, Paul Guerrero
We propose a diffusion model for neural implicit representations of 3D shapes that operates in the latent space of an auto-decoder.
no code implementations • 28 Sep 2022 • Chenlin Meng, Linqi Zhou, Kristy Choi, Tri Dao, Stefano Ermon
Normalizing flows model complex probability distributions using maps obtained by composing invertible layers.
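The change-of-variables mechanism behind normalizing flows can be sketched with toy invertible layers. This is an illustrative example, not the paper's construction: each layer here is a simple elementwise affine map (real flows use richer invertible layers), and the base distribution is a standard normal.

```python
import numpy as np

class AffineLayer:
    """Invertible elementwise map z -> s * z + b with tractable log|det Jacobian|."""
    def __init__(self, s, b):
        self.s, self.b = float(s), float(b)

    def forward(self, z):
        return self.s * z + self.b

    def inverse(self, x):
        return (x - self.b) / self.s

    def log_abs_det(self):
        return np.log(abs(self.s))

def flow_log_prob(layers, x):
    """log p(x) = log p_base(f^{-1}(x)) - sum of forward log|det Jacobian| terms."""
    z = x
    total_log_det = 0.0
    for layer in reversed(layers):  # invert the composition, last layer first
        total_log_det += layer.log_abs_det()
        z = layer.inverse(z)
    log_base = -0.5 * z ** 2 - 0.5 * np.log(2 * np.pi)  # standard normal density
    return log_base - total_log_det

# One layer x = 2z + 1 pushes N(0, 1) forward to N(1, 4)
lp = flow_log_prob([AffineLayer(2.0, 1.0)], np.array(1.0))
```

Composing more layers only adds their `log_abs_det` terms, which is what keeps the density of a deep flow exactly computable.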
no code implementations • 30 Sep 2021 • Luyao Yuan, Zipeng Fu, Linqi Zhou, Kexin Yang, Song-Chun Zhu
In the study of multi-agent systems, agents' intentions are usually ignored.
1 code implementation • ICCV 2021 • Linqi Zhou, Yilun Du, Jiajun Wu
We propose a novel approach for probabilistic generative modeling of 3D shapes.
1 code implementation • ACL 2020 • Bo Pang, Erik Nijkamp, Wenjuan Han, Linqi Zhou, Yixian Liu, Kewei Tu
Open-domain dialogue generation has gained increasing attention in Natural Language Processing.
no code implementations • CVPR 2020 • Tian Han, Erik Nijkamp, Linqi Zhou, Bo Pang, Song-Chun Zhu, Ying Nian Wu
This paper proposes a joint training method to learn both the variational auto-encoder (VAE) and the latent energy-based model (EBM).
no code implementations • ECCV 2020 • Erik Nijkamp, Bo Pang, Tian Han, Linqi Zhou, Song-Chun Zhu, Ying Nian Wu
Learning such a generative model requires inferring the latent variables for each training example from their posterior distribution.
no code implementations • 19 Nov 2019 • Dandan Zhu, Tian Han, Linqi Zhou, Xiaokang Yang, Ying Nian Wu
We propose the clustered generator model for clustering, which contains both continuous and discrete latent variables.
1 code implementation • 1 Sep 2019 • Zijun Zhang, Linqi Zhou, Liangke Gou, Ying Nian Wu
We report a neural architecture search framework, BioNAS, that is tailored for biomedical researchers to easily build, evaluate, and uncover novel knowledge from interpretable deep learning models.