Search Results for author: Ruiqi Gao

Found 42 papers, 14 papers with code

SimVS: Simulating World Inconsistencies for Robust View Synthesis

no code implementations 10 Dec 2024 Alex Trevithick, Roni Paiss, Philipp Henzler, Dor Verbin, Rundi Wu, Hadi AlZayer, Ruiqi Gao, Ben Poole, Jonathan T. Barron, Aleksander Holynski, Ravi Ramamoorthi, Pratul P. Srinivasan

Novel-view synthesis techniques achieve impressive results for static scenes but struggle when faced with the inconsistencies inherent to casual capture settings: varying illumination, scene motion, and other unintended effects that are difficult to model explicitly.

Novel View Synthesis

Simpler Diffusion (SiD2): 1.5 FID on ImageNet512 with pixel-space diffusion

no code implementations 25 Oct 2024 Emiel Hoogeboom, Thomas Mensink, Jonathan Heek, Kay Lamerigts, Ruiqi Gao, Tim Salimans

Compared to pixel-space models that are trained end-to-end, latent models are perceived to be more efficient and to produce higher image quality at high resolution.

Image Generation

Think Twice Before You Act: Improving Inverse Problem Solving With MCMC

no code implementations 13 Sep 2024 Yaxuan Zhu, Zehao Dou, Haoxin Zheng, Yasi Zhang, Ying Nian Wu, Ruiqi Gao

Despite the merit of being versatile in solving various inverse problems without re-training, the performance of DPS is hindered by the fact that this posterior approximation can be inaccurate, especially at high noise levels.

Deblurring Super-Resolution

Generative Hierarchical Materials Search

no code implementations 10 Sep 2024 Sherry Yang, Simon Batzner, Ruiqi Gao, Muratahan Aykol, Alexander L. Gaunt, Brendan McMorrow, Danilo J. Rezende, Dale Schuurmans, Igor Mordatch, Ekin D. Cubuk

We confirm that GenMS is able to generate common crystal structures, such as double perovskites or spinels, solely from natural language input, and hence can form the foundation for more complex structure generation in the near future.

Formation Energy Graph Neural Network

Large Language Models are Limited in Out-of-Context Knowledge Reasoning

1 code implementation 11 Jun 2024 Peng Hu, Changjiang Gao, Ruiqi Gao, Jiajun Chen, ShuJian Huang

Using this dataset, we evaluated several LLMs and discovered that their proficiency in this aspect is limited, regardless of whether the knowledge is trained in separate or adjacent training settings.

Attribute Logical Reasoning +2

Latent Energy-Based Odyssey: Black-Box Optimization via Expanded Exploration in the Energy-Based Latent Space

no code implementations 27 May 2024 Peiyu Yu, Dinghuai Zhang, Hengzhi He, Xiaojian Ma, Ruiyao Miao, Yifan Lu, Yasi Zhang, Deqian Kong, Ruiqi Gao, Jianwen Xie, Guang Cheng, Ying Nian Wu

To this end, we formulate a learnable energy-based latent space and propose a Noise-intensified Telescoping density-Ratio Estimation (NTRE) scheme for variational learning of an accurate latent space model without costly Markov Chain Monte Carlo.

Density Ratio Estimation

EM Distillation for One-step Diffusion Models

no code implementations 27 May 2024 Sirui Xie, Zhisheng Xiao, Diederik P Kingma, Tingbo Hou, Ying Nian Wu, Kevin Patrick Murphy, Tim Salimans, Ben Poole, Ruiqi Gao

We propose EM Distillation (EMD), a maximum likelihood-based approach that distills a diffusion model to a one-step generator model with minimal loss of perceptual quality.

An Investigation of Conformal Isometry Hypothesis for Grid Cells

no code implementations 27 May 2024 Dehong Xu, Ruiqi Gao, Wen-Hao Zhang, Xue-Xin Wei, Ying Nian Wu

As the agent moves, this vector rotates within a 2D manifold in the neural space, driven by a recurrent neural network.

CoLay: Controllable Layout Generation through Multi-conditional Latent Diffusion

no code implementations 18 May 2024 Chin-Yi Cheng, Ruiqi Gao, Forrest Huang, Yang Li

Layout design generation has recently gained significant attention due to its potential applications in various fields, including UI, graphic, and floor plan design.

Layout Design

CAT3D: Create Anything in 3D with Multi-View Diffusion Models

no code implementations 16 May 2024 Ruiqi Gao, Aleksander Holynski, Philipp Henzler, Arthur Brussee, Ricardo Martin-Brualla, Pratul Srinivasan, Jonathan T. Barron, Ben Poole

Advances in 3D reconstruction have enabled high-quality 3D capture, but require a user to collect hundreds to thousands of images to create a 3D scene.

3D Reconstruction

MagicMirror: Fast and High-Quality Avatar Generation with a Constrained Search Space

no code implementations 1 Apr 2024 Armand Comas-Massagué, Di Qiu, Menglei Chai, Marcel Bühler, Amit Raj, Ruiqi Gao, Qiangeng Xu, Mark Matthews, Paulo Gotardo, Octavia Camps, Sergio Orts-Escolano, Thabo Beeler

We introduce a novel framework for 3D human avatar generation and personalization, leveraging text prompts to enhance user engagement and customization.

Learning Energy-Based Prior Model with Diffusion-Amortized MCMC

1 code implementation NeurIPS 2023 Peiyu Yu, Yaxuan Zhu, Sirui Xie, Xiaojian Ma, Ruiqi Gao, Song-Chun Zhu, Ying Nian Wu

To remedy this sampling issue, in this paper we introduce a simple but effective diffusion-based amortization method for long-run MCMC sampling and develop a novel learning algorithm for the latent space EBM based on it.

Learning Energy-Based Models by Cooperative Diffusion Recovery Likelihood

1 code implementation 10 Sep 2023 Yaxuan Zhu, Jianwen Xie, Ying Nian Wu, Ruiqi Gao

Training energy-based models (EBMs) on high-dimensional data can be both challenging and time-consuming, and there exists a noticeable gap in sample quality between EBMs and other generative frameworks like GANs and diffusion models.

Image Inpainting Out-of-Distribution Detection

Conformal Isometry of Lie Group Representation in Recurrent Network of Grid Cells

1 code implementation 6 Oct 2022 Dehong Xu, Ruiqi Gao, Wen-Hao Zhang, Xue-Xin Wei, Ying Nian Wu

Recurrent neural networks have been proposed to explain the properties of the grid cells by updating the neural activity vector based on the velocity input of the animal.

On Distillation of Guided Diffusion Models

2 code implementations CVPR 2023 Chenlin Meng, Robin Rombach, Ruiqi Gao, Diederik P. Kingma, Stefano Ermon, Jonathan Ho, Tim Salimans

For standard diffusion models trained on the pixel-space, our approach is able to generate images visually comparable to that of the original model using as few as 4 sampling steps on ImageNet 64x64 and CIFAR-10, achieving FID/IS scores comparable to that of the original model while being up to 256 times faster to sample from.

Denoising Image Generation +1

Latent Diffusion Energy-Based Model for Interpretable Text Modeling

2 code implementations 13 Jun 2022 Peiyu Yu, Sirui Xie, Xiaojian Ma, Baoxiong Jia, Bo Pang, Ruiqi Gao, Yixin Zhu, Song-Chun Zhu, Ying Nian Wu

Latent space Energy-Based Models (EBMs), also known as energy-based priors, have drawn growing interest in generative modeling.

MCMC Should Mix: Learning Energy-Based Model with Flow-Based Backbone

no code implementations ICLR 2022 Erik Nijkamp, Ruiqi Gao, Pavel Sountsov, Srinivas Vasudevan, Bo Pang, Song-Chun Zhu, Ying Nian Wu

However, MCMC sampling of EBMs in high-dimensional data space is generally not mixing, because the energy function, which is usually parametrized by a deep network, is highly multi-modal in the data space.

Learning Neural Representation of Camera Pose with Matrix Representation of Pose Shift via View Synthesis

1 code implementation CVPR 2021 Yaxuan Zhu, Ruiqi Gao, Siyuan Huang, Song-Chun Zhu, Ying Nian Wu

Specifically, the camera pose and 3D scene are represented as vectors and the local camera movement is represented as a matrix operating on the vector of the camera pose.

Decoder Novel View Synthesis +1

A Theory of Label Propagation for Subpopulation Shift

no code implementations 22 Feb 2021 Tianle Cai, Ruiqi Gao, Jason D. Lee, Qi Lei

In this work, we propose a provably effective framework for domain adaptation based on label propagation.

Domain Adaptation Generalization Bounds

Learning Energy-Based Models by Diffusion Recovery Likelihood

2 code implementations ICLR 2021 Ruiqi Gao, Yang Song, Ben Poole, Ying Nian Wu, Diederik P. Kingma

Inspired by recent progress on diffusion probabilistic models, we present a diffusion recovery likelihood method to tractably learn and sample from a sequence of EBMs trained on increasingly noisy versions of a dataset.

Image Generation

A Representational Model of Grid Cells' Path Integration Based on Matrix Lie Algebras

no code implementations 28 Sep 2020 Ruiqi Gao, Jianwen Xie, Xue-Xin Wei, Song-Chun Zhu, Ying Nian Wu

The grid cells in the mammalian medial entorhinal cortex exhibit striking hexagon firing patterns when the agent navigates in the open field.

Position

Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot

1 code implementation NeurIPS 2020 Jingtong Su, Yihang Chen, Tianle Cai, Tianhao Wu, Ruiqi Gao, Li-Wei Wang, Jason D. Lee

In this paper, we conduct sanity checks for the above beliefs on several recent unstructured pruning methods and surprisingly find that: (1) A set of methods which aims to find good subnetworks of the randomly-initialized network (which we call "initial tickets"), hardly exploits any information from the training data; (2) For the pruned networks obtained by these methods, randomly changing the preserved weights in each layer, while keeping the total number of preserved weights unchanged per layer, does not affect the final performance.

Network Pruning

On Path Integration of Grid Cells: Group Representation and Isotropic Scaling

1 code implementation NeurIPS 2021 Ruiqi Gao, Jianwen Xie, Xue-Xin Wei, Song-Chun Zhu, Ying Nian Wu

In this paper, we conduct theoretical analysis of a general representation model of path integration by grid cells, where the 2D self-position is encoded as a higher dimensional vector, and the 2D self-motion is represented by a general transformation of the vector.

Dimensionality Reduction Position

MCMC Should Mix: Learning Energy-Based Model with Neural Transport Latent Space MCMC

no code implementations 12 Jun 2020 Erik Nijkamp, Ruiqi Gao, Pavel Sountsov, Srinivas Vasudevan, Bo Pang, Song-Chun Zhu, Ying Nian Wu

Learning energy-based model (EBM) requires MCMC sampling of the learned model as an inner loop of the learning algorithm.

Convergence of Adversarial Training in Overparametrized Neural Networks

no code implementations NeurIPS 2019 Ruiqi Gao, Tianle Cai, Haochuan Li, Li-Wei Wang, Cho-Jui Hsieh, Jason D. Lee

Neural networks are vulnerable to adversarial examples, i.e., inputs that are imperceptibly perturbed from natural data and yet incorrectly classified by the network.

Gram-Gauss-Newton Method: Learning Overparameterized Neural Networks for Regression Problems

no code implementations 28 May 2019 Tianle Cai, Ruiqi Gao, Jikai Hou, Siyu Chen, Dong Wang, Di He, Zhihua Zhang, Li-Wei Wang

First-order methods such as stochastic gradient descent (SGD) are currently the standard algorithm for training deep neural networks.

regression Second-order methods

Learning V1 Simple Cells with Vector Representation of Local Content and Matrix Representation of Local Motion

no code implementations 24 Jan 2019 Ruiqi Gao, Jianwen Xie, Siyuan Huang, Yufan Ren, Song-Chun Zhu, Ying Nian Wu

This paper proposes a representational model for image pairs such as consecutive video frames that are related by local pixel displacements, in the hope that the model may shed light on motion perception in primary visual cortex (V1).

Optical Flow Estimation

Learning Dynamic Generator Model by Alternating Back-Propagation Through Time

no code implementations 27 Dec 2018 Jianwen Xie, Ruiqi Gao, Zilong Zheng, Song-Chun Zhu, Ying Nian Wu

The non-linear transformation of this transition model can be parametrized by a feedforward neural network.

Learning Grid Cells as Vector Representation of Self-Position Coupled with Matrix Representation of Self-Motion

1 code implementation ICLR 2019 Ruiqi Gao, Jianwen Xie, Song-Chun Zhu, Ying Nian Wu

In this model, the 2D self-position of the agent is represented by a high-dimensional vector, and the 2D self-motion or displacement of the agent is represented by a matrix that transforms the vector.

Position

A Tale of Three Probabilistic Families: Discriminative, Descriptive and Generative Models

no code implementations 9 Oct 2018 Ying Nian Wu, Ruiqi Gao, Tian Han, Song-Chun Zhu

In this paper, we review three families of probability models, namely, the discriminative models, the descriptive models, and the generative models.

Descriptive

Deformable Generator Networks: Unsupervised Disentanglement of Appearance and Geometry

2 code implementations 16 Jun 2018 Xianglei Xing, Ruiqi Gao, Tian Han, Song-Chun Zhu, Ying Nian Wu

We present a deformable generator model to disentangle the appearance and geometric information for both image and video data in a purely unsupervised manner.

Disentanglement Transfer Learning

Learning Descriptor Networks for 3D Shape Synthesis and Analysis

1 code implementation CVPR 2018 Jianwen Xie, Zilong Zheng, Ruiqi Gao, Wenguan Wang, Song-Chun Zhu, Ying Nian Wu

This paper proposes a 3D shape descriptor network, which is a deep convolutional energy-based model, for modeling volumetric shape patterns.

Object

Learning Energy-Based Models as Generative ConvNets via Multi-grid Modeling and Sampling

no code implementations CVPR 2018 Ruiqi Gao, Yang Lu, Junpei Zhou, Song-Chun Zhu, Ying Nian Wu

Within each iteration of our learning algorithm, for each observed training image, we generate synthesized images at multiple grids by initializing the finite-step MCMC sampling from a minimal 1 x 1 version of the training image.

Cooperative Training of Descriptor and Generator Networks

no code implementations 29 Sep 2016 Jianwen Xie, Yang Lu, Ruiqi Gao, Song-Chun Zhu, Ying Nian Wu

Specifically, within each iteration of the cooperative learning algorithm, the generator model generates initial synthesized examples to initialize a finite-step MCMC that samples and trains the energy-based descriptor model.
