Search Results for author: Lantao Yu

Found 31 papers, 14 papers with code

Isotropic Gaussian Splatting for Real-Time Radiance Field Rendering

no code implementations 21 Mar 2024 Yuanhao Gong, Lantao Yu, Guanghui Yue

The 3D Gaussian splatting method has drawn considerable attention thanks to its fast training and the high quality of its rendered images.

3D Reconstruction

Offline Imitation Learning with Suboptimal Demonstrations via Relaxed Distribution Matching

no code implementations 5 Mar 2023 Lantao Yu, Tianhe Yu, Jiaming Song, Willie Neiswanger, Stefano Ermon

In this case, a well-known issue is the distribution shift between the learned policy and the behavior policy that collects the offline data.

Continuous Control Imitation Learning

Generalizing Bayesian Optimization with Decision-theoretic Entropies

no code implementations 4 Oct 2022 Willie Neiswanger, Lantao Yu, Shengjia Zhao, Chenlin Meng, Stefano Ermon

Bayesian optimization (BO) is a popular method for efficiently inferring optima of an expensive black-box function via a sequence of queries.

Bayesian Optimization Decision Making
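
The query loop at the heart of BO can be sketched with a plain GP surrogate and an upper-confidence-bound acquisition rule. This is a generic stand-in, not the decision-theoretic entropy acquisition the paper proposes; the kernel, length scale, and test objective below are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel between 1-D point sets a and b.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_tr, y_tr, x_q, noise=1e-6):
    # GP regression posterior mean and variance at query points x_q.
    K = rbf(x_tr, x_tr) + noise * np.eye(len(x_tr))
    Ks = rbf(x_q, x_tr)
    mu = Ks @ np.linalg.solve(K, y_tr)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.maximum(var, 1e-12)

def f(x):
    # The expensive black-box objective (only ever evaluated, never inspected).
    return -np.sin(3 * x) - x ** 2 + 0.7 * x

grid = np.linspace(-1.0, 2.0, 400)
xs = [-0.5, 1.5]                         # initial design
ys = [f(x) for x in xs]
for _ in range(10):                      # the sequence of queries
    mu, var = gp_posterior(np.array(xs), np.array(ys), grid)
    ucb = mu + 2.0 * np.sqrt(var)        # upper-confidence-bound acquisition
    x_next = float(grid[np.argmax(ucb)])
    xs.append(x_next)
    ys.append(f(x_next))

print(max(ys))                           # best value found by the loop
```

The acquisition trades off mean (exploitation) against posterior standard deviation (exploration); swapping in a different acquisition function changes only the `ucb` line.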

A General Recipe for Likelihood-free Bayesian Optimization

1 code implementation 27 Jun 2022 Jiaming Song, Lantao Yu, Willie Neiswanger, Stefano Ermon

To extend BO to a broader class of models and utilities, we propose likelihood-free BO (LFBO), an approach based on likelihood-free inference.

Bayesian Optimization

GeoDiff: a Geometric Diffusion Model for Molecular Conformation Generation

2 code implementations ICLR 2022 Minkai Xu, Lantao Yu, Yang Song, Chence Shi, Stefano Ermon, Jian Tang

GeoDiff treats each atom as a particle and learns to directly reverse the diffusion process (i.e., transforming from a noise distribution to stable conformations) as a Markov chain.

Drug Discovery
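
The reverse-as-Markov-chain idea can be illustrated on a toy 1-D example where the score of the diffused marginal is known in closed form. GeoDiff learns this score with a network over atoms; the Gaussian data and constant noise schedule here are assumptions purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
mu0, s0 = 2.0, 0.5     # toy "data" distribution N(mu0, s0^2)
beta = 5.0             # constant noise rate of the forward VP diffusion

def score(x, t):
    # Exact score of the diffused marginal p_t; closed-form for Gaussian
    # data (this is the quantity a diffusion model learns with a network).
    a = np.exp(-beta * t)              # alpha_bar(t)
    m = np.sqrt(a) * mu0
    v = a * s0 ** 2 + (1.0 - a)
    return -(x - m) / v

n_steps = 1000
dt = 1.0 / n_steps
x = rng.standard_normal(5000)          # start from the noise distribution
t = 1.0
for _ in range(n_steps):               # reverse the diffusion as a Markov chain
    drift = 0.5 * beta * x + beta * score(x, t)
    x = x + dt * drift + np.sqrt(beta * dt) * rng.standard_normal(x.shape)
    t -= dt

print(x.mean(), x.std())               # should approach mu0 and s0
```

Each step is one transition of the reverse Markov chain: a drift toward the data distribution driven by the score, plus fresh Gaussian noise.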

A Unified Framework for Multi-distribution Density Ratio Estimation

no code implementations 7 Dec 2021 Lantao Yu, Yujia Jin, Stefano Ermon

Binary density ratio estimation (DRE), the problem of estimating the ratio $p_1/p_2$ of two densities given empirical samples from each, provides the foundation for many state-of-the-art machine learning algorithms such as contrastive representation learning and covariate shift adaptation.

Density Ratio Estimation Representation Learning
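
The classic reduction behind binary DRE fits a probabilistic classifier between the two sample sets: with equal sample sizes, the optimal classifier's logit equals log p1(x)/p2(x). A minimal numpy sketch on 1-D Gaussians (the distributions and optimizer settings are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x1 = rng.normal(0.0, 1.0, n)      # samples from p1 = N(0, 1)
x2 = rng.normal(1.0, 1.0, n)      # samples from p2 = N(1, 1)

# Logistic regression distinguishing p1 (label 1) from p2 (label 0).
# Here the true log-ratio is linear: log p1(x)/p2(x) = -x + 0.5.
X = np.concatenate([x1, x2])
y = np.concatenate([np.ones(n), np.zeros(n)])
w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))   # sigmoid of the logit
    g = p - y                                 # gradient of log-loss wrt logit
    w -= lr * np.mean(g * X)
    b -= lr * np.mean(g)

def log_ratio(x):
    # Estimated log p1(x)/p2(x): the fitted classifier's logit.
    return w * x + b

print(w, b)   # should be close to -1.0 and 0.5
```

The ratio estimate is then `np.exp(log_ratio(x))`; everything beyond the binary case is where multi-distribution formulations like the one in this paper come in.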

H-Entropy Search: Generalizing Bayesian Optimization with a Decision-theoretic Uncertainty Measure

no code implementations 29 Sep 2021 Willie Neiswanger, Lantao Yu, Shengjia Zhao, Chenlin Meng, Stefano Ermon

For special cases of the loss and design space, we develop gradient-based methods to efficiently optimize our proposed family of acquisition functions, and demonstrate that the resulting BO procedure shows strong empirical performance on a diverse set of optimization tasks.

Bayesian Optimization

Manifold-Inspired Single Image Interpolation

no code implementations 31 Jul 2021 Lantao Yu, Kuida Liu, Michael T. Orchard

To overcome the challenge in the second part, we propose to use the aliasing-removed image to guide the initialization of the interpolated image and develop a progressive scheme to refine the interpolated image based on manifold models.

Multi-Agent Imitation Learning with Copulas

no code implementations 10 Jul 2021 Hongwei Wang, Lantao Yu, Zhangjie Cao, Stefano Ermon

Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions, which is essential for understanding physical, social, and team-play systems.

Imitation Learning

Fast and High-Quality Blind Multi-Spectral Image Pansharpening

no code implementations 17 Mar 2021 Lantao Yu, Dehong Liu, Hassan Mansour, Petros T. Boufounos

First, we estimate the blur kernel by computing the kernel coefficients with minimum total generalized variation that blur a downsampled version of the PAN image to approximate a linear combination of the LRMS image channels.

Image Reconstruction Pansharpening +1

Autoregressive Score Matching

no code implementations NeurIPS 2020 Chenlin Meng, Lantao Yu, Yang Song, Jiaming Song, Stefano Ermon

To increase flexibility, we propose autoregressive conditional score models (AR-CSM) where we parameterize the joint distribution in terms of the derivatives of univariate log-conditionals (scores), which need not be normalized.

Density Estimation Image Denoising +1
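
A univariate score model trained by denoising score matching gives the flavor of learning unnormalized scores. This hypothetical linear-model example is far simpler than AR-CSM's neural per-dimension conditionals, but it shows the objective: regress a score network onto `-eps/sigma` at noised inputs, which recovers the score of the sigma-smoothed density.

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu, tau, sigma = 50000, 1.0, 1.0, 0.1
x = rng.normal(mu, tau, n)           # data from N(mu, tau^2)

# Denoising score matching for a linear score model s(x) = w*x + b.
# The minimizer is the score of the smoothed density N(mu, tau^2 + sigma^2):
# w* = -1/(tau^2 + sigma^2), b* = mu/(tau^2 + sigma^2).
w, b, lr = 0.0, 0.0, 0.05
for _ in range(3000):
    eps = rng.standard_normal(n)
    xt = x + sigma * eps             # noised sample
    r = (w * xt + b) + eps / sigma   # residual of the DSM objective
    w -= lr * np.mean(r * xt)        # gradient step on E[r^2]
    b -= lr * np.mean(r)

print(w, b)   # close to -1/1.01 and 1.0/1.01
```

No normalizing constant ever appears, which is the point: scores sidestep normalization, and AR-CSM stitches one such (learned, conditional) score per dimension into a joint model.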

Understanding Self-supervised Learning with Dual Deep Networks

2 code implementations 1 Oct 2020 Yuandong Tian, Lantao Yu, Xinlei Chen, Surya Ganguli

We propose a novel theoretical framework to understand contrastive self-supervised learning (SSL) methods that employ dual pairs of deep ReLU networks (e.g., SimCLR).

Self-Supervised Learning

Improving Maximum Likelihood Training for Text Generation with Density Ratio Estimation

no code implementations 12 Jul 2020 Yuxuan Song, Ning Miao, Hao Zhou, Lantao Yu, Mingxuan Wang, Lei Li

Auto-regressive sequence generative models trained by Maximum Likelihood Estimation suffer from the exposure bias problem in practical finite-sample scenarios.

Density Ratio Estimation Diversity +1

Infomax Neural Joint Source-Channel Coding via Adversarial Bit Flip

1 code implementation 3 Apr 2020 Yuxuan Song, Minkai Xu, Lantao Yu, Hao Zhou, Shuo Shao, Yong Yu

In this paper, motivated by the inherent connections between neural joint source-channel coding and discrete representation learning, we propose a novel regularization method called Infomax Adversarial-Bit-Flip (IABF) to improve the stability and robustness of the neural joint source-channel coding scheme.

Decoder Representation Learning

Training Deep Energy-Based Models with f-Divergence Minimization

1 code implementation ICML 2020 Lantao Yu, Yang Song, Jiaming Song, Stefano Ermon

Experimental results demonstrate the superiority of f-EBM over contrastive divergence, as well as the benefits of training EBMs using f-divergences other than KL.

Improving Unsupervised Domain Adaptation with Variational Information Bottleneck

no code implementations 21 Nov 2019 Yuxuan Song, Lantao Yu, Zhangjie Cao, Zhiming Zhou, Jian Shen, Shuo Shao, Wei-Nan Zhang, Yong Yu

Domain adaptation aims to leverage the supervision signal of source domain to obtain an accurate model for target domain, where the labels are not available.

Unsupervised Domain Adaptation

Multi-Agent Adversarial Inverse Reinforcement Learning

1 code implementation 30 Jul 2019 Lantao Yu, Jiaming Song, Stefano Ermon

Reinforcement learning agents are prone to undesired behaviors due to reward mis-specification.

reinforcement-learning Reinforcement Learning (RL)

Lipschitz Generative Adversarial Nets

1 code implementation 15 Feb 2019 Zhiming Zhou, Jiadong Liang, Yuxuan Song, Lantao Yu, Hongwei Wang, Wei-Nan Zhang, Yong Yu, Zhihua Zhang

By contrast, Wasserstein GAN (WGAN), where the discriminative function is restricted to 1-Lipschitz, does not suffer from such a gradient uninformativeness problem.
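
Why the 1-Lipschitz constraint keeps gradients informative can be seen with a 1-D linear critic clipped to |w| <= 1: even when real and fake samples barely overlap, the critic's slope toward the generator never saturates. The setup below is an illustrative sketch, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(3.0, 1.0, 10000)   # "real" samples
fake = rng.normal(0.0, 1.0, 10000)   # generator samples, far from the real ones

# A linear critic f(x) = w * x with |w| <= 1 is 1-Lipschitz by construction.
# Maximizing E_real[f] - E_fake[f] over this family estimates the
# Wasserstein-1 distance (here, simply the gap between the two means).
w, lr = 0.0, 0.05
for _ in range(200):
    grad = real.mean() - fake.mean()     # d/dw of the critic objective
    w = np.clip(w + lr * grad, -1.0, 1.0)

est_w1 = w * (real.mean() - fake.mean())
critic_slope = w    # f'(x): nonzero everywhere, so the generator always
                    # receives a useful gradient, unlike a saturated
                    # JS-based discriminator on non-overlapping supports
print(est_w1)       # close to 3.0, the distance between the means
```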


Deep Reinforcement Learning for Green Security Games with Real-Time Information

no code implementations 6 Nov 2018 Yufei Wang, Zheyuan Ryan Shi, Lantao Yu, Yi Wu, Rohit Singh, Lucas Joppa, Fei Fang

Green Security Games (GSGs) have been proposed and applied to optimize patrols conducted by law enforcement agencies in green security domains such as combating poaching, illegal logging and overfishing.

Q-Learning reinforcement-learning +1

Understanding the Effectiveness of Lipschitz-Continuity in Generative Adversarial Nets

1 code implementation 2 Jul 2018 Zhiming Zhou, Yuxuan Song, Lantao Yu, Hongwei Wang, Jiadong Liang, Wei-Nan Zhang, Zhihua Zhang, Yong Yu

In this paper, we investigate the underlying factor that leads to failure and success in the training of GANs.


A Study of AI Population Dynamics with Million-agent Reinforcement Learning

no code implementations 13 Sep 2017 Yaodong Yang, Lantao Yu, Yiwei Bai, Jun Wang, Wei-Nan Zhang, Ying Wen, Yong Yu

We conduct an empirical study on discovering the ordered collective dynamics exhibited by a population of intelligent agents driven by million-agent reinforcement learning.

reinforcement-learning Reinforcement Learning (RL)

IRGAN: A Minimax Game for Unifying Generative and Discriminative Information Retrieval Models

3 code implementations 30 May 2017 Jun Wang, Lantao Yu, Wei-Nan Zhang, Yu Gong, Yinghui Xu, Benyou Wang, Peng Zhang, Dell Zhang

This paper provides a unified account of two schools of thinking in information retrieval modelling: the generative retrieval focusing on predicting relevant documents given a query, and the discriminative retrieval focusing on predicting relevancy given a query-document pair.

Document Ranking Information Retrieval +2

SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient

23 code implementations 18 Sep 2016 Lantao Yu, Wei-Nan Zhang, Jun Wang, Yong Yu

As a new way of training generative models, Generative Adversarial Nets (GANs), which use a discriminative model to guide the training of the generative model, have enjoyed considerable success in generating real-valued data.

Reinforcement Learning (RL) Text Generation
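
The discriminator-as-reward idea reduces, in the simplest case, to REINFORCE on a categorical generator. This toy single-token sketch (with a hard-coded stand-in discriminator) mirrors the policy-gradient update SeqGAN applies per generated sequence; real SeqGAN uses an RNN generator, a learned discriminator, and Monte Carlo rollouts for intermediate rewards.

```python
import numpy as np

rng = np.random.default_rng(0)
V = 5                       # toy vocabulary size
logits = np.zeros(V)        # "generator": a categorical policy over one token

def disc_reward(token):
    # Stand-in discriminator: treats token 3 as the only "real-looking" output.
    return 1.0 if token == 3 else 0.0

lr = 0.5
for _ in range(500):
    p = np.exp(logits - logits.max()); p /= p.sum()
    tok = rng.choice(V, p=p)                  # sample a length-1 "sequence"
    expected_r = p @ np.array([disc_reward(t) for t in range(V)])
    advantage = disc_reward(tok) - expected_r # reward minus a baseline
    # REINFORCE: grad of log p(tok) w.r.t. logits is (one-hot - p)
    logits += lr * advantage * (np.eye(V)[tok] - p)

p = np.exp(logits - logits.max()); p /= p.sum()
print(p[3])   # probability mass concentrates on the token the discriminator rewards
```

Because the update needs only sampled tokens and scalar rewards, it sidesteps the non-differentiability of discrete outputs that blocks ordinary GAN backpropagation.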
