Search Results for author: Jiaming Song

Found 80 papers, 46 papers with code

Bridging the Gap Between f-GANs and Wasserstein GANs

1 code implementation ICML 2020 Jiaming Song, Stefano Ermon

Variants of generative adversarial networks (GANs) approximately minimize divergences between the model and the data distribution using a discriminator.

Density Ratio Estimation Image Generation +1

AGG: Amortized Generative 3D Gaussians for Single Image to 3D

no code implementations 8 Jan 2024 Dejia Xu, Ye Yuan, Morteza Mardani, Sifei Liu, Jiaming Song, Zhangyang Wang, Arash Vahdat

To overcome these challenges, we introduce an Amortized Generative 3D Gaussian framework (AGG) that instantly produces 3D Gaussians from a single image, eliminating the need for per-instance optimization.

3D Generation 3D Reconstruction +2

DiffiT: Diffusion Vision Transformers for Image Generation

1 code implementation 4 Dec 2023 Ali Hatamizadeh, Jiaming Song, Guilin Liu, Jan Kautz, Arash Vahdat

In this paper, we study the effectiveness of ViTs in diffusion-based generative learning and propose a new model denoted as Diffusion Vision Transformers (DiffiT).

Denoising Image Generation

SMRD: SURE-based Robust MRI Reconstruction with Diffusion Models

2 code implementations 3 Oct 2023 Batu Ozturkler, Chao Liu, Benjamin Eckart, Morteza Mardani, Jiaming Song, Jan Kautz

However, diffusion models require careful tuning of inference hyperparameters on a validation set and are still sensitive to distribution shifts during testing.

MRI Reconstruction

SSIF: Learning Continuous Image Representation for Spatial-Spectral Super-Resolution

no code implementations 30 Sep 2023 Gengchen Mai, Ni Lao, Weiwei Sun, Yuchi Ma, Jiaming Song, Chenlin Meng, Hongxu Ma, Jinmeng Rao, Ziyuan Li, Stefano Ermon

Existing digital sensors capture images at fixed spatial and spectral resolutions (e.g., RGB, multispectral, and hyperspectral images), and each combination requires bespoke machine learning models.

Spectral Super-Resolution Super-Resolution

Improved Order Analysis and Design of Exponential Integrator for Diffusion Models Sampling

1 code implementation 4 Aug 2023 Qinsheng Zhang, Jiaming Song, Yongxin Chen

By reformulating the differential equations in DMs and capitalizing on the theory of exponential integrators, we propose refined EI solvers that fulfill all the order conditions, which we designate as Refined Exponential Solver (RES).

Sphere2Vec: A General-Purpose Location Representation Learning over a Spherical Surface for Large-Scale Geospatial Predictions

no code implementations 30 Jun 2023 Gengchen Mai, Yao Xuan, Wenyun Zuo, Yutong He, Jiaming Song, Stefano Ermon, Krzysztof Janowicz, Ni Lao

So when applied to large-scale real-world GPS coordinate datasets, which require distance metric learning on the spherical surface, both types of models can fail due to the map projection distortion problem (2D) and the spherical-to-Euclidean distance approximation error (3D).

Image Classification Metric Learning +2

A Variational Perspective on Solving Inverse Problems with Diffusion Models

1 code implementation 7 May 2023 Morteza Mardani, Jiaming Song, Jan Kautz, Arash Vahdat

To cope with this challenge, we propose a variational approach that by design seeks to approximate the true posterior distribution.

Denoising Image Restoration +1

CSP: Self-Supervised Contrastive Spatial Pre-Training for Geospatial-Visual Representations

1 code implementation 1 May 2023 Gengchen Mai, Ni Lao, Yutong He, Jiaming Song, Stefano Ermon

To directly leverage the abundant geospatial information associated with images in pre-training, fine-tuning, and inference stages, we present Contrastive Spatial Pre-Training (CSP), a self-supervised learning framework for geo-tagged images.

Contrastive Learning Image Classification +1

DiffCollage: Parallel Generation of Large Content with Diffusion Models

no code implementations CVPR 2023 Qinsheng Zhang, Jiaming Song, Xun Huang, Yongxin Chen, Ming-Yu Liu

We present DiffCollage, a compositional diffusion model that can generate large content by leveraging diffusion models trained on generating pieces of the large content.

Infinite Image Generation Motion Generation

Seer: Language Instructed Video Prediction with Latent Diffusion Models

no code implementations 27 Mar 2023 Xianfan Gu, Chuan Wen, Weirui Ye, Jiaming Song, Yang Gao

Imagining the future trajectory is the key for robots to make sound planning and successfully reach their goals.

Denoising Video Prediction

Offline Imitation Learning with Suboptimal Demonstrations via Relaxed Distribution Matching

no code implementations 5 Mar 2023 Lantao Yu, Tianhe Yu, Jiaming Song, Willie Neiswanger, Stefano Ermon

In this case, a well-known issue is the distribution shift between the learned policy and the behavior policy that collects the offline data.

continuous-control Continuous Control +1

Score-based Diffusion Models in Function Space

no code implementations 14 Feb 2023 Jae Hyun Lim, Nikola B. Kovachki, Ricardo Baptista, Christopher Beckham, Kamyar Azizzadenesheli, Jean Kossaifi, Vikram Voleti, Jiaming Song, Karsten Kreis, Jan Kautz, Christopher Pal, Arash Vahdat, Anima Anandkumar

They consist of a forward process that perturbs input data with Gaussian white noise and a reverse process that learns a score function to generate samples by denoising.

Denoising

PhysDiff: Physics-Guided Human Motion Diffusion Model

no code implementations ICCV 2023 Ye Yuan, Jiaming Song, Umar Iqbal, Arash Vahdat, Jan Kautz

Specifically, we propose a physics-based motion projection module that uses motion imitation in a physics simulator to project the denoised motion of a diffusion step to a physically-plausible motion.

Denoising

Concrete Score Matching: Generalized Score Matching for Discrete Data

no code implementations 2 Nov 2022 Chenlin Meng, Kristy Choi, Jiaming Song, Stefano Ermon

To this end, we propose an analogous score function called the "Concrete score", a generalization of the (Stein) score for discrete settings.

Density Estimation

A General Recipe for Likelihood-free Bayesian Optimization

1 code implementation 27 Jun 2022 Jiaming Song, Lantao Yu, Willie Neiswanger, Stefano Ermon

To extend BO to a broader class of models and utilities, we propose likelihood-free BO (LFBO), an approach based on likelihood-free inference.

Bayesian Optimization

Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

4 code implementations9 Jun 2022 Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, Andrew Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakaş, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartłomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, César Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy Ramirez, Clara E. 
Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodola, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Germán Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo Jaimovitch-López, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Kocoń, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez-Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. 
Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. Dhole, Kevin Gimpel, Kevin Omondi, Kory Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras-Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem Şenel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramírez Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schubert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Cohen, Michael Gu, Michael Ivanitskiy, Michael Starritt, Michael Strube, Michał Swędrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun Peng, Nathan A. Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. 
Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima, Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M. Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, ZiRui Wang, Ziyi Wu

BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models.

Common Sense Reasoning Math +1

BigDL 2.0: Seamless Scaling of AI Pipelines from Laptops to Distributed Cluster

1 code implementation CVPR 2022 Jason Dai, Ding Ding, Dongjie Shi, Shengsheng Huang, Jiao Wang, Xin Qiu, Kai Huang, Guoqiong Song, Yang Wang, Qiyuan Gong, Jiaming Song, Shan Yu, Le Zheng, Yina Chen, Junwei Deng, Ge Song

To address this challenge, we have open-sourced BigDL 2.0 at https://github.com/intel-analytics/BigDL/ under the Apache 2.0 license (combining the original BigDL and Analytics Zoo projects); using BigDL 2.0, users can simply build conventional Python notebooks on their laptops (with possible AutoML support), which can then be transparently accelerated on a single node (with up to 9.6x speedup in our experiments) and seamlessly scaled out to a large cluster (across several hundred servers in real-world use cases).

AutoML Distributed Computing +1

Dual Diffusion Implicit Bridges for Image-to-Image Translation

1 code implementation 16 Mar 2022 Xuan Su, Jiaming Song, Chenlin Meng, Stefano Ermon

Image translation with DDIBs relies on two diffusion models trained independently on each domain, and is a two-step process: DDIBs first obtain latent encodings for source images with the source diffusion model, and then decode such encodings using the target model to construct target images.

Image-to-Image Translation Translation
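The two-step DDIB process described in this snippet can be sketched in miniature. Here each domain's probability-flow ODE is replaced by a simple invertible affine map, a stand-in for a trained diffusion model's deterministic encode/decode, so the example is self-contained; the "domains" and all parameters are purely hypothetical.

```python
import numpy as np

class ToyFlow:
    """Invertible map between a domain and the shared latent space,
    mimicking a diffusion model's deterministic PF-ODE encode/decode."""
    def __init__(self, scale, shift):
        self.scale, self.shift = scale, shift

    def encode(self, x):   # domain sample -> latent (ODE forward, in DDIB)
        return (x - self.shift) / self.scale

    def decode(self, z):   # latent -> domain sample (ODE reverse, in DDIB)
        return self.scale * z + self.shift

def ddib_translate(x_src, src_flow, tgt_flow):
    z = src_flow.encode(x_src)      # step 1: encode with the source model
    return tgt_flow.decode(z)       # step 2: decode with the target model

src = ToyFlow(scale=2.0, shift=-1.0)
tgt = ToyFlow(scale=0.5, shift=3.0)

x = np.array([0.0, 1.0, 2.0])
y = ddib_translate(x, src, tgt)          # translate source -> target
x_back = ddib_translate(y, tgt, src)     # translating back recovers x
```

Because both steps are deterministic and invertible, translating there and back recovers the input exactly, mirroring the cycle-consistency property DDIBs inherit from their ODE formulation.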

LISA: Learning Interpretable Skill Abstractions from Language

1 code implementation 28 Feb 2022 Divyansh Garg, Skanda Vaidyanath, Kuno Kim, Jiaming Song, Stefano Ermon

Learning policies that effectively utilize language instructions in complex, multi-task environments is an important problem in sequential decision-making.

Imitation Learning Quantization +1

Denoising Diffusion Restoration Models

1 code implementation 27 Jan 2022 Bahjat Kawar, Michael Elad, Stefano Ermon, Jiaming Song

Many interesting tasks in image restoration can be cast as linear inverse problems.

Colorization Deblurring +4

IS-COUNT: Large-scale Object Counting from Satellite Images with Covariate-based Importance Sampling

1 code implementation 16 Dec 2021 Chenlin Meng, Enci Liu, Willie Neiswanger, Jiaming Song, Marshall Burke, David Lobell, Stefano Ermon

We show empirically that the proposed framework achieves strong performance on estimating the number of buildings in the United States and Africa, cars in Kenya, brick kilns in Bangladesh, and swimming pools in the U.S., while requiring as few as 0.01% of satellite images compared to an exhaustive approach.

Object Object Counting +3
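The covariate-based importance-sampling idea behind IS-COUNT can be sketched in a few lines: estimate a total count over many regions by sampling only a small fraction of them, with sampling probabilities proportional to a cheap covariate. The covariate, counts, and sample sizes below are made up for illustration, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_regions = 100_000
covariate = rng.gamma(shape=2.0, scale=1.0, size=n_regions)  # cheap proxy signal
counts = rng.poisson(5.0 * covariate)          # true counts (expensive to obtain)
true_total = counts.sum()

# Proposal q: sample regions proportionally to the covariate.
q = covariate / covariate.sum()
n_samples = 500                                # only 0.5% of all regions
idx = rng.choice(n_regions, size=n_samples, p=q)

# Importance-sampling estimator of the total: average of count / probability.
estimate = np.mean(counts[idx] / q[idx])
```

Because the covariate correlates with the counts, the weights `counts/q` have low variance and the estimate lands close to the exhaustive total while only "looking at" 500 regions.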

D2C: Diffusion-Decoding Models for Few-Shot Conditional Generation

1 code implementation NeurIPS 2021 Abhishek Sinha, Jiaming Song, Chenlin Meng, Stefano Ermon

Conditional generative models of high-dimensional images have many applications, but supervision signals from conditions to images can be expensive to acquire.

Conditional Image Generation Image Manipulation +1

Variational Automatic Curriculum Learning for Sparse-Reward Cooperative Multi-Agent Problems

1 code implementation NeurIPS 2021 Jiayu Chen, Yuanxin Zhang, Yuanfan Xu, Huimin Ma, Huazhong Yang, Jiaming Song, Yu Wang, Yi Wu

We motivate our paradigm through a variational perspective, where the learning objective can be decomposed into two terms: task learning on the current task distribution, and curriculum update to a new task distribution.

Multi-agent Reinforcement Learning

Sphere2Vec: Self-Supervised Location Representation Learning on Spherical Surfaces

no code implementations 29 Sep 2021 Gengchen Mai, Yao Xuan, Wenyun Zuo, Yutong He, Stefano Ermon, Jiaming Song, Krzysztof Janowicz, Ni Lao

Location encoding is valuable for a multitude of tasks where both the absolute positions and local contexts (image, text, and other types of metadata) of spatial objects are needed for accurate predictions.

Image Classification Representation Learning +1

SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations

1 code implementation ICLR 2022 Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, Stefano Ermon

The key challenge is balancing faithfulness to the user input (e.g., hand-drawn colored strokes) and realism of the synthesized image.

Denoising Image Generation

CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation

4 code implementations NeurIPS 2021 Yusuke Tashiro, Jiaming Song, Yang Song, Stefano Ermon

In this paper, we propose Conditional Score-based Diffusion models for Imputation (CSDI), a novel time series imputation method that utilizes score-based diffusion models conditioned on observed data.

Audio Synthesis Image Generation +4

IQ-Learn: Inverse soft-Q Learning for Imitation

5 code implementations NeurIPS 2021 Divyansh Garg, Shuvam Chakraborty, Chris Cundy, Jiaming Song, Matthieu Geist, Stefano Ermon

In many sequential decision-making problems (e.g., robotics control, game playing, sequential prediction), human or expert data is available containing useful information about the task.

Atari Games Continuous Control +4

D2C: Diffusion-Denoising Models for Few-shot Conditional Generation

3 code implementations 12 Jun 2021 Abhishek Sinha, Jiaming Song, Chenlin Meng, Stefano Ermon

Conditional generative models of high-dimensional images have many applications, but supervision signals from conditions to images can be expensive to acquire.

Conditional Image Generation Denoising +2

Hybrid Mutual Information Lower-bound Estimators for Representation Learning

no code implementations ICLR Workshop Neural_Compression 2021 Abhishek Sinha, Jiaming Song, Stefano Ermon

We illustrate that with one set of representations, the hybrid approach is able to achieve good performance on multiple downstream tasks such as classification, reconstruction, and generation.

Representation Learning

Negative Data Augmentation

2 code implementations ICLR 2021 Abhishek Sinha, Kumar Ayush, Jiaming Song, Burak Uzkent, Hongxia Jin, Stefano Ermon

Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.

Action Recognition Anomaly Detection +9

H-divergence: A Decision-Theoretic Discrepancy Measure for Two Sample Tests

no code implementations 1 Jan 2021 Shengjia Zhao, Abhishek Sinha, Yutong He, Aidan Perreault, Jiaming Song, Stefano Ermon

Based on ideas from decision theory, we investigate a new class of discrepancies that are based on the optimal decision loss.

Vocal Bursts Valence Prediction

Autoregressive Score Matching

no code implementations NeurIPS 2020 Chenlin Meng, Lantao Yu, Yang Song, Jiaming Song, Stefano Ermon

To increase flexibility, we propose autoregressive conditional score models (AR-CSM) where we parameterize the joint distribution in terms of the derivatives of univariate log-conditionals (scores), which need not be normalized.

Density Estimation Image Denoising +1

Imitation with Neural Density Models

no code implementations NeurIPS 2021 Kuno Kim, Akshat Jindal, Yang Song, Jiaming Song, Yanan Sui, Stefano Ermon

We propose a new framework for Imitation Learning (IL) via density estimation of the expert's occupancy measure followed by Maximum Occupancy Entropy Reinforcement Learning (RL) using the density as a reward.

Density Estimation Imitation Learning +3

Denoising Diffusion Implicit Models

25 code implementations ICLR 2021 Jiaming Song, Chenlin Meng, Stefano Ermon

Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample.

Denoising Image Generation
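The DDIM snippet above describes replacing the long Markov chain of DDPM sampling with a short deterministic sampler. Below is a minimal sketch of the deterministic (eta = 0) DDIM update, in which a trained noise-prediction network is replaced by an analytic "oracle" for a point-mass data distribution at x0 so the example is self-contained; the schedule and step counts are illustrative, not the paper's released configuration.

```python
import numpy as np

def make_alpha_bar(T):
    # Linear beta schedule; the cumulative product gives alpha-bar.
    betas = np.linspace(1e-4, 0.02, T)
    return np.cumprod(1.0 - betas)

def oracle_eps(x_t, a_bar_t, x0):
    # If all data mass sits at x0, inverting the forward process
    # x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * eps gives the optimal eps.
    return (x_t - np.sqrt(a_bar_t) * x0) / np.sqrt(1.0 - a_bar_t)

def ddim_sample(x0, T=1000, n_steps=10, seed=0):
    a_bar = make_alpha_bar(T)
    x = np.random.default_rng(seed).normal(size=x0.shape)  # start from noise
    ts = np.linspace(T - 1, 0, n_steps + 1).astype(int)
    for t, t_prev in zip(ts[:-1], ts[1:]):
        eps = oracle_eps(x, a_bar[t], x0)
        # Predict the clean sample, then jump straight to t_prev
        # without injecting fresh noise (the deterministic DDIM step).
        x0_pred = (x - np.sqrt(1.0 - a_bar[t]) * eps) / np.sqrt(a_bar[t])
        x = np.sqrt(a_bar[t_prev]) * x0_pred + np.sqrt(1.0 - a_bar[t_prev]) * eps
    return x

x0 = np.array([0.5, -1.0, 2.0])
sample = ddim_sample(x0)   # only 10 steps, yet lands near x0
```

With an exact noise predictor, 10 deterministic steps suffice, which is the speedup-by-skipping-steps behavior the paper exploits.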

Privacy Preserving Recalibration under Domain Shift

no code implementations 21 Aug 2020 Rachel Luo, Shengjia Zhao, Jiaming Song, Jonathan Kuck, Stefano Ermon, Silvio Savarese

In an extensive empirical study, we find that our algorithm improves calibration on domain-shift benchmarks under the constraints of differential privacy.

Privacy Preserving

Multi-label Contrastive Predictive Coding

no code implementations NeurIPS 2020 Jiaming Song, Stefano Ermon

We demonstrate that the proposed approach is able to lead to better mutual information estimation, gain empirical improvements in unsupervised representation learning, and beat a current state-of-the-art knowledge distillation method over 10 out of 13 tasks.

Knowledge Distillation Multi-class Classification +4

Experience Replay with Likelihood-free Importance Weights

1 code implementation 23 Jun 2020 Samarth Sinha, Jiaming Song, Animesh Garg, Stefano Ermon

The use of past experiences to accelerate temporal difference (TD) learning of value functions, or experience replay, is a key component in deep reinforcement learning.

Deep Reinforcement Learning OpenAI Gym +2

Robust and On-the-fly Dataset Denoising for Image Classification

no code implementations ECCV 2020 Jiaming Song, Lunjia Hu, Michael Auli, Yann Dauphin, Tengyu Ma

We address this problem by reasoning counterfactually about the loss distribution of examples with uniform random labels had they been trained with the real examples, and using this information to remove noisy examples from the training set.

Classification counterfactual +4

Training Deep Energy-Based Models with f-Divergence Minimization

1 code implementation ICML 2020 Lantao Yu, Yang Song, Jiaming Song, Stefano Ermon

Experimental results demonstrate the superiority of f-EBM over contrastive divergence, as well as the benefits of training EBMs using f-divergences other than KL.

Gaussianization Flows

3 code implementations 4 Mar 2020 Chenlin Meng, Yang Song, Jiaming Song, Stefano Ermon

Iterative Gaussianization is a fixed-point iteration procedure that can transform any continuous random vector into a Gaussian one.
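One iteration of the fixed-point procedure this snippet describes can be sketched from the general recipe (rank-based marginal gaussianization followed by a random rotation); this is not the paper's learned flow parameterization, just an illustration of the iteration it builds on.

```python
import numpy as np
from statistics import NormalDist

# Vectorized inverse standard-normal CDF from the standard library.
_inv_cdf = np.vectorize(NormalDist().inv_cdf)

def marginal_gaussianize(x):
    # Map each dimension through its empirical CDF, then through the inverse
    # Gaussian CDF, making every marginal approximately N(0, 1).
    n = x.shape[0]
    ranks = x.argsort(axis=0).argsort(axis=0) + 1
    return _inv_cdf(ranks / (n + 1))

def gaussianization_step(x, rng):
    z = marginal_gaussianize(x)
    # A random orthogonal rotation mixes dimensions so the next marginal
    # step can remove remaining non-Gaussian structure.
    q, _ = np.linalg.qr(rng.normal(size=(x.shape[1], x.shape[1])))
    return z @ q

rng = np.random.default_rng(0)
x = rng.uniform(size=(5000, 3)) ** 2   # skewed, clearly non-Gaussian data
z = x
for _ in range(5):
    z = gaussianization_step(z, rng)
```

After each iteration the marginals are (near) standard normal by construction, and repeated rotations progressively remove the remaining dependence.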

Permutation Invariant Graph Generation via Score-Based Generative Modeling

1 code implementation 2 Mar 2020 Chenhao Niu, Yang Song, Jiaming Song, Shengjia Zhao, Aditya Grover, Stefano Ermon

In particular, we design a permutation equivariant, multi-channel graph neural network to model the gradient of the data distribution at the input graph (a.k.a. the score function).

Graph Generation Graph Neural Network
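The "score function" named in this snippet is the gradient of the log data density with respect to the input (not the Fisher score with respect to parameters). A quick numerical sanity check for a standard normal, whose score is known in closed form, d/dx log p(x) = -x:

```python
import numpy as np

def log_p(x):
    # Log-density of the standard normal distribution.
    return -0.5 * x**2 - 0.5 * np.log(2.0 * np.pi)

def numerical_score(x, h=1e-5):
    # Central finite difference of the log-density: an estimate of
    # d/dx log p(x), i.e. the score.
    return (log_p(x + h) - log_p(x - h)) / (2.0 * h)

x = np.linspace(-3.0, 3.0, 7)
score = numerical_score(x)   # should closely match -x
```

Score-based models learn a network approximating this gradient field for the data distribution, which is what the snippet's graph neural network does at the level of graphs.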

Bridging the Gap Between $f$-GANs and Wasserstein GANs

1 code implementation 22 Oct 2019 Jiaming Song, Stefano Ermon

Generative adversarial networks (GANs) have enjoyed much success in learning high-dimensional distributions.

Image Generation

Understanding the Limitations of Variational Mutual Information Estimators

1 code implementation ICLR 2020 Jiaming Song, Stefano Ermon

Variational approaches based on neural networks are showing promise for estimating mutual information (MI) between high dimensional variables.
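One limitation this paper analyzes is that critic-based lower bounds such as InfoNCE/CPC can never exceed log(batch size), so they must underestimate large MI. A small demonstration with correlated Gaussians, where the true MI and even the optimal critic are available in closed form; the batch size and correlation below are chosen only to make the gap visible.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.999, 8
true_mi = -0.5 * np.log(1.0 - rho**2)   # ~3.11 nats for rho = 0.999

x = rng.normal(size=n)
y = rho * x + np.sqrt(1.0 - rho**2) * rng.normal(size=n)

# Optimal critic f(x, y) = log p(y|x) - log p(y), up to constants that
# cancel inside the estimator.
f = (-0.5 * (y[None, :] - rho * x[:, None]) ** 2 / (1.0 - rho**2)
     + 0.5 * y[None, :] ** 2)

# InfoNCE estimate: diagonal (paired) scores against each row's average
# over all candidates; each term is at most log(n) by construction.
mi_nce = np.mean(np.diag(f) - np.log(np.mean(np.exp(f), axis=1)))
```

Even with the optimal critic, `mi_nce` is capped at log(8) ≈ 2.08 nats, below the true MI of ≈ 3.11 nats, illustrating the bias the paper studies.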

Domain Adaptive Imitation Learning

1 code implementation ICML 2020 Kuno Kim, Yihong Gu, Jiaming Song, Shengjia Zhao, Stefano Ermon

We formalize the Domain Adaptive Imitation Learning (DAIL) problem, which is a unified framework for imitation learning in the presence of viewpoint, embodiment, and dynamics mismatch.

Imitation Learning

Cross Domain Imitation Learning

no code implementations 25 Sep 2019 Kun Ho Kim, Yihong Gu, Jiaming Song, Shengjia Zhao, Stefano Ermon

Informally, CDIL is the process of learning how to perform a task optimally, given demonstrations of the task in a distinct domain.

Imitation Learning

Multi-Agent Adversarial Inverse Reinforcement Learning

1 code implementation 30 Jul 2019 Lantao Yu, Jiaming Song, Stefano Ermon

Reinforcement learning agents are prone to undesired behaviors due to reward mis-specification.

reinforcement-learning Reinforcement Learning +1

Bias Correction of Learned Generative Models using Likelihood-Free Importance Weighting

2 code implementations NeurIPS 2019 Aditya Grover, Jiaming Song, Alekh Agarwal, Kenneth Tran, Ashish Kapoor, Eric Horvitz, Stefano Ermon

A standard technique to correct this bias is importance sampling, where samples from the model are weighted by the likelihood ratio under model and true distributions.

Data Augmentation
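The likelihood-free variant of the importance weighting described in this snippet replaces the (usually intractable) likelihood ratio with a classifier-based estimate: a probabilistic classifier trained to distinguish model samples from data samples yields a density-ratio estimate r(x) = D(x)/(1 - D(x)). The 1-D Gaussians and the tiny hand-rolled logistic regression below are illustrative, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
x_model = rng.normal(0.0, 1.0, n)   # biased generative model
x_data = rng.normal(1.0, 1.0, n)    # true data distribution

# Logistic regression D(x) = sigmoid(w*x + b); label 1 = data, 0 = model.
X = np.concatenate([x_model, x_data])
labels = np.concatenate([np.zeros(n), np.ones(n)])
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))
    w -= 0.5 * np.mean((p - labels) * X)   # gradient descent on log-loss
    b -= 0.5 * np.mean(p - labels)

# Importance weights on model samples: D/(1 - D) = exp(logit).
weights = np.exp(w * x_model + b)
# Self-normalized estimate of E_data[x] using only model samples.
weighted_mean = np.sum(weights * x_model) / np.sum(weights)
plain_mean = x_model.mean()   # biased estimate of E_data[x] = 1
```

The reweighted estimate lands near the data mean of 1.0, while the unweighted model average stays near 0, which is the bias correction the paper's estimator performs with a learned classifier.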

Better Generalization with On-the-fly Dataset Denoising

no code implementations ICLR 2019 Jiaming Song, Tengyu Ma, Michael Auli, Yann Dauphin

Memorization in over-parameterized neural networks can severely hurt generalization in the presence of mislabeled examples.

Denoising Memorization

Bias Correction of Learned Generative Models via Likelihood-free Importance Weighting

no code implementations ICLR Workshop DeepGenStruct 2019 Aditya Grover, Jiaming Song, Ashish Kapoor, Kenneth Tran, Alekh Agarwal, Eric Horvitz, Stefano Ermon

A standard technique to correct this bias is to importance-weight samples from the model by the likelihood ratio under the model and true distributions.

Data Augmentation

Learning Controllable Fair Representations

3 code implementations 11 Dec 2018 Jiaming Song, Pratyusha Kalluri, Aditya Grover, Shengjia Zhao, Stefano Ermon

Learning data representations that are transferable and are fair with respect to certain protected attributes is crucial to reducing unfair decisions while preserving the utility of the data.

Fairness

Multi-Agent Generative Adversarial Imitation Learning

1 code implementation NeurIPS 2018 Jiaming Song, Hongyu Ren, Dorsa Sadigh, Stefano Ermon

Imitation learning algorithms can be used to learn a policy from expert demonstrations without access to a reward signal.

Imitation Learning reinforcement-learning +2

The Information Autoencoding Family: A Lagrangian Perspective on Latent Variable Generative Models

2 code implementations 18 Jun 2018 Shengjia Zhao, Jiaming Song, Stefano Ermon

A large number of objectives have been proposed to train latent variable generative models.

Adversarial Constraint Learning for Structured Prediction

1 code implementation 27 May 2018 Hongyu Ren, Russell Stewart, Jiaming Song, Volodymyr Kuleshov, Stefano Ermon

Constraint-based learning reduces the burden of collecting labels by having users specify general properties of structured outputs, such as constraints imposed by physical laws.

Pose Estimation Structured Prediction +3

Accelerating Natural Gradient with Higher-Order Invariance

2 code implementations ICML 2018 Yang Song, Jiaming Song, Stefano Ermon

An appealing property of the natural gradient is that it is invariant to arbitrary differentiable reparameterizations of the model.

Reinforcement Learning

An Empirical Analysis of Proximal Policy Optimization with Kronecker-factored Natural Gradients

no code implementations 17 Jan 2018 Jiaming Song, Yuhuai Wu

In this technical report, we consider an approach that combines the PPO objective and K-FAC natural gradient optimization, which we call PPOKFAC.

Learning Hierarchical Features from Deep Generative Models

no code implementations ICML 2017 Shengjia Zhao, Jiaming Song, Stefano Ermon

In this paper, we prove that hierarchical latent variable models do not take advantage of the hierarchical structure when trained with existing variational methods, and provide some limitations on the kind of features existing models can learn.

A-NICE-MC: Adversarial Training for MCMC

3 code implementations NeurIPS 2017 Jiaming Song, Shengjia Zhao, Stefano Ermon

We propose A-NICE-MC, a novel method to train flexible parametric Markov chain kernels to produce samples with desired properties.

InfoVAE: Information Maximizing Variational Autoencoders

6 code implementations 7 Jun 2017 Shengjia Zhao, Jiaming Song, Stefano Ermon

A key advance in learning generative models is the use of amortized inference distributions that are jointly trained with the models.

InfoGAIL: Interpretable Imitation Learning from Visual Demonstrations

4 code implementations NeurIPS 2017 Yunzhu Li, Jiaming Song, Stefano Ermon

The goal of imitation learning is to mimic expert behavior without access to an explicit reward signal.

Imitation Learning

On the Limits of Learning Representations with Label-Based Supervision

no code implementations 7 Mar 2017 Jiaming Song, Russell Stewart, Shengjia Zhao, Stefano Ermon

Advances in neural network based classifiers have transformed automatic feature learning from a pipe dream of stronger AI to a routine and expected property of practical systems.

Representation Learning Transfer Learning

Towards Deeper Understanding of Variational Autoencoding Models

2 code implementations 28 Feb 2017 Shengjia Zhao, Jiaming Song, Stefano Ermon

We propose a new family of optimization criteria for variational auto-encoding models, generalizing the standard evidence lower bound.

Learning Hierarchical Features from Generative Models

3 code implementations 27 Feb 2017 Shengjia Zhao, Jiaming Song, Stefano Ermon

In this paper, we prove that hierarchical latent variable models do not take advantage of the hierarchical structure when trained with existing variational methods, and provide some limitations on the kind of features existing models can learn.

Factored Temporal Sigmoid Belief Networks for Sequence Learning

no code implementations 22 May 2016 Jiaming Song, Zhe Gan, Lawrence Carin

Deep conditional generative models are developed to simultaneously learn the temporal dependencies of multiple sequences.

General Classification

Max-Margin Nonparametric Latent Feature Models for Link Prediction

no code implementations 24 Feb 2016 Jun Zhu, Jiaming Song, Bei Chen

Our approach attempts to unite the ideas of max-margin learning and Bayesian nonparametrics to discover discriminative latent features for link prediction.

Link Prediction Variational Inference

Discriminative Nonparametric Latent Feature Relational Models with Data Augmentation

no code implementations 7 Dec 2015 Bei Chen, Ning Chen, Jun Zhu, Jiaming Song, Bo Zhang

We present a discriminative nonparametric latent feature relational model (LFRM) for link prediction to automatically infer the dimensionality of latent features.

Bayesian Inference Data Augmentation +1
