no code implementations • 3 Jun 2018 • Barak Oshri, Annie Hu, Peter Adelson, Xiao Chen, Pascaline Dupas, Jeremy Weinstein, Marshall Burke, David Lobell, Stefano Ermon
Our best models predict infrastructure quality with AUROC scores of 0.881 on Electricity, 0.862 on Sewerage, 0.739 on Piped Water, and 0.786 on Roads using Landsat 8.
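For readers unfamiliar with the metric: AUROC is the area under the ROC curve, where 1.0 means a perfect ranking of positives over negatives and 0.5 is chance level. A minimal sketch of computing it with scikit-learn, using hypothetical labels and scores rather than the paper's data:

```python
# Hypothetical ground-truth labels (e.g., has-electricity) and model scores.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.2, 0.8, 0.9]
print(roc_auc_score(y_true, y_score))  # fraction of correctly ranked pos/neg pairs
```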
no code implementations • NeurIPS 2018 • Rui Shu, Hung H. Bui, Shengjia Zhao, Mykel J. Kochenderfer, Stefano Ermon
In this paper, we leverage the fact that VAEs rely on amortized inference and propose techniques for amortized inference regularization (AIR) that control the smoothness of the inference model.
no code implementations • 5 Apr 2018 • Aditya Grover, Ramki Gummadi, Miguel Lazaro-Gredilla, Dale Schuurmans, Stefano Ermon
Learning latent variable models with stochastic variational inference is challenging when the approximate posterior is far from the true posterior, due to high variance in the gradient estimates.
no code implementations • 29 Mar 2018 • Aditya Grover, Todor Markov, Peter Attia, Norman Jin, Nicholas Perkins, Bryan Cheong, Michael Chen, Zi Yang, Stephen Harris, William Chueh, Stefano Ermon
We propose a generalization of the best arm identification problem in stochastic multi-armed bandits (MAB) to the setting where every pull of an arm is associated with delayed feedback.
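For context, the classical (non-delayed) best arm identification problem can be solved with successive elimination; the sketch below shows that baseline under stated assumptions, not the paper's delayed-feedback generalization:

```python
import numpy as np

def successive_elimination(pull, n_arms, rounds=1000, delta=0.05):
    """Classical best-arm identification: pull every surviving arm each
    round and eliminate arms whose upper confidence bound drops below the
    best lower confidence bound. `pull(a)` returns a reward in [0, 1]."""
    active = list(range(n_arms))
    counts, means = np.zeros(n_arms), np.zeros(n_arms)
    for t in range(1, rounds + 1):
        for a in active:
            r = pull(a)
            counts[a] += 1
            means[a] += (r - means[a]) / counts[a]  # running mean
        rad = np.sqrt(np.log(4 * t**2 / delta) / (2 * counts[active]))
        best_lcb = np.max(means[active] - rad)
        active = [a for i, a in enumerate(active) if means[a] + rad[i] >= best_lcb]
        if len(active) == 1:
            break
    return max(active, key=lambda a: means[a])

rng = np.random.default_rng(0)
true_means = [0.2, 0.5, 0.8]
print(successive_elimination(lambda a: rng.binomial(1, true_means[a]), 3))  # likely 2
```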
no code implementations • 10 Dec 2017 • Stephan Eismann, Stefan Bartzsch, Stefano Ermon
Computational design optimization in fluid dynamics usually requires numerically solving non-linear partial differential equations.
no code implementations • 21 Nov 2017 • Daniel Levy, Stefano Ermon
Our method is applicable to both discrete and continuous action spaces, whereas competing pathwise methods are limited to the latter.
no code implementations • NeurIPS 2017 • Volodymyr Kuleshov, Stefano Ermon
Many problems in machine learning are naturally expressed in the language of undirected graphical models.
no code implementations • 15 Nov 2017 • Huaiyang Zhong, Xiaocheng Li, David Lobell, Stefano Ermon, Margaret L. Brandeau
Eradicating hunger and malnutrition is a key development goal of the 21st century.
no code implementations • 10 Nov 2017 • Anthony Perez, Christopher Yeh, George Azzari, Marshall Burke, David Lobell, Stefano Ermon
Obtaining detailed and reliable data about local economic livelihoods in developing countries is expensive, and data are consequently scarce.
no code implementations • 11 Jul 2017 • Stephen Mussmann, Daniel Levy, Stefano Ermon
Inference in log-linear models scales linearly with the size of the output space in the worst case.
no code implementations • 7 Mar 2017 • Jiaming Song, Russell Stewart, Shengjia Zhao, Stefano Ermon
Advances in neural network based classifiers have transformed automatic feature learning from a pipe dream of stronger AI to a routine and expected property of practical systems.
no code implementations • 13 Jul 2016 • Volodymyr Kuleshov, Stefano Ermon
Assessing uncertainty is an important step towards ensuring the safety and reliability of machine learning systems.
no code implementations • NeurIPS 2016 • Yexiang Xue, Zhiyuan Li, Stefano Ermon, Carla P. Gomes, Bart Selman
Arising from many applications at the intersection of decision making and machine learning, Marginal Maximum A Posteriori (Marginal MAP) Problems unify the two main classes of inference, namely maximization (optimization) and marginal inference (counting), and are believed to have higher complexity than both of them.
no code implementations • 18 Sep 2016 • Russell Stewart, Stefano Ermon
In many machine learning applications, labeled data is scarce and obtaining more labels is expensive.
no code implementations • 17 Aug 2015 • Yexiang Xue, Stefano Ermon, Ronan Le Bras, Carla P. Gomes, Bart Selman
The ability to represent complex high dimensional probability distributions in a compact form is one of the key insights in the field of graphical models.
no code implementations • 26 May 2016 • Jonathan Ho, Jayesh K. Gupta, Stefano Ermon
In imitation learning, an agent learns how to behave in an environment with an unknown cost function by mimicking expert demonstrations.
no code implementations • 5 Oct 2015 • Lun-Kai Hsu, Tudor Achim, Stefano Ermon
We show that information projections can be combined with random projections to obtain provable guarantees on the quality of the approximation obtained, regardless of the complexity of the original model.
no code implementations • 27 Nov 2014 • Stefano Ermon, Ronan Le Bras, Santosh K. Suram, John M. Gregoire, Carla Gomes, Bart Selman, Robert B. van Dover
Identifying important components or factors in large amounts of noisy data is a key problem in machine learning and data mining.
no code implementations • 26 Sep 2013 • Stefano Ermon, Carla P. Gomes, Ashish Sabharwal, Bart Selman
Many probabilistic inference tasks involve summations over exponentially large sets.
no code implementations • 24 Jul 2018 • Rishi Sharma, Shane Barratt, Stefano Ermon, Vijay Pande
We demonstrate that this strategy is key to obtaining state-of-the-art results in image generation.
no code implementations • 19 Sep 2018 • Evan Sheehan, Burak Uzkent, Chenlin Meng, Zhongyi Tang, Marshall Burke, David Lobell, Stefano Ermon
Despite recent progress in computer vision, fine-grained interpretation of satellite images remains challenging because of a lack of labeled training data.
no code implementations • NeurIPS 2016 • Shengjia Zhao, Enze Zhou, Ashish Sabharwal, Stefano Ermon
A key challenge in sequential decision problems is to determine how many samples are needed for an agent to make reliable decisions with good probabilistic guarantees.
no code implementations • NeurIPS 2016 • Aditya Grover, Stefano Ermon
We provide a new approach for learning latent variable models based on optimizing our new bounds on the log-likelihood.
no code implementations • NeurIPS 2013 • Stefano Ermon, Carla P. Gomes, Ashish Sabharwal, Bart Selman
We consider the problem of sampling from a probability distribution defined over a high-dimensional discrete set, specified for instance by a graphical model.
no code implementations • NeurIPS 2012 • Stefano Ermon, Ashish Sabharwal, Bart Selman, Carla P. Gomes
Given a probabilistic graphical model, its density of states is a function that, for any likelihood value, gives the number of configurations with that probability.
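The definition is easy to make concrete by brute force on a tiny model; here is a hypothetical three-spin Ising chain, enumerated exhaustively (which is exactly what the paper's method is designed to avoid at scale):

```python
import collections
import itertools

import numpy as np

# Unnormalized weight of a configuration s in {-1, +1}^3 for a chain
# with unit couplings between neighboring spins.
def weight(s, J=1.0):
    return np.exp(J * (s[0] * s[1] + s[1] * s[2]))

# Density of states: for each weight (likelihood) level, the number of
# configurations attaining it.
dos = collections.Counter(round(weight(s), 6)
                          for s in itertools.product([-1, 1], repeat=3))
print(dict(dos))  # weight level -> count: {7.389056: 2, 1.0: 4, 0.135335: 2}
```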
no code implementations • NeurIPS 2011 • Stefano Ermon, Carla P. Gomes, Ashish Sabharwal, Bart Selman
We propose a novel Adaptive Markov Chain Monte Carlo algorithm to compute the partition function.
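As a point of reference, a naive importance-sampling estimator of the partition function looks like the sketch below (a toy 1D energy with a Gaussian proposal; the paper's adaptive MCMC targets harder, multimodal settings where such simple estimators have high variance):

```python
import numpy as np

rng = np.random.default_rng(0)
E = lambda x: x**4 - 2 * x**2                 # toy unnormalized energy
xs = rng.normal(0.0, 2.0, size=200_000)       # proposal q = N(0, 2^2)
log_q = -0.5 * (xs / 2.0) ** 2 - np.log(2.0 * np.sqrt(2.0 * np.pi))
Z_hat = np.mean(np.exp(-E(xs) - log_q))       # E_q[exp(-E(x)) / q(x)]
print(Z_hat)                                  # estimate of Z = integral of exp(-E(x))
```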
no code implementations • ICML 2017 • Shengjia Zhao, Jiaming Song, Stefano Ermon
In this paper, we prove that hierarchical latent variable models do not take advantage of the hierarchical structure when trained with existing variational methods, and provide some limitations on the kind of features existing models can learn.
no code implementations • ICLR 2018 • Daniel Levy, Danlu Chan, Stefano Ermon
In this work, we present LSH Softmax, a method to perform sub-linear learning and inference of the softmax layer in the deep learning setting.
no code implementations • ICLR 2018 • Volodymyr Kuleshov, Shantanu Thakoor, Tingfung Lau, Stefano Ermon
Modern machine learning algorithms are often susceptible to adversarial examples — maliciously crafted inputs that are undetectable by humans but that fool the algorithm into producing undesirable behavior.
no code implementations • ICLR 2018 • Shengjia Zhao, Jiaming Song, Stefano Ermon
A variety of learning objectives have been recently proposed for training generative models.
no code implementations • 26 Dec 2018 • Aditya Grover, Stefano Ermon
We treat the low-dimensional projections as noisy latent representations of an autoencoder and directly learn both the acquisition (i.e., encoding) and amortized recovery (i.e., decoding) procedures.
no code implementations • 27 Feb 2019 • Rui Shu, Hung H. Bui, Jay Whang, Stefano Ermon
The recognition network in deep latent variable models such as variational autoencoders (VAEs) relies on amortized inference for efficient posterior approximation that can scale up to large datasets.
no code implementations • 13 Feb 2019 • Anthony Perez, Swetava Ganguli, Stefano Ermon, George Azzari, Marshall Burke, David Lobell
Obtaining reliable data describing local poverty metrics at a granularity that is informative to policy-makers requires expensive and logistically difficult surveys, particularly in the developing world.
no code implementations • 20 Apr 2019 • Xiao Chen, Thomas Navidi, Stefano Ermon, Ram Rajagopal
Distributed devices such as mobile phones can produce and store large amounts of data that can enhance machine learning models; however, this data may contain private information specific to the data owner that prevents the release of the data.
no code implementations • 5 May 2019 • Evan Sheehan, Chenlin Meng, Matthew Tan, Burak Uzkent, Neal Jean, David Lobell, Marshall Burke, Stefano Ermon
Progress on the UN Sustainable Development Goals (SDGs) is hampered by a persistent lack of data regarding key social, environmental, and economic indicators, particularly in developing countries.
no code implementations • 4 May 2019 • Wenjie Hu, Jay Harshadbhai Patel, Zoe-Alanah Robert, Paul Novosad, Samuel Asher, Zhongyi Tang, Marshall Burke, David Lobell, Stefano Ermon
Millions of people worldwide are absent from their country's census.
no code implementations • ICLR 2019 • Jun-Ting Hsieh, Shengjia Zhao, Stephan Eismann, Lucia Mirabella, Stefano Ermon
Partial differential equations (PDEs) are widely used across the physical and computational sciences.
no code implementations • 21 Oct 2019 • Jiaming Song, Yang Song, Stefano Ermon
Based on this insight, we propose to exploit in-batch dependencies for OoD detection.
no code implementations • 30 Nov 2019 • Y. Alex Kolchinski, Sharon Zhou, Shengjia Zhao, Mitchell Gordon, Stefano Ermon
Generative models have made immense progress in recent years, particularly in their ability to generate high quality images.
no code implementations • 5 Feb 2020 • Kumar Ayush, Burak Uzkent, Marshall Burke, David Lobell, Stefano Ermon
Accurate local-level poverty measurement is an essential task for governments and humanitarian organizations to track the progress towards improving livelihoods and distribute scarce resources.
no code implementations • 11 Apr 2020 • Han Lin Aung, Burak Uzkent, Marshall Burke, David Lobell, Stefano Ermon
Satellite imagery offers a scalable and cost-effective way to perform farm parcel delineation and collect this valuable data.
no code implementations • 7 Jun 2020 • Kumar Ayush, Burak Uzkent, Kumar Tanmay, Marshall Burke, David Lobell, Stefano Ermon
The combination of high-resolution satellite imagery and machine learning has proven useful in many sustainability-related tasks, including poverty prediction, infrastructure measurement, and forest monitoring.
no code implementations • ICML 2020 • Shengjia Zhao, Tengyu Ma, Stefano Ermon
We show that calibration for individual samples is possible in the regression setup if the predictions are randomized, i.e., outputting randomized credible intervals.
1 code implementation • 18 Jun 2020 • Shengjia Zhao, Christopher Yeh, Stefano Ermon
We consider the problem of estimating confidence intervals for the mean of a random variable, where the goal is to produce the smallest possible interval for a given number of samples.
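As a baseline for the interval-size objective, the classical Hoeffding construction for bounded variables is easy to state; a sketch (the paper's contribution is producing intervals smaller than this):

```python
import numpy as np

def hoeffding_interval(samples, delta=0.05, lo=0.0, hi=1.0):
    """Two-sided (1 - delta) Hoeffding confidence interval for the mean
    of a random variable bounded in [lo, hi]."""
    n = len(samples)
    eps = (hi - lo) * np.sqrt(np.log(2.0 / delta) / (2.0 * n))
    m = float(np.mean(samples))
    return m - eps, m + eps

rng = np.random.default_rng(0)
print(hoeffding_interval(rng.uniform(size=1000)))  # contains 0.5 with high probability
```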
no code implementations • 29 Jun 2020 • Anusri Pampari, Stefano Ermon
A probabilistic model is said to be calibrated if its predicted probabilities match the corresponding empirical frequencies.
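A standard diagnostic for this definition is binned calibration error, which compares the mean predicted probability to the empirical frequency inside each confidence bin; a minimal sketch (a generic estimator, not the paper's specific one):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Binned ECE for binary predictions: weighted average gap between
    predicted probability and empirical frequency per bin."""
    probs, labels = np.asarray(probs, float), np.asarray(labels, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (probs > lo) & (probs <= hi)
        if mask.any():
            ece += mask.mean() * abs(probs[mask].mean() - labels[mask].mean())
    return ece

print(expected_calibration_error([0.1, 0.9, 0.8, 0.3], [0, 1, 1, 0]))
```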
no code implementations • NeurIPS 2020 • Jiaming Song, Stefano Ermon
We demonstrate that the proposed approach is able to lead to better mutual information estimation, gain empirical improvements in unsupervised representation learning, and beat a current state-of-the-art knowledge distillation method on 10 out of 13 tasks.
no code implementations • 21 Aug 2020 • Rachel Luo, Shengjia Zhao, Jiaming Song, Jonathan Kuck, Stefano Ermon, Silvio Savarese
In an extensive empirical study, we find that our algorithm improves calibration on domain-shift benchmarks under the constraints of differential privacy.
no code implementations • 1 Jan 2021 • Tung Nguyen, Rui Shu, Tuan Pham, Hung Bui, Stefano Ermon
High-dimensional observations are a major challenge in the application of model-based reinforcement learning (MBRL) to real-world environments.
no code implementations • 1 Jan 2021 • Shengjia Zhao, Abhishek Sinha, Yutong He, Aidan Perreault, Jiaming Song, Stefano Ermon
Based on ideas from decision theory, we investigate a new class of discrepancies that are based on the optimal decision loss.
no code implementations • 1 Jan 2021 • Laëtitia Shao, Yang Song, Stefano Ermon
Although deep neural networks are effective on supervised learning tasks, they have been shown to be brittle.
no code implementations • 5 Oct 2020 • Laëtitia Shao, Yang Song, Stefano Ermon
From this observation, we develop a detection criteria for samples on which a classifier is likely to fail at test time.
no code implementations • NeurIPS 2021 • Kuno Kim, Akshat Jindal, Yang Song, Jiaming Song, Yanan Sui, Stefano Ermon
We propose a new framework for Imitation Learning (IL) via density estimation of the expert's occupancy measure followed by Maximum Occupancy Entropy Reinforcement Learning (RL) using the density as a reward.
no code implementations • NeurIPS 2020 • Chenlin Meng, Lantao Yu, Yang Song, Jiaming Song, Stefano Ermon
To increase flexibility, we propose autoregressive conditional score models (AR-CSM) where we parameterize the joint distribution in terms of the derivatives of univariate log-conditionals (scores), which need not be normalized.
no code implementations • 15 Nov 2020 • Shengjia Zhao, Stefano Ermon
Decision makers often need to rely on imperfect probabilistic forecasts.
no code implementations • 20 Nov 2020 • Shuvam Chakraborty, Burak Uzkent, Kumar Ayush, Kumar Tanmay, Evan Sheehan, Stefano Ermon
Finally, we improve standard ImageNet pre-training by 1-3% by tuning available models on our subsets and pre-training on a dataset filtered from a larger scale dataset.
no code implementations • 30 Dec 2020 • Chris Cundy, Rishi Desai, Stefano Ermon
We consider the task of training a policy that maximizes reward while minimizing disclosure of certain sensitive state variables through the actions.
no code implementations • 15 Feb 2021 • Berivan Isik, Kristy Choi, Xin Zheng, Tsachy Weissman, Stefano Ermon, H.-S. Philip Wong, Armin Alaghi
Compression and efficient storage of neural network (NN) parameters is critical for applications that run on resource-constrained devices.
no code implementations • 22 Feb 2021 • Rachel Luo, Aadyot Bhatnagar, Yu Bai, Shengjia Zhao, Huan Wang, Caiming Xiong, Silvio Savarese, Stefano Ermon, Edward Schmerling, Marco Pavone
In this work, we propose the local calibration error (LCE) to span the gap between average and individual reliability.
no code implementations • ICLR 2021 • Chenlin Meng, Jiaming Song, Yang Song, Shengjia Zhao, Stefano Ermon
While autoregressive models excel at image compression, their sample quality is often lacking.
no code implementations • NeurIPS 2021 • Mike Wu, Noah Goodman, Stefano Ermon
In traditional software programs, it is easy to trace program logic from variables back to input, apply assertion statements to block erroneous behavior, and compose programs together.
no code implementations • 10 Jul 2021 • Hongwei Wang, Lantao Yu, Zhangjie Cao, Stefano Ermon
Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions, which is essential for understanding physical, social, and team-play systems.
no code implementations • NeurIPS 2021 • Shengjia Zhao, Michael P. Kim, Roshni Sahoo, Tengyu Ma, Stefano Ermon
In this work, we introduce a new notion -- decision calibration -- that requires the predicted distribution and true distribution to be "indistinguishable" to a set of downstream decision-makers.
no code implementations • 29 Sep 2021 • Fan-Yun Sun, Jonathan Kuck, Hao Tang, Stefano Ermon
Several indices used in a factor graph data structure can be permuted without changing the underlying probability distribution.
no code implementations • 29 Sep 2021 • Gengchen Mai, Yao Xuan, Wenyun Zuo, Yutong He, Stefano Ermon, Jiaming Song, Krzysztof Janowicz, Ni Lao
Location encoding is valuable for a multitude of tasks where both the absolute positions and local contexts (image, text, and other types of metadata) of spatial objects are needed for accurate predictions.
no code implementations • 29 Sep 2021 • Shengjia Zhao, Yusuke Tashiro, Danny Tse, Stefano Ermon
Accurate uncertainty quantification is a key building block of trustworthy machine learning systems.
no code implementations • ICLR 2022 • Viraj Mehta, Biswajit Paria, Jeff Schneider, Willie Neiswanger, Stefano Ermon
In particular, we leverage ideas from Bayesian optimal experimental design to guide the selection of state-action queries for efficient learning.
no code implementations • ICLR 2022 • Shengjia Zhao, Abhishek Sinha, Yutong He, Aidan Perreault, Jiaming Song, Stefano Ermon
Measuring the discrepancy between two probability distributions is a fundamental problem in machine learning and statistics.
no code implementations • 29 Sep 2021 • Willie Neiswanger, Lantao Yu, Shengjia Zhao, Chenlin Meng, Stefano Ermon
For special cases of the loss and design space, we develop gradient-based methods to efficiently optimize our proposed family of acquisition functions, and demonstrate that the resulting BO procedure shows strong empirical performance on a diverse set of optimization tasks.
no code implementations • 29 Sep 2021 • Rui Shu, Stefano Ermon
In this work, we consider the task of image generative modeling with variational autoencoders and posit that the nature of high-dimensional image data distributions poses an intrinsic challenge.
no code implementations • NeurIPS 2021 • Lantao Yu, Jiaming Song, Yang Song, Stefano Ermon
Energy-based models (EBMs) offer flexible distribution parametrization.
no code implementations • NeurIPS 2021 • Chenlin Meng, Yang Song, Wenzhe Li, Stefano Ermon
By leveraging Tweedie's formula on higher order moments, we generalize denoising score matching to estimate higher order derivatives.
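Tweedie's formula says the posterior mean of a signal observed under Gaussian noise equals the observation plus the noise variance times the score of the noisy marginal; a quick numerical check in the fully Gaussian case, where both sides are available in closed form:

```python
# x ~ N(mu, s^2), observation y = x + N(0, sigma^2).
mu, s, sigma = 1.0, 2.0, 0.5
y = 2.3

var = s**2 + sigma**2                 # marginal variance of y
score = -(y - mu) / var               # d/dy log N(y; mu, var)
tweedie = y + sigma**2 * score        # Tweedie posterior mean

exact = mu + (s**2 / var) * (y - mu)  # closed-form Gaussian posterior mean
print(tweedie, exact)                 # both ~2.2235: the two expressions agree
```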
no code implementations • NeurIPS 2021 • Roshni Sahoo, Shengjia Zhao, Alyssa Chen, Stefano Ermon
We propose a stronger notion of calibration called threshold calibration, which is exactly the condition required to ensure that decision loss is predicted accurately for threshold decisions.
no code implementations • 27 Sep 2018 • Kristy Choi, Kedar Tatwawadi, Tsachy Weissman, Stefano Ermon
For reliable transmission across a noisy communication channel, classical results from information theory show that it is asymptotically optimal to separate out the source and channel coding processes.
no code implementations • ICLR Workshop DeepGenStruct 2019 • Aditya Grover, Jiaming Song, Ashish Kapoor, Kenneth Tran, Alekh Agarwal, Eric Horvitz, Stefano Ermon
A standard technique to correct this bias is to importance-weight samples from the model by the likelihood ratio under the model and true distributions.
no code implementations • 25 Sep 2019 • Shengjia Zhao, Yang Song, Stefano Ermon
Our defense draws inspiration from differential privacy, and is based on intentionally adding noise to the classifier's outputs to limit the attacker's knowledge about the parameters.
no code implementations • 25 Sep 2019 • Kun Ho Kim, Yihong Gu, Jiaming Song, Shengjia Zhao, Stefano Ermon
Informally, CDIL is the process of learning how to perform a task optimally, given demonstrations of the task in a distinct domain.
no code implementations • ICLR Workshop Neural_Compression 2021 • Abhishek Sinha, Jiaming Song, Stefano Ermon
We illustrate that with one set of representations, the hybrid approach is able to achieve good performance on multiple downstream tasks such as classification, reconstruction, and generation.
no code implementations • NeurIPS Workshop DL-IG 2020 • Berivan Isik, Kristy Choi, Xin Zheng, H.-S. Philip Wong, Stefano Ermon, Tsachy Weissman, Armin Alaghi
Efficient compression and storage of neural network (NN) parameters is critical for resource-constrained, downstream machine learning applications.
no code implementations • 7 Dec 2021 • Lantao Yu, Yujia Jin, Stefano Ermon
Binary density ratio estimation (DRE), the problem of estimating the ratio $p_1/p_2$ given their empirical samples, provides the foundation for many state-of-the-art machine learning algorithms such as contrastive representation learning and covariate shift adaptation.
no code implementations • 12 Dec 2021 • Volodymyr Kuleshov, Evgenii Nikishin, Shantanu Thakoor, Tingfung Lau, Stefano Ermon
In this work, we seek to understand and extend adversarial examples across domains in which inputs are discrete, particularly across new domains, such as computational biology.
no code implementations • 5 Jan 2022 • Andy Shih, Stefano Ermon, Dorsa Sadigh
In this work, we study the problem of conditional multi-agent imitation learning, where we have access to joint trajectory demonstrations at training time, and we must interact with and adapt to new partners at test time.
no code implementations • 4 Apr 2022 • Yutong He, William Zhang, Chenlin Meng, Marshall Burke, David B. Lobell, Stefano Ermon
Automated tracking of urban development in areas where construction information is not available has become possible with recent advances in machine learning and remote sensing.
no code implementations • 15 Apr 2022 • Michael Poli, Winnie Xu, Stefano Massaroli, Chenlin Meng, Kuno Kim, Stefano Ermon
We investigate how to leverage the representations produced by Neural Collages in various tasks, including data compression and generation.
no code implementations • 23 Jun 2022 • Charles Marx, Shengjia Zhao, Willie Neiswanger, Stefano Ermon
We introduce a versatile class of algorithms for recalibration in regression that we call Modular Conformal Calibration (MCC).
no code implementations • 17 Jul 2022 • Yezhen Cong, Samar Khanna, Chenlin Meng, Patrick Liu, Erik Rozi, Yutong He, Marshall Burke, David B. Lobell, Stefano Ermon
Unsupervised pre-training methods for large vision models have been shown to enhance performance on downstream supervised tasks.
no code implementations • 10 Sep 2022 • Sara A. Miskovich, Willie Neiswanger, William Colocho, Claudio Emma, Jacqueline Garrahan, Timothy Maxwell, Christopher Mayes, Stefano Ermon, Auralee Edelen, Daniel Ratner
Traditional black-box optimizers such as Bayesian optimization are slow and inefficient when dealing with such objectives, as each query must acquire the full series of measurements but returns only the emittance.
no code implementations • 28 Sep 2022 • Chenlin Meng, Linqi Zhou, Kristy Choi, Tri Dao, Stefano Ermon
Normalizing flows model complex probability distributions using maps obtained by composing invertible layers.
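The standard building block is an invertible coupling layer whose Jacobian log-determinant is cheap to compute; a minimal numpy sketch with hypothetical linear scale/shift networks (illustrative only, not the paper's architecture):

```python
import numpy as np

def coupling_forward(x, s_fn, t_fn):
    """Affine coupling: y1 = x1, y2 = x2 * exp(s(x1)) + t(x1).
    Invertible, with log|det J| equal to the sum of the scale outputs."""
    x1, x2 = np.split(x, 2, axis=-1)
    s, t = s_fn(x1), t_fn(x1)
    y = np.concatenate([x1, x2 * np.exp(s) + t], axis=-1)
    return y, s.sum(axis=-1)

rng = np.random.default_rng(0)
W_s, W_t = 0.1 * rng.normal(size=(2, 2)), 0.1 * rng.normal(size=(2, 2))
y, logdet = coupling_forward(rng.normal(size=(5, 4)),
                             lambda h: h @ W_s, lambda h: h @ W_t)
print(y.shape, logdet.shape)  # (5, 4) (5,)
```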
no code implementations • 4 Oct 2022 • Willie Neiswanger, Lantao Yu, Shengjia Zhao, Chenlin Meng, Stefano Ermon
Bayesian optimization (BO) is a popular method for efficiently inferring optima of an expensive black-box function via a sequence of queries.
no code implementations • 22 Oct 2022 • Kristy Choi, Chris Cundy, Sanjari Srivastava, Stefano Ermon
Particularly in low-data regimes, an outstanding challenge in machine learning is developing principled techniques for augmenting our models with suitable priors.
no code implementations • 2 Nov 2022 • Chenlin Meng, Kristy Choi, Jiaming Song, Stefano Ermon
To this end, we propose an analogous score function called the "Concrete score", a generalization of the (Stein) score for discrete settings.
no code implementations • 4 Jan 2023 • Enci Liu, Chenlin Meng, Matthew Kolodner, Eun Jee Sung, Sihang Chen, Marshall Burke, David Lobell, Stefano Ermon
In this paper, we propose a method for estimating building coverage using only publicly available low-resolution satellite imagery that is more frequently updated.
no code implementations • 5 Mar 2023 • Lantao Yu, Tianhe Yu, Jiaming Song, Willie Neiswanger, Stefano Ermon
In this case, a well-known issue is the distribution shift between the learned policy and the behavior policy that collects the offline data.
no code implementations • 29 Mar 2023 • Michael Poli, Stefano Massaroli, Stefano Ermon, Bryan Wilder, Eric Horvitz
We present a methodology for formulating simplifying abstractions in machine learning systems by identifying and harnessing the utility structure of decisions.
no code implementations • 10 Apr 2023 • Arundhati Banerjee, Soham Phade, Stefano Ermon, Stephan Zheng
We then show that our model-based meta-learning approach is cost-effective in intervening on bandit agents with unseen explore-exploit strategies.
no code implementations • 28 Apr 2023 • Chenqing Hua, Sitao Luan, Minkai Xu, Rex Ying, Jie Fu, Stefano Ermon, Doina Precup
Our model is a promising approach for designing stable and diverse molecules and can be applied to a wide range of tasks in molecular modeling.
no code implementations • 1 May 2023 • Gengchen Mai, Ni Lao, Yutong He, Jiaming Song, Stefano Ermon
To directly leverage the abundant geospatial information associated with images in pre-training, fine-tuning, and inference stages, we present Contrastive Spatial Pre-Training (CSP), a self-supervised learning framework for geo-tagged images.
no code implementations • 1 Jun 2023 • Chieh-Hsin Lai, Yuhta Takida, Toshimitsu Uesaka, Naoki Murata, Yuki Mitsufuji, Stefano Ermon
The emergence of various notions of "consistency" in diffusion models has garnered considerable attention and helped achieve improved sample quality, likelihood estimation, and accelerated sampling.
no code implementations • 8 Jun 2023 • Chris Cundy, Stefano Ermon
This allows us to minimize a variety of divergences between the distribution of sequences generated by an autoregressive model and sequences from a dataset, including divergences with weight on OOD generated sequences.
no code implementations • 30 Jun 2023 • Gengchen Mai, Yao Xuan, Wenyun Zuo, Yutong He, Jiaming Song, Stefano Ermon, Krzysztof Janowicz, Ni Lao
When applied to large-scale real-world GPS coordinate datasets, which require distance metric learning on the spherical surface, both types of models can fail due to the map projection distortion problem (2D) and the spherical-to-Euclidean distance approximation error (3D).
no code implementations • 30 Sep 2023 • Gengchen Mai, Ni Lao, Weiwei Sun, Yuchi Ma, Jiaming Song, Chenlin Meng, Hongxu Ma, Jinmeng Rao, Ziyuan Li, Stefano Ermon
Existing digital sensors capture images at fixed spatial and spectral resolutions (e.g., RGB, multispectral, and hyperspectral images), and each combination requires bespoke machine learning models.
no code implementations • 4 Oct 2023 • Chenwei Wu, Li Erran Li, Stefano Ermon, Patrick Haffner, Rong Ge, Zaiwei Zhang
Compositionality is a common property in many modalities including natural languages and images, but the compositional generalization of multi-modal models is not well-understood.
no code implementations • 26 Oct 2023 • Gabriel Nobis, Marco Aversa, Maximilian Springenberg, Michael Detzel, Stefano Ermon, Shinichi Nakajima, Roderick Murray-Smith, Sebastian Lapuschkin, Christoph Knochenhauer, Luis Oala, Wojciech Samek
We generalize the continuous time framework for score-based generative models from an underlying Brownian motion (BM) to an approximation of fractional Brownian motion (FBM).
no code implementations • 21 Nov 2023 • Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam, Stefano Ermon, Caiming Xiong, Shafiq Joty, Nikhil Naik
Large language models (LLMs) are fine-tuned using human comparison data with Reinforcement Learning from Human Feedback (RLHF) methods to make them better aligned with users' preferences.
no code implementations • 28 Nov 2023 • Yutong He, Naoki Murata, Chieh-Hsin Lai, Yuhta Takida, Toshimitsu Uesaka, Dongjun Kim, Wei-Hsiang Liao, Yuki Mitsufuji, J. Zico Kolter, Ruslan Salakhutdinov, Stefano Ermon
Despite the recent advancements, conditional image generation still faces challenges of cost, generalizability, and the need for task-specific training.
no code implementations • 6 Dec 2023 • Samar Khanna, Patrick Liu, Linqi Zhou, Chenlin Meng, Robin Rombach, Marshall Burke, David Lobell, Stefano Ermon
Our method outperforms previous state-of-the-art methods for satellite image generation and is the first large-scale generative foundation model for satellite imagery.
no code implementations • 19 Jan 2024 • Minkai Xu, Jiaqi Han, Aaron Lou, Jean Kossaifi, Arvind Ramanathan, Kamyar Azizzadenesheli, Jure Leskovec, Stefano Ermon, Anima Anandkumar
Modeling the complex three-dimensional (3D) dynamics of relational systems is an important problem in the natural sciences, with applications ranging from molecular simulations to particle mechanics.
no code implementations • 2 Feb 2024 • Zhuo Zheng, Yanfei Zhong, Liangpei Zhang, Stefano Ermon
Visual foundation models have achieved remarkable results in zero-shot image classification and segmentation, but zero-shot change detection remains an open problem.
no code implementations • 26 Mar 2024 • Michael Poli, Armin W Thomas, Eric Nguyen, Pragaash Ponnusamy, Björn Deiseroth, Kristian Kersting, Taiji Suzuki, Brian Hie, Stefano Ermon, Christopher Ré, Ce Zhang, Stefano Massaroli
The development of deep learning architectures is a resource-demanding process, due to a vast design space, long prototyping times, and high compute costs associated with at-scale model training and evaluation.
no code implementations • 28 Mar 2024 • Ryan Park, Rafael Rafailov, Stefano Ermon, Chelsea Finn
A number of approaches have been developed to control those biases in the classical RLHF literature, but the problem remains relatively under-explored for Direct Alignment Algorithms such as Direct Preference Optimization (DPO).
no code implementations • 3 Apr 2024 • Hao Li, Yang Zou, Ying Wang, Orchid Majumder, Yusheng Xie, R. Manmatha, Ashwin Swaminathan, Zhuowen Tu, Stefano Ermon, Stefano Soatto
On the data scaling side, we show the quality and diversity of the training set matters more than simply dataset size.
1 code implementation • 28 Feb 2022 • Divyansh Garg, Skanda Vaidyanath, Kuno Kim, Jiaming Song, Stefano Ermon
Learning policies that effectively utilize language instructions in complex, multi-task environments is an important problem in sequential decision-making.
1 code implementation • 23 Aug 2023 • Jonathan Xu, Amna Elmustafa, Liya Weldegebriel, Emnet Negash, Richard Lee, Chenlin Meng, Stefano Ermon, David Lobell
Small farms contribute to a large share of the productive land in developing countries.
1 code implementation • 27 Jan 2018 • Jonathan Kuck, Ashish Sabharwal, Stefano Ermon
Rademacher complexity is often used to characterize the learnability of a hypothesis class and is known to be related to the class size.
1 code implementation • NeurIPS 2019 • Jonathan Kuck, Tri Dao, Hamid Rezatofighi, Ashish Sabharwal, Stefano Ermon
Computing the permanent of a non-negative matrix is a core problem with practical applications ranging from target tracking to statistical thermodynamics.
1 code implementation • NeurIPS 2023 • Charles Marx, Sofian Zalouk, Stefano Ermon
Calibration ensures that probabilistic forecasts meaningfully capture uncertainty by requiring that predicted probabilities align with empirical frequencies.
1 code implementation • 1 Oct 2015 • Michael Xie, Neal Jean, Marshall Burke, David Lobell, Stefano Ermon
We train a fully convolutional CNN model to predict nighttime lights from daytime imagery, simultaneously learning features that are useful for poverty prediction.
1 code implementation • 5 Oct 2018 • Mike Wu, Noah Goodman, Stefano Ermon
Stochastic optimization techniques are standard in variational inference algorithms.
1 code implementation • 2 Feb 2022 • Mark Beliaev, Andy Shih, Stefano Ermon, Dorsa Sadigh, Ramtin Pedarsani
In this work, we show that unsupervised learning over demonstrator expertise can lead to a consistent boost in the performance of imitation learning algorithms.
1 code implementation • 22 Mar 2022 • Benedikt Boecking, Nicholas Roberts, Willie Neiswanger, Stefano Ermon, Frederic Sala, Artur Dubrawski
The model outperforms baseline weak supervision label models on a number of multiclass image classification datasets, improves the quality of generated images, and further improves end-model performance through data augmentation with synthetic samples.
1 code implementation • NeurIPS 2018 • Aditya Grover, Tudor Achim, Stefano Ermon
Several algorithms for solving constraint satisfaction problems are based on survey propagation, a variational inference scheme used to obtain approximate marginal probability estimates for variable assignments.
1 code implementation • 29 Sep 2022 • Gengchen Mai, Chiyu Jiang, Weiwei Sun, Rui Zhu, Yao Xuan, Ling Cai, Krzysztof Janowicz, Stefano Ermon, Ni Lao
For the spatial domain approach, we propose ResNet1D, a 1D CNN-based polygon encoder, which uses circular padding to achieve loop origin invariance on simple polygons.
1 code implementation • 12 Dec 2023 • Yuxuan Song, Jingjing Gong, Minkai Xu, Ziyao Cao, Yanyan Lan, Stefano Ermon, Hao Zhou, Wei-Ying Ma
The generation of 3D molecules requires simultaneously deciding the categorical features (atom types) and continuous features (atom coordinates).
2 code implementations • 13 Feb 2024 • Tailin Wu, Willie Neiswanger, Hongtao Zheng, Stefano Ermon, Jure Leskovec
Deep learning-based surrogate models have demonstrated remarkable advantages over classical solvers in terms of speed, often achieving speedups of 10 to 1000 times over traditional partial differential equation (PDE) solvers.
2 code implementations • NeurIPS 2019 • Aditya Grover, Jiaming Song, Alekh Agarwal, Kenneth Tran, Ashish Kapoor, Eric Horvitz, Stefano Ermon
A standard technique to correct this bias is importance sampling, where samples from the model are weighted by the likelihood ratio under model and true distributions.
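The likelihood ratio itself can be estimated with a probabilistic classifier trained to distinguish data from model samples, whose odds recover p_data/p_model under equal class priors; a sketch with synthetic 1D stand-ins for both distributions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x_data = rng.normal(0.0, 1.0, size=(1000, 1))   # stand-in "true" samples
x_model = rng.normal(0.5, 1.2, size=(1000, 1))  # stand-in model samples

X = np.vstack([x_data, x_model])
y = np.concatenate([np.ones(1000), np.zeros(1000)])  # 1 = data, 0 = model
clf = LogisticRegression().fit(X, y)

p = clf.predict_proba(x_model)[:, 1]
weights = p / (1.0 - p)  # estimated p_data(x)/p_model(x) at model samples
print(weights[:5])       # importance weights for the first few samples
```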
1 code implementation • 24 Dec 2022 • Linqi Zhou, Michael Poli, Winnie Xu, Stefano Massaroli, Stefano Ermon
Methods based on ordinary differential equations (ODEs) are widely used to build generative models of time-series.
3 code implementations • 7 May 2019 • Burak Uzkent, Evan Sheehan, Chenlin Meng, Zhongyi Tang, Marshall Burke, David Lobell, Stefano Ermon
Despite recent progress in computer vision, fine-grained interpretation of satellite images remains challenging because of a lack of labeled training data.
1 code implementation • 23 Jun 2020 • Samarth Sinha, Jiaming Song, Animesh Garg, Stefano Ermon
The use of past experiences to accelerate temporal difference (TD) learning of value functions, or experience replay, is a key component in deep reinforcement learning.
1 code implementation • NeurIPS 2020 • Tianyu Pang, Kun Xu, Chongxuan Li, Yang Song, Stefano Ermon, Jun Zhu
Several machine learning applications involve the optimization of higher-order derivatives (e.g., gradients of gradients) during training, which can be expensive with respect to memory and computation even with automatic differentiation.
1 code implementation • 22 Nov 2021 • Kristy Choi, Chenlin Meng, Yang Song, Stefano Ermon
We then estimate the instantaneous rate of change of the bridge distributions indexed by time (the "time score") -- a quantity defined analogously to data (Stein) scores -- with a novel time score matching objective.
1 code implementation • 26 May 2022 • Andy Shih, Dorsa Sadigh, Stefano Ermon
Conditional inference on arbitrary subsets of variables is a core problem in probabilistic inference with important applications such as masked language modeling and image inpainting.
1 code implementation • 10 Oct 2023 • Rohin Manvi, Samar Khanna, Gengchen Mai, Marshall Burke, David Lobell, Stefano Ermon
With GeoLLM, we observe that GPT-3.5 outperforms Llama 2 and RoBERTa by 19% and 51% respectively, suggesting that the performance of our method scales well with the size of the model and its pretraining dataset.
1 code implementation • 5 Feb 2024 • Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon
Initially, we demonstrate that LLMs are capable of making accurate zero-shot geospatial predictions in the form of ratings that show strong monotonic correlation with ground truth (Spearman's $\rho$ of up to 0.89).
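Spearman's ρ is the Pearson correlation of ranks, so it measures exactly this kind of monotonic agreement; a tiny example with hypothetical ratings and ground-truth values:

```python
from scipy.stats import spearmanr

llm_ratings = [1.0, 2.5, 2.0, 4.0, 5.0]  # hypothetical zero-shot ratings
ground_truth = [10, 20, 30, 55, 80]      # hypothetical target values
rho, pval = spearmanr(llm_ratings, ground_truth)
print(rho)  # 0.9: one swapped pair away from perfect rank agreement
```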
1 code implementation • NeurIPS 2021 • Andy Shih, Dorsa Sadigh, Stefano Ermon
Probabilistic circuits (PCs) are a family of generative models which allows for the computation of exact likelihoods and marginals of its probability distributions.
1 code implementation • 27 May 2018 • Hongyu Ren, Russell Stewart, Jiaming Song, Volodymyr Kuleshov, Stefano Ermon
Constraint-based learning reduces the burden of collecting labels by having users specify general properties of structured outputs, such as constraints imposed by physical laws.
1 code implementation • 5 Feb 2019 • Mike Wu, Kristy Choi, Noah Goodman, Stefano Ermon
Despite the recent success in probabilistic modeling and their applications, generative models trained using traditional inference techniques struggle to adapt to new distributions, even when the target distribution may be closely related to the ones seen during training.
1 code implementation • 22 Oct 2019 • Jiaming Song, Stefano Ermon
Generative adversarial networks (GANs) have enjoyed much success in learning high-dimensional distributions.
1 code implementation • ICML 2020 • Jiaming Song, Stefano Ermon
Generative adversarial networks (GANs) variants approximately minimize divergences between the model and the data distribution using a discriminator.
1 code implementation • ICLR 2021 • Andy Shih, Arjun Sawhney, Jovana Kondic, Stefano Ermon, Dorsa Sadigh
Humans can quickly adapt to new partners in collaborative tasks (e.g., playing basketball), because they understand which fundamental skills of the task (e.g., how to dribble, how to shoot) carry over across new partners.
1 code implementation • NeurIPS 2020 • Andy Shih, Stefano Ermon
Inference in discrete graphical models with variational methods is difficult because of the inability to re-parameterize gradients of the Evidence Lower Bound (ELBO).
1 code implementation • 9 Oct 2022 • Chieh-Hsin Lai, Yuhta Takida, Naoki Murata, Toshimitsu Uesaka, Yuki Mitsufuji, Stefano Ermon
Score-based generative models (SGMs) learn a family of noise-conditional score functions corresponding to the data density perturbed with increasingly large amounts of noise.
1 code implementation • 22 Apr 2024 • Fahim Tajwar, Anikait Singh, Archit Sharma, Rafael Rafailov, Jeff Schneider, Tengyang Xie, Stefano Ermon, Chelsea Finn, Aviral Kumar
Our main finding is that, in general, approaches that use on-policy sampling or attempt to push down the likelihood on certain responses (i.e., employ a "negative gradient") outperform offline and maximum likelihood objectives.
1 code implementation • ICML 2020 • Rui Shu, Tung Nguyen, Yin-Lam Chow, Tuan Pham, Khoat Than, Mohammad Ghavamzadeh, Stefano Ermon, Hung H. Bui
High-dimensional observations and unknown dynamics are major challenges when applying optimal control to many real-world decision making tasks.
1 code implementation • 7 Feb 2023 • Andy Shih, Dorsa Sadigh, Stefano Ermon
LHTS is compatible with all likelihood-based models, and optimizes for the long horizon likelihood of samples.
1 code implementation • ICML 2020 • Kristy Choi, Aditya Grover, Trisha Singh, Rui Shu, Stefano Ermon
Real-world datasets are often biased with respect to key demographic factors such as race and gender.
1 code implementation • 15 Jun 2020 • Jihyeon Lee, Dylan Grosz, Burak Uzkent, Sicheng Zeng, Marshall Burke, David Lobell, Stefano Ermon
Major decisions from governments and other large organizations rely on measurements of the populace's well-being, but making such measurements at a broad scale is expensive and thus infrequent in much of the developing world.
1 code implementation • 5 Jul 2021 • Kristy Choi, Madeline Liao, Stefano Ermon
Density ratio estimation serves as an important technique in the unsupervised machine learning toolbox.
1 code implementation • 26 Nov 2022 • Michael Poli, Stefano Massaroli, Federico Berto, Jinkyoo Park, Tri Dao, Christopher Ré, Stefano Ermon
Instead, this work introduces a blueprint for frequency domain learning through a single transform: transform once (T1).
1 code implementation • 30 Jan 2023 • Naoki Murata, Koichi Saito, Chieh-Hsin Lai, Yuhta Takida, Toshimitsu Uesaka, Yuki Mitsufuji, Stefano Ermon
Pre-trained diffusion models have been successfully used as priors in a variety of linear inverse problems, where the goal is to reconstruct a signal from noisy linear measurements.
1 code implementation • 14 Sep 2019 • Sawyer Birnbaum, Volodymyr Kuleshov, Zayd Enam, Pang Wei Koh, Stefano Ermon
Learning representations that accurately capture long-range dependencies in sequential inputs -- including text, audio, and genomic data -- is a key problem in deep learning.
Ranked #2 on Audio Super-Resolution on Voice Bank corpus (VCTK) (using extra training data)
1 code implementation • NeurIPS 2020 • Jonathan Kuck, Shuvam Chakraborty, Hao Tang, Rachel Luo, Jiaming Song, Ashish Sabharwal, Stefano Ermon
Learned neural solvers have successfully been used to solve combinatorial optimization and decision problems.
1 code implementation • 27 May 2023 • Zhengbang Zhu, Minghuan Liu, Liyuan Mao, Bingyi Kang, Minkai Xu, Yong Yu, Stefano Ermon, Weinan Zhang
To the best of our knowledge, MADiff is the first diffusion-based multi-agent offline RL framework, which behaves as both a decentralized policy and a centralized controller.
1 code implementation • 27 Feb 2017 • Aditya Grover, Stefano Ermon
We propose a novel approach for using unsupervised boosting to create an ensemble of generative models, where models are trained in sequence to correct earlier mistakes.
3 code implementations • 4 Mar 2020 • Chenlin Meng, Yang Song, Jiaming Song, Stefano Ermon
Iterative Gaussianization is a fixed-point iteration procedure that can transform any continuous random vector into a Gaussian one.
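A single iteration marginally Gaussianizes each coordinate (rank, then inverse normal CDF) and follows with a random rotation; a toy sketch of the loop under those assumptions, not the paper's exact parameterization:

```python
import numpy as np
from scipy import stats

def gaussianize_step(X, rng):
    """One iteration: empirical-CDF marginal Gaussianization per column,
    followed by a random orthogonal rotation."""
    n, d = X.shape
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)  # 0..n-1 per column
    Z = stats.norm.ppf((ranks + 0.5) / n)              # marginals -> N(0, 1)
    Q, _ = np.linalg.qr(rng.normal(size=(d, d)))       # random rotation
    return Z @ Q

rng = np.random.default_rng(0)
X = rng.exponential(size=(2000, 2))  # clearly non-Gaussian input
for _ in range(5):
    X = gaussianize_step(X, rng)
print(np.round(np.cov(X.T), 2))      # approaches the identity covariance
```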
1 code implementation • 9 Dec 2021 • Viraj Mehta, Biswajit Paria, Jeff Schneider, Stefano Ermon, Willie Neiswanger
In particular, we leverage ideas from Bayesian optimal experimental design to guide the selection of state-action queries for efficient learning.
1 code implementation • 6 Oct 2022 • Viraj Mehta, Ian Char, Joseph Abbate, Rory Conlin, Mark D. Boyer, Stefano Ermon, Jeff Schneider, Willie Neiswanger
In this work, we develop a method that allows us to plan for exploration while taking both the task and the current knowledge about the dynamics into account.
1 code implementation • ICLR 2018 • Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, Nate Kushman
Adversarial perturbations of normal images are usually imperceptible to humans, but they can seriously confuse state-of-the-art machine learning models.
3 code implementations • 11 Dec 2018 • Jiaming Song, Pratyusha Kalluri, Aditya Grover, Shengjia Zhao, Stefano Ermon
Learning data representations that are transferable and are fair with respect to certain protected attributes is crucial to reducing unfair decisions while preserving the utility of the data.
1 code implementation • ICML 2020 • Kuno Kim, Yihong Gu, Jiaming Song, Shengjia Zhao, Stefano Ermon
We formalize the Domain Adaptive Imitation Learning (DAIL) problem, which is a unified framework for imitation learning in the presence of viewpoint, embodiment, and dynamics mismatch.
1 code implementation • ICLR 2021 • Yilun Xu, Yang Song, Sahaj Garg, Linyuan Gong, Rui Shu, Aditya Grover, Stefano Ermon
Experimentally, we demonstrate in several image and audio generation tasks that sample quality degrades gracefully as we reduce the computational budget for sampling.
2 code implementations • ICLR 2021 • Abhishek Sinha, Kumar Ayush, Jiaming Song, Burak Uzkent, Hongxia Jin, Stefano Ermon
Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.
Ranked #6 on Image Generation on CIFAR-100
1 code implementation • 10 Feb 2020 • Yang Song, Chenlin Meng, Renjie Liao, Stefano Ermon
Feedforward computation, such as evaluating a neural network or sampling from an autoregressive model, is ubiquitous in machine learning.
1 code implementation • NeurIPS 2021 • Chris Cundy, Aditya Grover, Stefano Ermon
We propose Bayesian Causal Discovery Nets (BCD Nets), a variational inference framework for estimating a distribution over DAGs characterizing a linear-Gaussian SEM.
2 code implementations • NeurIPS 2018 • Shengjia Zhao, Hongyu Ren, Arianna Yuan, Jiaming Song, Noah Goodman, Stefano Ermon
In high dimensional settings, density estimation algorithms rely crucially on their inductive bias.
1 code implementation • 16 Dec 2021 • Chenlin Meng, Enci Liu, Willie Neiswanger, Jiaming Song, Marshall Burke, David Lobell, Stefano Ermon
We show empirically that the proposed framework achieves strong performance on estimating the number of buildings in the United States and Africa, cars in Kenya, brick kilns in Bangladesh, and swimming pools in the U.S., while requiring as few as 0.01% of satellite images compared to an exhaustive approach.
2 code implementations • 18 Jun 2018 • Shengjia Zhao, Jiaming Song, Stefano Ermon
A large number of objectives have been proposed to train latent variable generative models.
1 code implementation • NeurIPS 2021 • Yutong He, Dingjie Wang, Nicholas Lai, William Zhang, Chenlin Meng, Marshall Burke, David B. Lobell, Stefano Ermon
High-resolution satellite imagery has proven useful for a broad range of tasks, including measurement of global human population, local economic livelihoods, and biodiversity, among many others.
2 code implementations • ICML 2018 • Yang Song, Jiaming Song, Stefano Ermon
An appealing property of the natural gradient is that it is invariant to arbitrary differentiable reparameterizations of the model.
3 code implementations • 14 Jun 2021 • Tung Nguyen, Rui Shu, Tuan Pham, Hung Bui, Stefano Ermon
High-dimensional observations are a major challenge in the application of model-based reinforcement learning (MBRL) to real-world environments.
2 code implementations • ICLR Workshop DeepGenStruct 2019 • Aditya Grover, Christopher Chute, Rui Shu, Zhangjie Cao, Stefano Ermon
Given datasets from multiple domains, a key challenge is to efficiently exploit these data sources for modeling a target domain.
1 code implementation • ICML 2020 • Lantao Yu, Yang Song, Jiaming Song, Stefano Ermon
Experimental results demonstrate the superiority of f-EBM over contrastive divergence, as well as the benefits of training EBMs using f-divergences other than KL.
1 code implementation • ICML 2018 • Volodymyr Kuleshov, Nathan Fenner, Stefano Ermon
Methods for reasoning under uncertainty are a key building block of accurate and reliable machine learning systems.
1 code implementation • ICCV 2021 • Kumar Ayush, Burak Uzkent, Chenlin Meng, Kumar Tanmay, Marshall Burke, David Lobell, Stefano Ermon
Contrastive learning methods have significantly narrowed the gap between supervised and unsupervised learning on computer vision tasks.
Ranked #5 on Semantic Segmentation on SpaceNet 1 (using extra training data)
1 code implementation • ICLR 2021 • Sharon Zhou, Eric Zelikman, Fred Lu, Andrew Y. Ng, Gunnar Carlsson, Stefano Ermon
Learning disentangled representations is regarded as a fundamental task for improving the generalization, robustness, and interpretability of generative models.
1 code implementation • NeurIPS 2019 • Yang Song, Chenlin Meng, Stefano Ermon
To demonstrate their flexibility, we show that our invertible neural networks are competitive with ResNets on MNIST and CIFAR-10 classification.
Ranked #4 on Image Generation on MNIST
1 code implementation • 26 Feb 2024 • Ling Yang, Zhilong Zhang, Zhaochen Yu, Jingwei Liu, Minkai Xu, Stefano Ermon, Bin Cui
To address this issue, we propose a novel and general contextualized diffusion model (ContextDiff) by incorporating the cross-modal context encompassing interactions and alignments between text condition and visual sample into forward and reverse processes.
1 code implementation • 13 Sep 2022 • Yann Dubois, Tatsunori Hashimoto, Stefano Ermon, Percy Liang
For non-contrastive learning, we use our framework to derive a simple and novel objective.
1 code implementation • 27 Jun 2022 • Jiaming Song, Lantao Yu, Willie Neiswanger, Stefano Ermon
To extend BO to a broader class of models and utilities, we propose likelihood-free BO (LFBO), an approach based on likelihood-free inference.
1 code implementation • 19 Apr 2021 • Willie Neiswanger, Ke Alexander Wang, Stefano Ermon
Given such an $\mathcal{A}$, and a prior distribution over $f$, we refer to the problem of inferring the output of $\mathcal{A}$ using $T$ evaluations as Bayesian Algorithm Execution (BAX).
1 code implementation • 19 Nov 2018 • Kristy Choi, Kedar Tatwawadi, Aditya Grover, Tsachy Weissman, Stefano Ermon
For reliable transmission across a noisy communication channel, classical results from information theory show that it is asymptotically optimal to separate out the source and channel coding processes.
1 code implementation • 29 Jan 2019 • Sang Michael Xie, Stefano Ermon
Many machine learning tasks require sampling a subset of items from a collection based on a parameterized distribution.
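One classical way to do this is the Gumbel-top-k trick: perturb log-probabilities with Gumbel noise and keep the k largest, which samples a size-k subset without replacement; a sketch of the hard (non-relaxed) version, not the paper's differentiable estimator:

```python
import numpy as np

def gumbel_top_k(log_probs, k, rng):
    """Sample k distinct items with probabilities proportional to
    exp(log_probs), via Gumbel perturbation of the logits."""
    g = rng.gumbel(size=len(log_probs))
    return np.argsort(-(log_probs + g))[:k]

rng = np.random.default_rng(0)
logits = np.log(np.array([0.5, 0.2, 0.2, 0.1]))
print(gumbel_top_k(logits, k=2, rng=rng))  # indices of the sampled subset
```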
1 code implementation • ICCV 2023 • Bram Wallace, Akash Gokul, Stefano Ermon, Nikhil Naik
Classifier guidance -- using the gradients of an image classifier to steer the generations of a diffusion model -- has the potential to dramatically expand the creative control over image generation and editing.
1 code implementation • 23 Sep 2022 • Bahjat Kawar, Jiaming Song, Stefano Ermon, Michael Elad
Diffusion models can be used as learned priors for solving various inverse problems.
2 code implementations • ICML 2018 • Manik Dhar, Aditya Grover, Stefano Ermon
In compressed sensing, a small number of linear measurements can be used to reconstruct an unknown signal.
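With a generative prior, reconstruction amounts to finding a latent code whose decoded output matches the measurements; a toy sketch where the "generator" is linear, so the search reduces to least squares (the paper's setting uses a learned deep generator instead):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 25, 5                      # signal dim, measurements, latent dim
W = rng.normal(size=(n, k))               # stand-in linear "generator" G(z) = W z
x_true = W @ rng.normal(size=k)           # signal lies on the generator's range
A = rng.normal(size=(m, n)) / np.sqrt(m)  # random measurement matrix, m << n
y = A @ x_true                            # observed linear measurements

z_hat, *_ = np.linalg.lstsq(A @ W, y, rcond=None)  # argmin_z ||A G(z) - y||^2
print(np.linalg.norm(W @ z_hat - x_true))          # near-zero reconstruction error
```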
1 code implementation • NeurIPS 2020 • Yusuke Tashiro, Yang Song, Stefano Ermon
Adversarial attacks often involve random perturbations of the inputs drawn from uniform or Gaussian distributions, e.g., to initialize optimization-based white-box attacks or generate update directions in black-box attacks.
1 code implementation • ICCV 2023 • Can Qin, Ning Yu, Chen Xing, Shu Zhang, Zeyuan Chen, Stefano Ermon, Yun Fu, Caiming Xiong, Ran Xu
Empirical results show that GlueNet can be trained efficiently and enables various capabilities beyond previous state-of-the-art models: 1) multilingual language models such as XLM-Roberta can be aligned with existing T2I models, allowing for the generation of high-quality images from captions beyond English; 2) GlueNet can align multi-modal encoders such as AudioCLIP with the Stable Diffusion model, enabling sound-to-image generation; 3) it can also upgrade the current text encoder of the latent diffusion model for challenging case generation.
1 code implementation • 19 Jun 2019 • Ali Malik, Volodymyr Kuleshov, Jiaming Song, Danny Nemer, Harlan Seymour, Stefano Ermon
Estimates of predictive uncertainty are important for accurate model-based planning and reinforcement learning.
1 code implementation • 28 Mar 2018 • Aditya Grover, Aaron Zweig, Stefano Ermon
Graphs are a fundamental abstraction for modeling relational data.
Ranked #8 on Link Prediction on Citeseer
1 code implementation • NeurIPS 2021 • Robin Swezey, Aditya Grover, Bruno Charron, Stefano Ermon
A key challenge with machine learning approaches for ranking is the gap between the performance metrics of interest and the surrogate loss functions that can be optimized with gradient-based methods.
1 code implementation • NeurIPS 2018 • Yang Song, Rui Shu, Nate Kushman, Stefano Ermon
Then, conditioned on a desired class, we search over the AC-GAN latent space to find images that are likely under the generative model and are misclassified by a target classifier.
3 code implementations • 14 Dec 2019 • Vishnu Sarukkai, Anirudh Jain, Burak Uzkent, Stefano Ermon
In contrast, we cast the problem of cloud removal as a conditional image synthesis challenge, and we propose a trainable spatiotemporal generator network (STGAN) to remove clouds.
Ranked #6 on Cloud Removal on SEN12MS-CR-TS
1 code implementation • NeurIPS 2023 • Alexandre Lacoste, Nils Lehmann, Pau Rodriguez, Evan David Sherwin, Hannah Kerner, Björn Lütjens, Jeremy Andrew Irvin, David Dao, Hamed Alemohammad, Alexandre Drouin, Mehmet Gunturkun, Gabriel Huang, David Vazquez, Dava Newman, Yoshua Bengio, Stefano Ermon, Xiao Xiang Zhu
Recent progress in self-supervision has shown that pre-training large neural networks on vast amounts of unsupervised data can lead to substantial increases in generalization to downstream tasks.
1 code implementation • ICLR 2020 • Jiaming Song, Stefano Ermon
Variational approaches based on neural networks are showing promise for estimating mutual information (MI) between high dimensional variables.
3 code implementations • 5 Jan 2023 • Divyansh Garg, Joey Hejna, Matthieu Geist, Stefano Ermon
Using EVT, we derive our Extreme Q-Learning framework and consequently online and, for the first time, offline MaxEnt Q-learning algorithms, that do not explicitly require access to a policy or its entropy.
2 code implementations • CVPR 2020 • Burak Uzkent, Stefano Ermon
While high resolution images contain semantically more useful information than their lower resolution counterparts, processing them is computationally more expensive, and in some applications, e.g., remote sensing, they can be much more expensive to acquire.
1 code implementation • NeurIPS 2019 • Lantao Yu, Tianhe Yu, Chelsea Finn, Stefano Ermon
Critically, our model can infer rewards for new, structurally-similar tasks from a single demonstration.
Ranked #1 on MuJoCo Games on Sawyer Pusher
3 code implementations • 9 Dec 2019 • Burak Uzkent, Christopher Yeh, Stefano Ermon
Traditionally, an object detector is applied to every part of the scene of interest, and its accuracy and computational cost increases with higher resolution images.
1 code implementation • ICLR 2020 • Yilun Xu, Shengjia Zhao, Jiaming Song, Russell Stewart, Stefano Ermon
We propose a new framework for reasoning about information in complex systems.
1 code implementation • NeurIPS 2018 • Neal Jean, Sang Michael Xie, Stefano Ermon
Large amounts of labeled data are typically required to train deep learning models.
1 code implementation • 16 Mar 2023 • Shu Zhang, Xinyi Yang, Yihao Feng, Can Qin, Chia-Chih Chen, Ning Yu, Zeyuan Chen, Huan Wang, Silvio Savarese, Stefano Ermon, Caiming Xiong, Ran Xu
Incorporating human feedback has been shown to be crucial to align text generated by large language models to human preferences.
1 code implementation • 23 Jan 2019 • Chi-Sing Ho, Neal Jean, Catherine A. Hogan, Lena Blackmon, Stefanie S. Jeffrey, Mark Holodniy, Niaz Banaei, Amr A. E. Saleh, Stefano Ermon, Jennifer Dionne
By amassing the largest known dataset of bacterial Raman spectra, we are able to apply state-of-the-art deep learning approaches to identify 30 of the most common bacterial pathogens from noisy Raman spectra, achieving antibiotic treatment identification accuracies of 99.0$\pm$0.1%.