Search Results for author: Romain Lopez

Found 12 papers, 7 papers with code

A deep generative model for gene expression profiles from single-cell RNA sequencing

2 code implementations • 7 Sep 2017 • Romain Lopez, Jeffrey Regier, Michael Cole, Michael I. Jordan, Nir Yosef

We also extend our framework to account for batch effects and other confounding factors, and propose a Bayesian hypothesis test for differential expression that outperforms DESeq2.

Stochastic Optimization • Variational Inference
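A sample-based Bayesian hypothesis test of the kind the abstract mentions can be illustrated with a minimal sketch (hypothetical `bayes_factor` helper and toy posterior draws, not the paper's actual implementation): given posterior samples of expression levels for two cell groups, the evidence for differential expression is summarized as a log Bayes factor.

```python
import numpy as np

rng = np.random.default_rng(0)

def bayes_factor(samples_a, samples_b):
    """Estimate a log Bayes factor for differential expression from
    posterior samples of expression levels in two groups of cells:
    p = P(rho_a > rho_b), K = log(p / (1 - p))."""
    p = np.mean(samples_a > samples_b)
    p = np.clip(p, 1e-8, 1 - 1e-8)  # guard against log(0)
    return np.log(p / (1.0 - p))

# Illustrative stand-ins for posterior draws from a fitted model
samples_a = rng.normal(2.0, 0.5, size=10_000)  # group A expressed higher
samples_b = rng.normal(1.0, 0.5, size=10_000)
k = bayes_factor(samples_a, samples_b)
print(k > np.log(3))  # |K| > log(3) is a common evidence threshold
```

The test is symmetric: swapping the two groups flips the sign of K, so a single threshold on |K| flags differential expression in either direction.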

Information Constraints on Auto-Encoding Variational Bayes

no code implementations • NeurIPS 2018 • Romain Lopez, Jeffrey Regier, Michael I. Jordan, Nir Yosef

We show how to apply this method to a range of problems, including the problems of learning invariant representations and the learning of interpretable representations.

Cost-Effective Incentive Allocation via Structured Counterfactual Inference

no code implementations • 7 Feb 2019 • Romain Lopez, Chenchen Li, Xiang Yan, Junwu Xiong, Michael I. Jordan, Yuan Qi, Le Song

We address a practical problem ubiquitous in modern marketing campaigns, in which a central agent tries to learn a policy for allocating strategic financial incentives to customers and observes only bandit feedback.

counterfactual • Counterfactual Inference +2

A joint model of unpaired data from scRNA-seq and spatial transcriptomics for imputing missing gene expression measurements

2 code implementations • 6 May 2019 • Romain Lopez, Achille Nazaret, Maxime Langevin, Jules Samaran, Jeffrey Regier, Michael I. Jordan, Nir Yosef

Building upon domain adaptation work, we propose gimVI, a deep generative model for the integration of spatial transcriptomic data and scRNA-seq data that can be used to impute missing genes.

Domain Adaptation • Imputation

Decision-Making with Auto-Encoding Variational Bayes

2 code implementations • NeurIPS 2020 • Romain Lopez, Pierre Boyeau, Nir Yosef, Michael I. Jordan, Jeffrey Regier

To make decisions based on a model fit with auto-encoding variational Bayes (AEVB), practitioners often let the variational distribution serve as a surrogate for the posterior distribution.

Decision Making • Two-sample testing
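The surrogate-posterior decision rule this abstract alludes to can be sketched minimally (hypothetical `bayes_action` helper; the optional importance weights stand in for the corrections the paper studies): pick the action minimizing expected loss under samples from the variational distribution.

```python
import numpy as np

def bayes_action(loss, actions, posterior_samples, weights=None):
    """Choose the action minimizing (weighted) posterior expected loss.
    With weights=None, the variational distribution serves as a surrogate
    for the posterior; importance weights can correct for the mismatch."""
    if weights is None:
        weights = np.ones(len(posterior_samples))
    weights = weights / weights.sum()
    risks = [np.sum(weights * loss(a, posterior_samples)) for a in actions]
    return actions[int(np.argmin(risks))]

# Toy example: squared-error loss over a scalar latent variable
rng = np.random.default_rng(1)
samples = rng.normal(0.3, 1.0, size=5_000)   # stand-in for q(z|x) draws
sq_loss = lambda a, z: (a - z) ** 2
actions = np.linspace(-1, 1, 21)
print(bayes_action(sq_loss, actions, samples))  # near the sample mean, ~0.3
```

Under squared-error loss the optimal action is the posterior mean, so the quality of the decision hinges directly on how well the surrogate's mean matches the true posterior's.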

Learning from eXtreme Bandit Feedback

no code implementations • 27 Sep 2020 • Romain Lopez, Inderjit S. Dhillon, Michael I. Jordan

In POXM, the actions selected for the self-normalized importance sampling (sIS) estimator are the top-p actions of the logging policy, where p is tuned from the data and is significantly smaller than the size of the action space.

Extreme Multi-Label Classification • Recommendation Systems
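A rough sketch of the pruned self-normalized estimator described above (hypothetical `snips_top_p` helper; the actual estimator averages over logged interactions rather than a single per-action table, and p is tuned from data):

```python
import numpy as np

def snips_top_p(rewards, logging_probs, target_probs, p):
    """Self-normalized importance-sampling estimate of a target policy's
    value, restricted to the top-p actions of the logging policy.
    Pruning to top-p tames the variance caused by huge action spaces."""
    top = np.argsort(logging_probs)[::-1][:p]   # top-p logging actions
    w = target_probs[top] / logging_probs[top]  # importance weights
    return np.sum(w * rewards[top]) / np.sum(w)  # self-normalized average

# Toy bandit with a large action space, most actions rarely logged
rng = np.random.default_rng(2)
n = 1_000
logging = rng.dirichlet(np.ones(n))
target = rng.dirichlet(np.ones(n))
rewards = rng.random(n)
print(snips_top_p(rewards, logging, target, p=50))
```

Because the estimate is a self-normalized weighted average, it always stays inside the range of the observed rewards, unlike plain importance sampling whose variance can explode when logging probabilities are tiny.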

Large-Scale Differentiable Causal Discovery of Factor Graphs

1 code implementation • 15 Jun 2022 • Romain Lopez, Jan-Christian Hütter, Jonathan K. Pritchard, Aviv Regev

Combining this novel structural assumption with recent advances that bridge the gap between causal discovery and continuous optimization, we achieve causal discovery on thousands of variables.

Causal Discovery • Causal Inference
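The "continuous optimization" bridge mentioned in the abstract typically rests on a differentiable acyclicity measure. A minimal sketch using the well-known NOTEARS-style penalty h(W) = tr(exp(W∘W)) − d (an illustration of that bridge, not the paper's factor-graph formulation):

```python
import numpy as np

def acyclicity(W, terms=20):
    """NOTEARS-style acyclicity measure h(W) = tr(exp(W o W)) - d,
    computed via a truncated Taylor series of the matrix exponential;
    h(W) == 0 exactly when the weighted graph W is a DAG."""
    d = W.shape[0]
    A = W * W                 # elementwise square: non-negative adjacency
    term = np.eye(d)
    total = np.trace(term)
    for k in range(1, terms):
        term = term @ A / k   # accumulate A^k / k!
        total += np.trace(term)
    return total - d

dag = np.array([[0., 1.], [0., 0.]])   # edge 0 -> 1 only: acyclic
cyc = np.array([[0., 1.], [1., 0.]])   # 0 <-> 1: contains a cycle
print(acyclicity(dag))   # ~0.0
print(acyclicity(cyc))   # > 0: cycles contribute positive trace terms
```

Penalizing h(W) inside a smooth objective is what lets gradient-based solvers search over graph structures, which is the advance the abstract combines with its structural assumption to scale to thousands of variables.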

Learning Causal Representations of Single Cells via Sparse Mechanism Shift Modeling

1 code implementation • 7 Nov 2022 • Romain Lopez, Nataša Tagasovska, Stephen Ra, Kyunghyun Cho, Jonathan K. Pritchard, Aviv Regev

Instead, recent methods propose to leverage non-stationary data, together with the sparse mechanism shift assumption, in order to learn disentangled representations with causal semantics.

Disentanglement • Domain Generalization +1

NODAGS-Flow: Nonlinear Cyclic Causal Structure Learning

1 code implementation • 4 Jan 2023 • Muralikrishnna G. Sethuraman, Romain Lopez, Rahul Mohan, Faramarz Fekri, Tommaso Biancalani, Jan-Christian Hütter

Learning causal relationships between variables is a well-studied problem in statistics, with many important applications in science.

Toward the Identifiability of Comparative Deep Generative Models

no code implementations • 29 Jan 2024 • Romain Lopez, Jan-Christian Hütter, Ehsan Hajiramezanali, Jonathan Pritchard, Aviv Regev

Finally, we introduce a novel methodology for fitting comparative DGMs that improves the treatment of multiple data sources via multi-objective optimization and that helps adjust the hyperparameter for the regularization in an interpretable manner, using constrained optimization.
