no code implementations • 31 Oct 2024 • Abhinav Kumar, Kirankumar Shiragur, Caroline Uhler

The ability to conduct interventions plays a pivotal role in learning causal relationships among variables, thus facilitating applications across diverse scientific disciplines such as genomics, economics, and machine learning.

1 code implementation • 31 Oct 2024 • Ryan Welch, JiaQi Zhang, Caroline Uhler

Causal disentanglement aims to learn about latent causal factors behind data, holding the promise to augment existing representation learning methods in terms of interpretability and extrapolation.

no code implementations • 31 Oct 2024 • Chenyu Wang, Sharut Gupta, Xinyi Zhang, Sana Tonekaboni, Stefanie Jegelka, Tommi Jaakkola, Caroline Uhler

Multimodal representation learning seeks to relate and decompose information inherent in multiple modalities.

1 code implementation • 3 Jun 2024 • Kirankumar Shiragur, JiaQi Zhang, Caroline Uhler

We show that it is possible to learn a coarser representation of the hidden causal graph with a polynomial number of tests.

no code implementations • 29 May 2024 • Bijan Mazaheri, Chandler Squires, Caroline Uhler

A mixture model consists of a latent class that exerts a discrete signal on the observed data.

no code implementations • 25 Apr 2024 • Thomas Gaudelet, Alice Del Vecchio, Eli M Carrami, Juliana Cudini, Chantriolnt-Andreas Kapourani, Caroline Uhler, Lindsay Edwards

Interventions play a pivotal role in the study of complex biological systems.

no code implementations • 9 Mar 2024 • JiaQi Zhang, Kirankumar Shiragur, Caroline Uhler

While learning involves the task of recovering the Markov equivalence class (MEC) of the underlying causal graph from observational data, the testing counterpart addresses the following critical question: Given a specific MEC and observational data from some causal graph, can we determine if the data-generating causal graph belongs to the given MEC?
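A small illustrative sketch (not the paper's testing procedure) of the underlying equivalence notion: by the Verma–Pearl characterization, two DAGs lie in the same MEC exactly when they share the same skeleton and the same v-structures. Graphs below are hypothetical toy examples encoded as parent dictionaries.

```python
# Two DAGs are Markov equivalent iff they have the same skeleton and the
# same v-structures (Verma & Pearl, 1990). DAGs are {node: set(parents)}.

def skeleton(dag):
    """Undirected edge set of a DAG."""
    return {frozenset((p, c)) for c, ps in dag.items() for p in ps}

def v_structures(dag):
    """Colliders a -> c <- b where a and b are non-adjacent."""
    skel = skeleton(dag)
    vs = set()
    for c, ps in dag.items():
        for a in ps:
            for b in ps:
                if a < b and frozenset((a, b)) not in skel:
                    vs.add((a, c, b))
    return vs

def same_mec(d1, d2):
    return skeleton(d1) == skeleton(d2) and v_structures(d1) == v_structures(d2)

# X -> Y -> Z and X <- Y <- Z are equivalent; the collider X -> Y <- Z is not.
chain1 = {"X": set(), "Y": {"X"}, "Z": {"Y"}}
chain2 = {"X": {"Y"}, "Y": {"Z"}, "Z": set()}
collider = {"X": set(), "Y": {"X", "Z"}, "Z": set()}
```

The two chains induce the same conditional independences, so no observational data can distinguish them; the collider differs and is distinguishable.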

no code implementations • 22 Feb 2024 • Alvaro Ribot, Chandler Squires, Caroline Uhler

We study the index-only setting, where the actions and contexts are categorical variables with a finite number of possible values.

2 code implementations • 13 Feb 2024 • Davin Choo, Kirankumar Shiragur, Caroline Uhler

Causal graph discovery is a significant problem with applications across various disciplines.

1 code implementation • 1 Dec 2023 • Chenyu Wang, Sharut Gupta, Caroline Uhler, Tommi Jaakkola

High-throughput drug screening -- using cell imaging or gene expression measurements as readouts of drug effect -- is a critical tool in biotechnology to assess and understand the relationship between the chemical structure and biological activity of a drug.

1 code implementation • NeurIPS 2023 • Kirankumar Shiragur, JiaQi Zhang, Caroline Uhler

In our work, we focus on two such well-motivated problems: subset search and causal matching.

1 code implementation • NeurIPS 2023 • JiaQi Zhang, Chandler Squires, Kristjan Greenewald, Akash Srivastava, Karthikeyan Shanmugam, Caroline Uhler

Causal disentanglement aims to uncover a representation of data using latent variables that are interrelated through a causal model.

1 code implementation • NeurIPS 2023 • Wengong Jin, Siranush Sarkizova, Xun Chen, Nir Hacohen, Caroline Uhler

Specifically, we train an energy-based model on a set of unlabelled protein-ligand complexes using SE(3) denoising score matching and interpret its log-likelihood as binding affinity.
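A minimal sketch of plain denoising score matching on a toy 1-D Gaussian, assuming a linear score model s(x) = theta * x; this illustrates only the training objective, not the paper's SE(3)-equivariant model or its protein-ligand setting.

```python
# Denoising score matching: perturb data with Gaussian noise and regress the
# model's score onto the score of the perturbation kernel, -(noise)/sigma^2.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=10_000)   # toy samples from N(0, 1)
sigma = 0.5                                # perturbation noise scale
noise = rng.normal(0.0, sigma, size=data.shape)
x_noisy = data + noise
target = -noise / sigma**2                 # score of the Gaussian kernel

def dsm_loss(theta):
    """DSM loss for the linear score model s(x) = theta * x."""
    return np.mean((theta * x_noisy - target) ** 2)

# The minimizer approximates the score slope of the smoothed density
# N(0, 1 + sigma^2), i.e. theta* = -1 / (1 + sigma^2) = -0.8.
thetas = np.linspace(-2.0, 1.0, 301)
best = min(thetas, key=dsm_loss)
```

The recovered slope matches the score of the noise-smoothed density, which is why the fitted model's log-likelihood can serve as a learned scoring function.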

1 code implementation • 29 Nov 2022 • Chandler Squires, Anna Seigal, Salil Bhate, Caroline Uhler

A representation is identifiable if both the latent model and the transformation from latent to observed variables are unique.

no code implementations • 1 Nov 2022 • Adityanarayanan Radhakrishnan, Max Ruiz Luyten, Neha Prasad, Caroline Uhler

In this work, we propose a transfer learning framework for kernel methods by projecting and translating the source model to the target task.

1 code implementation • 10 Sep 2022 • JiaQi Zhang, Louis Cammarata, Chandler Squires, Themistoklis P. Sapsis, Caroline Uhler

Here, we develop a causal active learning strategy to identify interventions that are optimal, as measured by the discrepancy between the post-interventional mean of the distribution and a desired target mean.
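An illustrative sketch of the discrepancy being minimized, assuming a toy linear SEM with shift interventions; the matrix, target mean, and candidate set below are hypothetical, and the paper's actual acquisition strategy is not reproduced here.

```python
# In a linear SEM x = B x + e (zero-mean noise), a mean-shift intervention a
# moves the mean to (I - B)^{-1} a; we score candidate shifts by their
# distance to a desired target mean.
import numpy as np

B = np.array([[0.0, 0.0, 0.0],
              [0.8, 0.0, 0.0],
              [0.0, 0.5, 0.0]])            # chain DAG: x0 -> x1 -> x2
target = np.array([1.0, 1.3, 0.65])        # desired post-interventional mean

M = np.linalg.inv(np.eye(3) - B)           # maps shifts to induced means

def discrepancy(shift):
    return np.linalg.norm(M @ shift - target)

candidates = [np.eye(3)[i] for i in range(3)]   # unit shift on each node
best = min(candidates, key=discrepancy)
```

Here shifting the root node propagates downstream and lands closest to the target, so the root intervention wins.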

no code implementations • 2 Jun 2022 • Chandler Squires, Caroline Uhler

In this review, we discuss approaches for learning causal structure from data, also called causal discovery.

no code implementations • 29 Apr 2022 • Adityanarayanan Radhakrishnan, Mikhail Belkin, Caroline Uhler

In this work, we identify and construct an explicit set of neural network classifiers that achieve optimality.

no code implementations • 30 Dec 2021 • Adityanarayanan Radhakrishnan, Mikhail Belkin, Caroline Uhler

Establishing a fast rate of convergence for optimization methods is crucial to their applicability in practice.

1 code implementation • 31 Jul 2021 • Adityanarayanan Radhakrishnan, George Stefanakis, Mikhail Belkin, Caroline Uhler

Remarkably, taking the width of a neural network to infinity allows for improved computational performance.

1 code implementation • NeurIPS 2021 • JiaQi Zhang, Chandler Squires, Caroline Uhler

In particular, we show that our strategies may require exponentially fewer interventions than the previously considered approaches, which optimize for structure learning in the underlying causal graph.

no code implementations • 29 Jun 2021 • Saachi Jain, Adityanarayanan Radhakrishnan, Caroline Uhler

Aligned latent spaces, where meaningful semantic shifts in the input space correspond to a translation in the embedding space, play an important role in the success of downstream tasks such as unsupervised clustering and data imputation.

no code implementations • CVPR 2021 • Karren Yang, Samuel Goldman, Wengong Jin, Alex X. Lu, Regina Barzilay, Tommi Jaakkola, Caroline Uhler

In this paper, we aim to synthesize cell microscopy images under different molecular interventions, motivated by practical applications to drug development.

1 code implementation • NeurIPS 2021 • Scott Sussex, Andreas Krause, Caroline Uhler

Causal structure learning is a key problem in many domains.

1 code implementation • 13 Jan 2021 • Anastasiya Belyaeva, Kaie Kubjas, Lawrence J. Sun, Caroline Uhler

A standard approach is to transform the contact frequencies into noisy distance measurements and then apply semidefinite programming (SDP) formulations to obtain the 3D configuration.
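A minimal sketch of the preprocessing step, assuming the commonly used power-law conversion d_ij ≈ c_ij^(-1/alpha) from contact frequency to distance; the contact matrix is a toy example and the SDP reconstruction itself is omitted.

```python
# Convert a symmetric contact-frequency matrix into a distance matrix via
# the power law d = c^(-1/alpha); zero contacts are left as 0 (unobserved).
import numpy as np

contacts = np.array([[0.0, 9.0, 1.0],
                     [9.0, 0.0, 4.0],
                     [1.0, 4.0, 0.0]])
alpha = 2.0

with np.errstate(divide="ignore"):
    dist = np.where(contacts > 0, contacts ** (-1.0 / alpha), 0.0)
```

Higher contact frequencies map to shorter distances (9 contacts give distance 1/3, 1 contact gives distance 1), which is then fed to the SDP as noisy distance data.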

no code implementations • 1 Jan 2021 • Adityanarayanan Radhakrishnan, Neha Prasad, Caroline Uhler

While deep networks have produced state-of-the-art results in several domains from image classification to machine translation, hyper-parameter selection remains a significant computational bottleneck.

no code implementations • 6 Nov 2020 • Chandler Squires, Joshua Amaniampong, Caroline Uhler

We compare our method with $w = 1$ to algorithms for finding sparse elimination orderings of undirected graphs, and show that taking advantage of DAG-specific problem structure leads to a significant improvement in the discovered permutation.

no code implementations • 19 Oct 2020 • Eshaan Nichani, Adityanarayanan Radhakrishnan, Caroline Uhler

We then present a novel linear regression framework for characterizing the impact of depth on test risk, and show that increasing depth leads to a U-shaped test risk for the linear CNTK.

no code implementations • 16 Oct 2020 • Madeline Navarro, Yuhao Wang, Antonio G. Marques, Caroline Uhler, Santiago Segarra

Inferring graph structure from observations on the nodes is an important and popular network science task.

no code implementations • 28 Sep 2020 • Eshaan Nichani, Adityanarayanan Radhakrishnan, Caroline Uhler

Recent work provided an explanation for this phenomenon by introducing the double descent curve, showing that increasing model capacity past the interpolation threshold leads to a decrease in test error.

no code implementations • 28 Sep 2020 • Adityanarayanan Radhakrishnan, Mikhail Belkin, Caroline Uhler

The following questions are fundamental to understanding the properties of over-parameterization in modern machine learning: (1) Under what conditions and at what rate does training converge to a global minimum?

no code implementations • 18 Sep 2020 • Adityanarayanan Radhakrishnan, Mikhail Belkin, Caroline Uhler

GMD subsumes popular first order optimization methods including gradient descent, mirror descent, and preconditioned gradient descent methods such as Adagrad.
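A sketch of one of the subsumed instances, preconditioned gradient descent x_{t+1} = x_t - lr * P^{-1} grad f(x_t), on a toy quadratic; with P = I this reduces to plain gradient descent. The quadratic and step sizes are illustrative choices, not drawn from the paper.

```python
# Preconditioned gradient descent on f(x) = 0.5 x^T A x, whose gradient is A x.
import numpy as np

A = np.diag([1.0, 100.0])          # ill-conditioned quadratic
grad = lambda x: A @ x

def precond_gd(P, lr, steps):
    x = np.array([1.0, 1.0])
    P_inv = np.linalg.inv(P)
    for _ in range(steps):
        x = x - lr * (P_inv @ grad(x))
    return x

x_plain = precond_gd(np.eye(2), lr=0.009, steps=50)  # step limited by curvature 100
x_precond = precond_gd(A, lr=1.0, steps=1)           # P = A: Newton-like, one step
```

The identity preconditioner crawls along the flat direction, while preconditioning by A rescales the geometry and reaches the minimum in a single step; mirror descent and Adagrad arise from other choices of the mirror map.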

1 code implementation • 23 Jul 2020 • Neha Prasad, Karren Yang, Caroline Uhler

In this paper, we present Super-OT, a novel approach to computational lineage tracing that combines a supervised learning framework with optimal transport based on Generative Adversarial Networks (GANs).

1 code implementation • 24 Jun 2020 • Pantelis R. Vlachas, Georgios Arampatzis, Caroline Uhler, Petros Koumoutsakos

Here we present a novel systematic framework that bridges large scale simulations and reduced order models to Learn the Effective Dynamics (LED) of diverse complex systems.

1 code implementation • 15 Jun 2020 • Karren Yang, Samuel Goldman, Wengong Jin, Alex Lu, Regina Barzilay, Tommi Jaakkola, Caroline Uhler

In this paper, we aim to synthesize cell microscopy images under different molecular interventions, motivated by practical applications to drug development.

no code implementations • 13 Mar 2020 • Adityanarayanan Radhakrishnan, Eshaan Nichani, Daniel Bernstein, Caroline Uhler

We define alignment for fully connected networks with multidimensional outputs and show that it is a natural extension of alignment in networks with 1-dimensional outputs as defined by Ji and Telgarsky, 2018.

no code implementations • ICML 2020 • Basil Saeed, Snigdha Panigrahi, Caroline Uhler

We consider distributions arising from a mixture of causal models, where each model is represented by a directed acyclic graph (DAG).

no code implementations • 20 Oct 2019 • Daniel Irving Bernstein, Basil Saeed, Chandler Squires, Caroline Uhler

We consider the task of learning a causal graph in the presence of latent confounders given i.i.d. samples from the model.

1 code implementation • 26 Sep 2019 • Adityanarayanan Radhakrishnan, Mikhail Belkin, Caroline Uhler

Identifying computational mechanisms for memorization and retrieval of data is a long-standing problem at the intersection of machine learning and neuroscience.

no code implementations • 25 Sep 2019 • Adityanarayanan Radhakrishnan, Mikhail Belkin, Caroline Uhler

Identifying computational mechanisms for memorization and retrieval is a long-standing problem at the intersection of machine learning and neuroscience.

no code implementations • ICLR 2019 • Adityanarayanan Radhakrishnan, Caroline Uhler, Mikhail Belkin

In this paper, we link memorization of images in deep convolutional autoencoders to downsampling through strided convolution.

no code implementations • 5 Mar 2019 • Dmitriy Katz, Karthikeyan Shanmugam, Chandler Squires, Caroline Uhler

For constant density, we show that the expected $\log$ observational MEC size asymptotically (in the number of vertices) approaches a constant.

3 code implementations • 27 Feb 2019 • Raj Agrawal, Chandler Squires, Karren Yang, Karthik Shanmugam, Caroline Uhler

Determining the causal structure of a set of variables is critical for both scientific inquiry and decision-making.


no code implementations • 9 Feb 2019 • Karren D. Yang, Caroline Uhler

Multi-domain translation seeks to learn a probabilistic coupling between marginal distributions that reflects the correspondence between different domains.

1 code implementation • ICLR 2019 • Karren D. Yang, Caroline Uhler

Generative adversarial networks (GANs) are an expressive class of neural generative models with tremendous success in modeling high-dimensional continuous measures.

no code implementations • ICML Workshop Deep_Phenomen 2019 • Adityanarayanan Radhakrishnan, Karren Yang, Mikhail Belkin, Caroline Uhler

The ability of deep neural networks to generalize well in the overparameterized regime has become a subject of significant research interest.

no code implementations • ICML 2018 • Karren Yang, Abigail Katcoff, Caroline Uhler

We consider the problem of learning causal DAGs in the setting where both observational and interventional data is available.

1 code implementation • ICML 2018 • Raj Agrawal, Tamara Broderick, Caroline Uhler

Learning a Bayesian network (BN) from data can be useful for decision-making or discovering causal relationships.

1 code implementation • NeurIPS 2018 • Yuhao Wang, Chandler Squires, Anastasiya Belyaeva, Caroline Uhler

We consider the problem of estimating the differences between two causal directed acyclic graph (DAG) models given i.i.d. samples from each model.


no code implementations • NeurIPS 2017 • Yuhao Wang, Liam Solus, Karren Yang, Caroline Uhler

Learning directed acyclic graphs using both observational and interventional data is now a fundamentally important problem due to recent technological developments in genomics that generate such single-cell gene expression data at a very large scale.

no code implementations • 23 May 2017 • Adityanarayanan Radhakrishnan, Charles Durham, Ali Soylemezoglu, Caroline Uhler

Understanding how a complex machine learning model makes a classification decision is essential for its acceptance in sensitive areas such as health care.

no code implementations • 30 Jul 2014 • Fei Yu, Michal Rybar, Caroline Uhler, Stephen E. Fienberg

Following the publication of an attack on genome-wide association studies (GWAS) data proposed by Homer et al., considerable attention has been given to developing methods for releasing GWAS data in a privacy-preserving way.

no code implementations • 1 Jul 2013 • Garvesh Raskutti, Caroline Uhler

However, there is only limited work on consistency guarantees for score-based and hybrid algorithms and it has been unclear whether consistency guarantees can be proven under weaker conditions than the faithfulness assumption.
