Search Results for author: Karren Yang

Found 10 papers, 4 papers with code

Defending Multimodal Fusion Models against Single-Source Adversaries

no code implementations CVPR 2021 Karren Yang, Wan-Yi Lin, Manash Barman, Filipe Condessa, Zico Kolter

Beyond achieving high performance across many vision tasks, multimodal models are expected to be robust to single-source faults due to the availability of redundant information between modalities.

Action Recognition, Object Detection +2

Audio-Visual Speech Codecs: Rethinking Audio-Visual Speech Enhancement by Re-Synthesis

1 code implementation CVPR 2022 Karren Yang, Dejan Markovic, Steven Krenn, Vasu Agrawal, Alexander Richard

Since facial actions such as lip movements contain significant information about speech content, it is not surprising that audio-visual speech enhancement methods are more accurate than their audio-only counterparts.

Speech Enhancement

Mol2Image: Improved Conditional Flow Models for Molecule to Image Synthesis

no code implementations CVPR 2021 Karren Yang, Samuel Goldman, Wengong Jin, Alex X. Lu, Regina Barzilay, Tommi Jaakkola, Caroline Uhler

In this paper, we aim to synthesize cell microscopy images under different molecular interventions, motivated by practical applications to drug development.

Contrastive Learning, Image Generation

Optimal Transport using GANs for Lineage Tracing

1 code implementation 23 Jul 2020 Neha Prasad, Karren Yang, Caroline Uhler

In this paper, we present Super-OT, a novel approach to computational lineage tracing that combines a supervised learning framework with optimal transport based on Generative Adversarial Networks (GANs).
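The listing carries only the abstract's first sentence, so the sketch below is a rough, hedged illustration of the general idea of GAN-based optimal transport between two cell populations, not Super-OT's actual architecture or losses. The feature size, network widths, penalty weight, and placeholder data are all illustrative assumptions.

```python
# Hedged sketch: a GAN that learns a transport map between two cell populations.
# This is NOT the Super-OT implementation; dimensions, losses, and the transport
# penalty below are illustrative assumptions only.
import torch
import torch.nn as nn

dim = 32                                    # assumed gene-expression feature size
G = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))   # transport map
D = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))     # critic on later-time-point cells
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

day1 = torch.randn(512, dim)                # placeholder early-time-point cells
day2 = torch.randn(512, dim) + 1.0          # placeholder later-time-point cells

for step in range(200):
    # Discriminator: real day-2 cells vs transported day-1 cells.
    fake = G(day1).detach()
    d_loss = bce(D(day2), torch.ones(512, 1)) + bce(D(fake), torch.zeros(512, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the critic while keeping the transport cost small.
    moved = G(day1)
    transport_cost = ((moved - day1) ** 2).sum(dim=1).mean()   # OT-style squared-distance penalty
    g_loss = bce(D(moved), torch.ones(512, 1)) + 0.1 * transport_cost
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The squared-distance penalty is what gives the mapping its optimal-transport flavor: among maps that match the target population, it prefers one that moves each cell as little as possible.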

Improved Conditional Flow Models for Molecule to Image Synthesis

1 code implementation 15 Jun 2020 Karren Yang, Samuel Goldman, Wengong Jin, Alex Lu, Regina Barzilay, Tommi Jaakkola, Caroline Uhler

In this paper, we aim to synthesize cell microscopy images under different molecular interventions, motivated by practical applications to drug development.

Contrastive Learning, Image Generation

Telling Left from Right: Learning Spatial Correspondence of Sight and Sound

no code implementations CVPR 2020 Karren Yang, Bryan Russell, Justin Salamon

Self-supervised audio-visual learning aims to capture useful representations of video by leveraging correspondences between visual and audio inputs.

Audio-Visual Learning
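Since the title points to a spatial (left/right) correspondence pretext task, the sketch below illustrates one plausible formulation: randomly swap the stereo audio channels and train a classifier on fused audio-visual features to detect the swap. The encoders, feature sizes, and training loop are placeholder assumptions, not the paper's architecture.

```python
# Hedged sketch of a left/right spatial-correspondence pretext task: swap the
# stereo channels for half the batch and predict whether a swap occurred.
# Encoders and dimensions are toy placeholders, not the paper's method.
import torch
import torch.nn as nn

video_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 16 * 16, 128), nn.ReLU())  # toy video encoder
audio_enc = nn.Sequential(nn.Flatten(), nn.Linear(2 * 1024, 128), nn.ReLU())     # toy stereo-audio encoder
head = nn.Linear(256, 1)                                                          # "were the channels swapped?"
params = list(video_enc.parameters()) + list(audio_enc.parameters()) + list(head.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    video = torch.randn(8, 3, 16, 16)       # placeholder video frames
    audio = torch.randn(8, 2, 1024)         # placeholder stereo waveform chunks
    swap = torch.rand(8) < 0.5              # flip left/right for half the batch
    audio[swap] = audio[swap].flip(dims=[1])

    logits = head(torch.cat([video_enc(video), audio_enc(audio)], dim=1))
    loss = bce(logits.squeeze(1), swap.float())
    opt.zero_grad(); loss.backward(); opt.step()
```

Solving this binary task requires the encoders to align what is seen on each side of the frame with what is heard in each channel, which is the kind of spatial audio-visual correspondence the self-supervised representation is meant to capture.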

ABCD-Strategy: Budgeted Experimental Design for Targeted Causal Structure Discovery

2 code implementations 27 Feb 2019 Raj Agrawal, Chandler Squires, Karren Yang, Karthik Shanmugam, Caroline Uhler

Determining the causal structure of a set of variables is critical for both scientific inquiry and decision-making.

Methodology

Memorization in Overparameterized Autoencoders

no code implementations ICML Workshop Deep Phenomena 2019 Adityanarayanan Radhakrishnan, Karren Yang, Mikhail Belkin, Caroline Uhler

The ability of deep neural networks to generalize well in the overparameterized regime has become a subject of significant research interest.

Inductive Bias
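As a hedged illustration of the memorization phenomenon the title refers to (not the paper's experimental protocol), one can train a heavily overparameterized autoencoder on a handful of points until the loss is near zero and then probe it with unseen inputs; under memorization, the outputs land close to the training examples rather than near the probe inputs. Sizes and training details below are assumptions.

```python
# Hedged illustration of memorization in an overparameterized autoencoder:
# fit a wide network on 5 points, then check how close its outputs on random
# inputs are to the training examples. Not the paper's experimental setup.
import torch
import torch.nn as nn

torch.manual_seed(0)
train_x = torch.randn(5, 20)                # only 5 training points in 20 dimensions
ae = nn.Sequential(nn.Linear(20, 512), nn.ReLU(),
                   nn.Linear(512, 512), nn.ReLU(),
                   nn.Linear(512, 20))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

for step in range(3000):                    # drive the training loss to (near) zero
    loss = ((ae(train_x) - train_x) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

probe = torch.randn(100, 20)                # inputs never seen during training
with torch.no_grad():
    out = ae(probe)
    dist_to_train = torch.cdist(out, train_x).min(dim=1).values
    dist_to_probe = (out - probe).norm(dim=1)
# If memorization occurs, outputs sit much closer to the training points than to the inputs.
print(dist_to_train.mean().item(), dist_to_probe.mean().item())
```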

Characterizing and Learning Equivalence Classes of Causal DAGs under Interventions

no code implementations ICML 2018 Karren Yang, Abigail Katcoff, Caroline Uhler

We consider the problem of learning causal DAGs in the setting where both observational and interventional data are available.
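As a minimal worked example of why interventional data shrinks the equivalence class (this is not the paper's algorithm), two Gaussian models X -> Y and Y -> X can be observationally indistinguishable, yet an intervention on X separates them: Y keeps tracking X only when the edge actually points from X to Y.

```python
# Worked example: observationally equivalent DAGs X -> Y and Y -> X are
# distinguished by an intervention do(X). Parameters are chosen so the two
# models have identical observational covariance.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def simulate(model, intervene_x=False):
    """Simulate (X, Y) under 'x->y' or 'y->x'; optionally apply do(X)."""
    if model == "x->y":
        x = rng.normal(size=n)
        y = x + rng.normal(size=n)
    else:  # y->x
        y = rng.normal(size=n) * np.sqrt(2)
        x = 0.5 * y + rng.normal(size=n) * np.sqrt(0.5)
    if intervene_x:
        x = rng.normal(size=n)              # do(X): X is set by the experimenter, cutting its incoming edges
        if model == "x->y":
            y = x + rng.normal(size=n)      # Y still responds, because X -> Y
        # under y->x, Y is unaffected by the intervention on X
    return x, y

for model in ("x->y", "y->x"):
    obs = np.corrcoef(*simulate(model))[0, 1]
    do = np.corrcoef(*simulate(model, intervene_x=True))[0, 1]
    print(f"{model}: obs corr={obs:.2f}, do(X) corr={do:.2f}")
```

Both models give an observational correlation of about 0.71, but after intervening on X the correlation stays near 0.71 only under X -> Y and vanishes under Y -> X, which is the basic reason interventional data yields a finer equivalence class than observational data alone.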

Permutation-based Causal Inference Algorithms with Interventions

no code implementations NeurIPS 2017 Yuhao Wang, Liam Solus, Karren Yang, Caroline Uhler

Learning directed acyclic graphs using both observational and interventional data is now a fundamentally important problem due to recent technological developments in genomics that generate such single-cell gene expression data at a very large scale.

Causal Inference
