Search Results for author: Richard Chen

Found 12 papers, 3 papers with code

Modeling Dense Multimodal Interactions Between Biological Pathways and Histology for Survival Prediction

2 code implementations • 13 Apr 2023 • Guillaume Jaume, Anurag Vaidya, Richard Chen, Drew Williamson, Paul Liang, Faisal Mahmood

We propose fusing both modalities using a memory-efficient multimodal Transformer that can model interactions between pathway and histology patch tokens.

Survival Prediction • whole slide images
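
A minimal sketch of the fusion idea described in the excerpt, assuming a PyTorch setting; the module names, dimensions, and mean-pooling head are illustrative assumptions, not the paper's released implementation. Using only cross-attention between the two token sets (rather than full self-attention over all histology patches) scales linearly in the number of patch tokens, which is one way such a design can stay memory-efficient.

```python
# Illustrative sketch (not the paper's official code): cross-attention
# fusion between pathway tokens and histology patch tokens.
import torch
import torch.nn as nn

class PathwayHistologyFusion(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        # Pathway tokens attend to histology patches, and vice versa.
        self.path_to_hist = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.hist_to_path = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_p = nn.LayerNorm(dim)
        self.norm_h = nn.LayerNorm(dim)

    def forward(self, pathway_tokens, patch_tokens):
        # pathway_tokens: (B, N_pathways, dim); patch_tokens: (B, N_patches, dim)
        p, _ = self.path_to_hist(pathway_tokens, patch_tokens, patch_tokens)
        h, _ = self.hist_to_path(patch_tokens, pathway_tokens, pathway_tokens)
        p = self.norm_p(pathway_tokens + p)
        h = self.norm_h(patch_tokens + h)
        # Pool both streams into one joint embedding for a survival head.
        return torch.cat([p.mean(dim=1), h.mean(dim=1)], dim=-1)
```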

Quantifying & Modeling Multimodal Interactions: An Information Decomposition Framework

1 code implementation • NeurIPS 2023 • Paul Pu Liang, Yun Cheng, Xiang Fan, Chun Kai Ling, Suzanne Nie, Richard Chen, Zihao Deng, Nicholas Allen, Randy Auerbach, Faisal Mahmood, Ruslan Salakhutdinov, Louis-Philippe Morency

The recent explosion of interest in multimodal applications has resulted in a wide selection of datasets and methods for representing and integrating information from different modalities.

Model Selection

Fuzzy Dark Matter and the 21cm Power Spectrum

no code implementations • 18 Jan 2021 • Dana Jones, Skyler Palatnick, Richard Chen, Angus Beane, Adam Lidz

If FDM constitutes the entirety of the dark matter and the FDM particle mass is 10^-21 eV, HERA can determine the mass to within 20 percent at 2-sigma confidence.

Cosmology and Nongalactic Astrophysics • Computational Physics

COVID-19 in differential diagnosis of online symptom assessments

no code implementations • 7 Aug 2020 • Anitha Kannan, Richard Chen, Vignesh Venkataraman, Geoffrey J. Tso, Xavier Amatriain

Traditional symptom checkers, however, are based on manually curated expert systems that are inflexible and hard to modify, especially in a quickly changing situation like the one we are facing today.

A Domain-agnostic, Noise-resistant, Hardware-efficient Evolutionary Variational Quantum Eigensolver

no code implementations • 21 Oct 2019 • Arthur G. Rattew, Shaohan Hu, Marco Pistoia, Richard Chen, Steve Wood

Variational quantum algorithms have shown promise in numerous fields due to their versatility in solving problems of scientific and commercial interest.

Learning Montezuma's Revenge from a Single Demonstration

no code implementations • 8 Dec 2018 • Tim Salimans, Richard Chen

We propose a new method for learning from a single demonstration to solve hard exploration tasks like the Atari game Montezuma's Revenge.

Montezuma's Revenge • reinforcement-learning +1
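
This work tackles exploration by starting training episodes from states along the demonstration itself, beginning near the end and moving the start point back as the agent improves. A hedged pseudocode sketch of that curriculum follows; `run_episode`, `demo_return_from`, and `env.restore_state` are hypothetical helpers standing in for a full RL training loop, not the authors' released code.

```python
# Hedged pseudocode of the demonstration-reset curriculum (illustrative,
# not the authors' implementation). `env` is assumed to expose
# restore_state(); run_episode and demo_return_from are hypothetical.

def train_from_single_demo(env, agent, demo_states, demo_return_from,
                           run_episode, success_rate=0.2, batch=100):
    start = len(demo_states) - 1          # begin near the end of the demo
    while start > 0:
        successes = 0
        for _ in range(batch):
            env.restore_state(demo_states[start])  # reset into the demo
            ret = run_episode(env, agent)          # act + update (e.g. PPO)
            successes += ret >= demo_return_from(start)
        # Once the agent reliably matches the demo from this state,
        # move the starting point one step closer to the beginning.
        if successes / batch > success_rate:
            start -= 1
```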

Rethinking Monocular Depth Estimation with Adversarial Training

no code implementations • 22 Aug 2018 • Richard Chen, Faisal Mahmood, Alan Yuille, Nicholas J. Durr

Most existing approaches treat depth estimation as a regression problem with a local pixel-wise loss function.

Monocular Depth Estimation
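
To contrast with the local pixel-wise losses the excerpt mentions, here is a hedged sketch of one common way adversarial training is layered on top of an L1 regression loss, so a discriminator can penalize globally implausible depth maps. The discriminator interface and the `adv_weight` value are assumptions, not the paper's exact formulation.

```python
# Illustrative sketch: local L1 regression loss plus an adversarial term.
import torch
import torch.nn.functional as F

def depth_generator_loss(pred_depth, gt_depth, discriminator, adv_weight=0.01):
    pixel_loss = F.l1_loss(pred_depth, gt_depth)   # local, per-pixel term
    logits = discriminator(pred_depth)             # global realism score
    adv_loss = F.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits))           # generator tries to fool D
    return pixel_loss + adv_weight * adv_loss
```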

Deep Learning with Cinematic Rendering: Fine-Tuning Deep Neural Networks Using Photorealistic Medical Images

no code implementations • 22 May 2018 • Faisal Mahmood, Richard Chen, Sandra Sudarsky, Daphne Yu, Nicholas J. Durr

Our experiments demonstrate that: (a) Convolutional Neural Networks (CNNs) trained on synthetic data and fine-tuned on photorealistic cinematically rendered data adapt better to real medical images and demonstrate more robust performance when compared to networks with no fine-tuning, (b) these fine-tuned networks require less training data to converge to an optimal solution, and (c) fine-tuning with data from a variety of photorealistic rendering conditions of the same scene prevents the network from learning patient-specific information and aids in generalizability of the model.

Monocular Depth Estimation
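
A hedged sketch of the two-stage recipe the abstract describes: pretrain on synthetic data, then fine-tune on cinematically rendered images. The optimizer choice, learning rates, and data-loader interface are illustrative assumptions.

```python
# Illustrative two-stage training sketch (not the paper's code):
# stage 1 trains on synthetic data, stage 2 fine-tunes on renders.
import torch

def pretrain_then_finetune(model, synthetic_loader, rendered_loader, loss_fn):
    # Stage 1: train on cheap synthetic data.
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for images, depths in synthetic_loader:
        opt.zero_grad()
        loss_fn(model(images), depths).backward()
        opt.step()
    # Stage 2: fine-tune on photorealistic renders at a lower learning
    # rate, which the paper reports improves transfer to real images.
    opt = torch.optim.Adam(model.parameters(), lr=1e-5)
    for images, depths in rendered_loader:
        opt.zero_grad()
        loss_fn(model(images), depths).backward()
        opt.step()
```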

Unsupervised Reverse Domain Adaptation for Synthetic Medical Images via Adversarial Training

no code implementations • 17 Nov 2017 • Faisal Mahmood, Richard Chen, Nicholas J. Durr

We propose an alternative framework that uses a reverse flow, where adversarial training is used to make real medical images more like synthetic images, and hypothesize that clinically-relevant features can be preserved via self-regularization.

Depth Estimation • Domain Adaptation
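
A minimal sketch of the reverse flow described in the excerpt, assuming a standard GAN setup: a generator pushes real images toward the synthetic domain while an L1 self-regularization term keeps the output close to the input, preserving clinically relevant content. The loss weighting and network interfaces are assumptions, not the paper's exact setup.

```python
# Illustrative generator objective for reverse domain adaptation:
# adversarial term (look synthetic) + L1 self-regularization (stay
# close to the real input). G and D are user-supplied networks.
import torch
import torch.nn.functional as F

def generator_step(G, D, real_images, self_reg_weight=10.0):
    fake_synthetic = G(real_images)
    logits = D(fake_synthetic)
    # Adversarial term: make transformed images pass as synthetic.
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    # Self-regularization: preserve the content of the input image.
    self_reg = F.l1_loss(fake_synthetic, real_images)
    return adv + self_reg_weight * self_reg
```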
