Search Results for author: Miriam Cha

Found 12 papers, 3 papers with code

Bidirectional Captioning for Clinically Accurate and Interpretable Models

no code implementations • 30 Oct 2023 • Keegan Quigley, Miriam Cha, Josh Barua, Geeticka Chauhan, Seth Berkowitz, Steven Horng, Polina Golland

Vision-language pretraining has been shown to produce high-quality visual encoders which transfer efficiently to downstream computer vision tasks.

Tasks: Contrastive Learning, Image Captioning

MultiEarth 2023 -- Multimodal Learning for Earth and Environment Workshop and Challenge

1 code implementation • 7 Jun 2023 • Miriam Cha, Gregory Angelides, Mark Hamilton, Andy Soszynski, Brandon Swenson, Nathaniel Maidel, Phillip Isola, Taylor Perron, Bill Freeman

The Multimodal Learning for Earth and Environment Workshop (MultiEarth 2023) is the second annual CVPR workshop aimed at the monitoring and analysis of the health of Earth ecosystems by leveraging the vast amount of remote sensing data that is continuously being collected.

Tasks: Representation Learning

RadTex: Learning Efficient Radiograph Representations from Text Reports

no code implementations • 5 Aug 2022 • Keegan Quigley, Miriam Cha, Ruizhi Liao, Geeticka Chauhan, Steven Horng, Seth Berkowitz, Polina Golland

In this paper, we build a data-efficient learning framework that utilizes radiology reports to improve medical image classification performance with limited labeled data (fewer than 1000 examples).

Tasks: Domain Adaptation, Image Captioning +2

SAR-to-EO Image Translation with Multi-Conditional Adversarial Networks

no code implementations • 26 Jul 2022 • Armando Cabrera, Miriam Cha, Prafull Sharma, Michael Newey

This paper explores the use of multi-conditional adversarial networks for SAR-to-EO image translation.

Tasks: Translation

MultiEarth 2022 -- Multimodal Learning for Earth and Environment Workshop and Challenge

no code implementations • 15 Apr 2022 • Miriam Cha, Kuan Wei Huang, Morgan Schmidt, Gregory Angelides, Mark Hamilton, Sam Goldberg, Armando Cabrera, Phillip Isola, Taylor Perron, Bill Freeman, Yen-Chen Lin, Brandon Swenson, Jean Piou

The Multimodal Learning for Earth and Environment Challenge (MultiEarth 2022) will be the first competition aimed at the monitoring and analysis of deforestation in the Amazon rainforest at any time and in any weather conditions.

Tasks: Image-to-Image Translation, Matrix Completion +2

Adversarial Learning of Semantic Relevance in Text to Image Synthesis

no code implementations • 12 Dec 2018 • Miriam Cha, Youngjune L. Gwon, H. T. Kung

Instead of selecting random training examples, we perform negative sampling based on the semantic distance from a positive example in the class.

Tasks: Image Generation, MS-SSIM +1
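The abstract's negative-sampling idea can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: the embedding values, labels, and function names below are invented for the example, and cosine distance stands in for whatever semantic distance the authors use.

```python
# Hedged sketch: instead of drawing a random negative, pick the
# differently-labeled candidate whose embedding lies closest to the
# positive example, i.e. the "hardest" negative by semantic distance.
import math

def cosine_distance(u, v):
    # 1 - cosine similarity; smaller means semantically closer.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def sample_negative(positive_emb, positive_label, candidates):
    """candidates: list of (embedding, label) pairs.
    Returns the nearest candidate with a different label."""
    pool = [(emb, lab) for emb, lab in candidates if lab != positive_label]
    return min(pool, key=lambda c: cosine_distance(positive_emb, c[0]))
```

For instance, given a positive embedding [1, 0] labeled "bird", a "dog" candidate at [0.9, 0.1] is chosen over a "cat" at [0, 1], because it is the semantically closer, harder negative.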

Language Modeling by Clustering with Word Embeddings for Text Readability Assessment

no code implementations • 5 Sep 2017 • Miriam Cha, Youngjune Gwon, H. T. Kung

We argue that clustering with word embeddings in the metric space should yield feature representations in a higher semantic space appropriate for text regression.

Tasks: Clustering, Language Modelling +2
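A minimal sketch of the clustering-based features the abstract alludes to, under stated assumptions: word embeddings are precomputed and cluster centroids are given (in practice they would come from k-means over the vocabulary). A document becomes a histogram over its words' cluster assignments, a feature vector suitable for text-readability regression. All names and values are illustrative, not the paper's implementation.

```python
# Hedged sketch: represent a document by the normalized histogram of
# cluster assignments of its word embeddings.
def nearest_cluster(vec, centroids):
    # Index of the centroid closest to vec in squared Euclidean distance.
    def sqdist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(range(len(centroids)), key=lambda k: sqdist(vec, centroids[k]))

def doc_features(words, embeddings, centroids):
    """embeddings: dict word -> vector. Returns a normalized
    cluster-occupancy histogram; out-of-vocabulary words are skipped."""
    hist = [0.0] * len(centroids)
    for w in words:
        if w in embeddings:
            hist[nearest_cluster(embeddings[w], centroids)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]
```

The resulting fixed-length vector can be fed to any off-the-shelf regressor to predict a readability score.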

Adversarial nets with perceptual losses for text-to-image synthesis

no code implementations • 30 Aug 2017 • Miriam Cha, Youngjune Gwon, H. T. Kung

Recent approaches in generative adversarial networks (GANs) can automatically synthesize realistic images from descriptive text.

Tasks: Descriptive, Image Generation

Multimodal Sparse Coding for Event Detection

no code implementations • 17 May 2016 • Youngjune Gwon, William Campbell, Kevin Brady, Douglas Sturim, Miriam Cha, H. T. Kung

Unsupervised feature learning methods have proven effective for classification tasks based on a single modality.

Tasks: Classification, Event Detection +1

Multimodal sparse representation learning and applications

no code implementations • 19 Nov 2015 • Miriam Cha, Youngjune Gwon, H. T. Kung

In this paper, we present a multimodal framework for learning sparse representations that can capture semantic correlation between modalities.

Tasks: Classification, Dictionary Learning +7
