Search Results for author: Richard Wang

Found 7 papers, 1 paper with code

SunCast: Solar Irradiance Nowcasting from Geosynchronous Satellite Data

no code implementations • 17 Jan 2022 • Dhileeban Kumaresan, Richard Wang, Ernesto Martinez, Richard Cziva, Alberto Todeschini, Colorado J Reed, Hossein Vahabi

Accurate short-term PV power prediction enables operators to maximize the amount of power obtained from PV panels and safely reduce the reserve energy needed from fossil fuel sources.

MedAug: Contrastive learning leveraging patient metadata improves representations for chest X-ray interpretation

no code implementations • 21 Feb 2021 • Yen Nhi Truong Vu, Richard Wang, Niranjan Balachandar, Can Liu, Andrew Y. Ng, Pranav Rajpurkar

Our controlled experiments show that the keys to improving downstream performance on disease classification are (1) using patient metadata to appropriately create positive pairs from different images with the same underlying pathologies, and (2) maximizing the number of different images used in query pairing.

Contrastive Learning
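The metadata-based pairing strategy described in the abstract can be sketched as follows. This is a toy illustration, not code from MedAug: the record fields and the helper name are assumptions made for the example.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical records: (image_id, patient_id, study_id).
records = [
    ("img1", "p1", "s1"),
    ("img2", "p1", "s2"),
    ("img3", "p2", "s1"),
    ("img4", "p1", "s3"),
]

def positive_pairs_by_patient(records):
    """Group images by patient and pair distinct images of the same
    patient, so each positive pair shares the same underlying anatomy
    (and, typically, the same pathologies)."""
    by_patient = defaultdict(list)
    for image_id, patient_id, _study_id in records:
        by_patient[patient_id].append(image_id)
    pairs = []
    for images in by_patient.values():
        # All unordered pairs of distinct images from one patient.
        pairs.extend(combinations(images, 2))
    return pairs
```

Pairing across different images of the same patient (rather than two augmentations of one image) is what maximizes the number of distinct images used in query pairing, per point (2) of the abstract.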

Adversarial Attacks on Binary Image Recognition Systems

no code implementations • 22 Oct 2020 • Eric Balkanski, Harrison Chase, Kojin Oshiba, Alexander Rilee, Yaron Singer, Richard Wang

Nevertheless, we generalize SCAR to design attacks that fool state-of-the-art check processing systems using unnoticeable perturbations that lead to misclassification of deposit amounts.

Image Classification · License Plate Recognition
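A minimal toy illustration of a sparse attack on a binary-image classifier, in the spirit of the SCAR-style attacks the abstract mentions: greedily flip whichever single pixel most reduces the classifier's confidence in the original label, until the prediction changes. The greedy loop, the `classify` interface, and all names here are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def greedy_pixel_flip_attack(image, classify, max_flips=10):
    """Flip at most `max_flips` pixels of a binary image to force a
    misclassification. `classify` returns (label, confidence), where
    confidence is the classifier's score for its predicted label."""
    x = image.copy()
    orig_label, _ = classify(x)
    for _ in range(max_flips):
        best_pixel, best_conf = None, None
        for idx in np.ndindex(x.shape):
            x[idx] ^= 1                      # tentatively flip one pixel
            label, conf = classify(x)
            if label != orig_label:
                return x                     # misclassification achieved
            if best_conf is None or conf < best_conf:
                best_pixel, best_conf = idx, conf
            x[idx] ^= 1                      # undo the tentative flip
        x[best_pixel] ^= 1                   # commit the best single flip
    return x
```

Because only a handful of pixels change, the perturbation can be visually unnoticeable while still changing the predicted class, which is the failure mode the paper demonstrates on check-processing systems.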

AdaLead: A simple and robust adaptive greedy search algorithm for sequence design

1 code implementation • 5 Oct 2020 • Sam Sinai, Richard Wang, Alexander Whatley, Stewart Slocum, Elina Locane, Eric D. Kelsic

In this work, we implement an open-source Fitness Landscape EXploration Sandbox (FLEXS: github.com/samsinai/FLEXS) environment to test and evaluate these algorithms based on their optimality, consistency, and robustness.

Bayesian Optimization
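A toy sketch of an adaptive greedy search over sequences, in the spirit of the algorithm the title describes: keep a pool of candidates whose fitness is within a threshold of the best seen, and mutate from that pool each round. The function names, single-site mutation scheme, and threshold rule here are simplified assumptions; the actual algorithm and benchmarks live in the FLEXS repository.

```python
import random

def greedy_sequence_search(fitness, seed, alphabet="ACGT",
                           rounds=10, batch=20, threshold=0.05, rng=None):
    """Toy adaptive greedy search: each round, mutate sequences drawn
    from the surviving pool, then keep candidates whose fitness is
    within `threshold` (relative) of the best fitness seen so far."""
    rng = rng or random.Random(0)

    def mutate(seq):
        # Single-site substitution at a random position.
        i = rng.randrange(len(seq))
        return seq[:i] + rng.choice(alphabet) + seq[i + 1:]

    pool = [seed]
    best, best_f = seed, fitness(seed)
    for _ in range(rounds):
        candidates = [mutate(rng.choice(pool)) for _ in range(batch)]
        scored = [(fitness(s), s) for s in candidates]
        f_max = max(best_f, max(f for f, _ in scored))
        # Adaptive pool: survivors near the current optimum.
        pool = [s for f, s in scored if f >= (1 - threshold) * f_max] or [best]
        top_f, top_s = max(scored)
        if top_f > best_f:
            best, best_f = top_s, top_f
    return best, best_f
```

The threshold makes the search greedier on sharp landscapes and more exploratory on flat ones, which is the kind of robustness the benchmark in the paper is designed to measure.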

Adversarial NLI for Factual Correctness in Text Summarisation Models

no code implementations • 24 May 2020 • Mario Barrantes, Benedikt Herudek, Richard Wang

We show that the Transformer models fine-tuned on the new dataset achieve significantly higher accuracy and have the potential of selecting a coherent summary.
