18 Jul 2021 • Liangqiong Qu, Niranjan Balachandar, Daniel L Rubin
In this paper, we investigate the deleterious impact of a taxonomy of data heterogeneity regimes on federated learning methods, including quantity skew, label distribution skew, and imaging acquisition skew.
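Of these regimes, label distribution skew is often simulated in federated learning experiments with Dirichlet partitioning, where a smaller concentration parameter yields more skewed per-client label distributions. The sketch below illustrates that common protocol; the paper's exact partitioning setup may differ, and the function name is illustrative.

```python
import numpy as np

def dirichlet_label_skew(labels, n_clients, alpha=0.5, seed=0):
    """Partition sample indices across clients with label distribution skew.

    Smaller alpha -> more skewed per-client label distributions.
    (Dirichlet partitioning is a common simulation protocol, not
    necessarily the paper's exact setup.)
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Draw per-client proportions for this class from Dir(alpha).
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for i, part in enumerate(np.split(idx, cuts)):
            client_indices[i].extend(part.tolist())
    return [np.array(ci) for ci in client_indices]
```

Quantity skew can be simulated the same way by drawing per-client sample counts, rather than per-class proportions, from a Dirichlet distribution.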
24 Jun 2021 • Liangqiong Qu, Niranjan Balachandar, Miao Zhang, Daniel Rubin
Specifically, instead of directly training a model for task performance, we develop a novel dual-model architecture: a primary model learns the desired task, and an auxiliary "generative replay model" allows aggregating knowledge from the heterogeneous clients.
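One way to picture the dual-model idea is that each client's local update mixes its own data with synthetic samples drawn from the aggregated replay model, so the primary model sees a less skewed distribution than any single client holds. The sketch below is an assumed illustration of that mechanism for a linear primary model, not the paper's exact algorithm; `sample_replay` is a hypothetical stand-in for drawing from the server-aggregated generator.

```python
import numpy as np

def local_step(weights, x_local, y_local, sample_replay, lr=0.1):
    """One local update of a linear primary model with generative replay.

    sample_replay() stands in for drawing (x, y) from the aggregated
    auxiliary generative replay model shared by the server.
    """
    x_syn, y_syn = sample_replay()
    # Augment the skewed local batch with replayed synthetic samples.
    x = np.vstack([x_local, x_syn])
    y = np.concatenate([y_local, y_syn])
    # Squared-error gradient step for a linear model y ~ x @ weights.
    grad = 2 * x.T @ (x @ weights - y) / len(y)
    return weights - lr * grad
```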
21 Feb 2021 • Yen Nhi Truong Vu, Richard Wang, Niranjan Balachandar, Can Liu, Andrew Y. Ng, Pranav Rajpurkar
Our controlled experiments show that the keys to improving downstream performance on disease classification are (1) using patient metadata to appropriately create positive pairs from different images with the same underlying pathologies, and (2) maximizing the number of different images used in query pairing.
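Finding (1) can be sketched as grouping images by shared metadata and pairing distinct images within each group, rather than pairing two augmentations of the same image. The field names and record format below are illustrative assumptions, not the paper's API.

```python
from collections import defaultdict
from itertools import combinations

def metadata_positive_pairs(records):
    """Form contrastive positive pairs from different images that share
    the same underlying patient and pathology.

    `records` is a list of (image_id, patient_id, pathology) tuples;
    the field names are illustrative.
    """
    groups = defaultdict(list)
    for image_id, patient_id, pathology in records:
        groups[(patient_id, pathology)].append(image_id)
    pairs = []
    for image_ids in groups.values():
        # Pair distinct images rather than two augmentations of one image,
        # maximizing the number of different images used in pairing.
        pairs.extend(combinations(image_ids, 2))
    return pairs
```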
30 Jun 2019 • Niranjan Balachandar, Christine Liu, Winston Wang
Cancer therapeutics has been moving away from one-size-fits-all cytotoxic chemotherapy and toward a more individualized, specific approach that targets each tumor's genetic vulnerabilities.
30 Jun 2019 • Niranjan Balachandar, Justin Dieter, Govardana Sachithanandam Ramachandran
We train and evaluate our multi-agent methods against a team operating with a smart hand-coded policy.
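An evaluation of this kind typically pits the learned team against the scripted opponent over many episodes and reports the win rate. The harness below is an assumed illustration; `play_episode` is a hypothetical stand-in for the environment rollout, not an interface from the paper.

```python
import random

def evaluate(learned_policy, scripted_policy, play_episode, n_episodes=100, seed=0):
    """Win rate of a learned multi-agent team vs. a hand-coded opponent.

    play_episode stands in for one environment rollout; it returns 1 if
    the learned team beats the scripted team, else 0.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_episodes):
        wins += play_episode(learned_policy, scripted_policy, rng)
    return wins / n_episodes
```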
10 Sep 2017 • Ken Chang, Niranjan Balachandar, Carson K Lam, Darvin Yi, James M. Brown, Andrew Beers, Bruce R. Rosen, Daniel L. Rubin, Jayashree Kalpathy-Cramer
In such cases, sharing a deep learning model is a more attractive alternative.
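One model-sharing scheme studied in this line of work is cyclical weight transfer: the model's weights travel from institution to institution for on-premise training while patient data never leaves its site. The skeleton below is a minimal sketch under that assumption; `local_train` is a hypothetical stand-in for each site's local training step.

```python
def cyclical_weight_transfer(init_weights, institutions, local_train, n_cycles=3):
    """Pass model weights around institutions instead of pooling data.

    `institutions` is a list of opaque local datasets; `local_train`
    stands in for each site's on-premise training step.
    """
    weights = init_weights
    for _ in range(n_cycles):
        for data in institutions:
            weights = local_train(weights, data)  # data stays at the site
    return weights
```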