1 code implementation • 5 Jan 2025 • David Restrepo, Chenwei Wu, Yueran Jia, Jaden K. Sun, Jack Gallifant, Catherine G. Bielick, Yugang Jia, Leo A. Celi
Accurate imputation of missing laboratory values in electronic health records (EHRs) is critical to enable robust clinical predictions and reduce biases in AI systems in healthcare.
no code implementations • 18 Dec 2024 • David Restrepo, Chenwei Wu, Zhengxu Tang, Zitao Shuai, Thao Nguyen Minh Phan, Jun-En Ding, Cong-Tinh Dao, Jack Gallifant, Robyn Gayle Dychiao, Jose Carlo Artiaga, André Hiroshi Bando, Carolina Pelegrini Barbosa Gracitelli, Vincenz Ferrer, Leo Anthony Celi, Danielle Bitterman, Michael G Morley, Luis Filipe Nakayama
Current ophthalmology clinical workflows are plagued by over-referrals, long waits, and complex and heterogeneous medical records.
no code implementations • 12 Nov 2024 • Zitao Shuai, Chenwei Wu, Zhengxu Tang, Bowen Song, Liyue Shen
In image editing, DiTs project text and image inputs to a joint latent space, from which they decode and synthesize new images.
no code implementations • 12 Sep 2024 • Cheng Ding, Tianliang Yao, Chenwei Wu, Jianyuan Ni
The electrocardiogram (ECG) remains a fundamental tool in cardiac diagnostics, yet its interpretation has traditionally relied on the expertise of cardiologists.
no code implementations • 23 Aug 2024 • Zitao Shuai, Chenwei Wu, Zhengxu Tang, Bowen Song, Liyue Shen
Through our investigation of DiT's latent space, we have uncovered key findings that unlock the potential for zero-shot fine-grained semantic editing: (1) Both the text and image spaces in DiTs are inherently decomposable.
no code implementations • 20 Aug 2024 • Yuyan Chen, Chenwei Wu, Songzhou Yan, Panjun Liu, Haoyu Zhou, Yanghua Xiao
Therefore, our research introduces a benchmark to evaluate the questioning capability of LLMs as educators by assessing the educational questions they generate, using Anderson and Krathwohl's taxonomy across general, monodisciplinary, and interdisciplinary domains.
no code implementations • 17 Jul 2024 • Thao Minh Nguyen Phan, Cong-Tinh Dao, Chenwei Wu, Jian-Zhe Wang, Shun Liu, Jun-En Ding, David Restrepo, Feng Liu, Fang-Ming Hung, Wen-Chih Peng
Electronic health records (EHRs) are multimodal by nature, consisting of structured tabular features like lab tests and unstructured clinical notes.
no code implementations • 15 Jul 2024 • Chenwei Wu, David Restrepo, Zitao Shuai, Zhongming Liu, Liyue Shen
In-context learning (ICL) with Large Vision Models (LVMs) presents a promising avenue in medical image segmentation by reducing the reliance on extensive labeling.
1 code implementation • 24 Jun 2024 • Yushun Zhang, Congliang Chen, Ziniu Li, Tian Ding, Chenwei Wu, Diederik P. Kingma, Yinyu Ye, Zhi-Quan Luo, Ruoyu Sun
Adam-mini reduces memory by cutting down the learning rate resources in Adam (i.e., $1/\sqrt{v}$).
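As a concrete illustration, the following PyTorch sketch applies the block-wise idea to a single parameter block: Adam's per-coordinate second moment $v$ is collapsed into one shared scalar, so the $1/\sqrt{v}$ state is one number per block instead of one per parameter. The function `adam_mini_step` and its state layout are our own hypothetical illustration, not the authors' implementation.

```python
import torch

def adam_mini_step(param, grad, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One hypothetical Adam-mini-style update for a single parameter block.

    Unlike Adam, which keeps a second-moment estimate v per coordinate,
    the whole block shares a single scalar v built from the mean squared gradient.
    """
    m = state.setdefault("m", torch.zeros_like(param))  # first moment, per coordinate
    v = state.setdefault("v", torch.zeros(()))          # second moment, ONE scalar per block
    state["t"] = state.get("t", 0) + 1
    t = state["t"]

    m.mul_(beta1).add_(grad, alpha=1 - beta1)
    v.mul_(beta2).add_(grad.pow(2).mean(), alpha=1 - beta2)  # scalar replaces per-coordinate v

    m_hat = m / (1 - beta1 ** t)                        # standard bias correction
    v_hat = v / (1 - beta2 ** t)
    param.add_(m_hat / (v_hat.sqrt() + eps), alpha=-lr) # shared 1/sqrt(v) for the whole block
```

In practice the update would run under `torch.no_grad()` over each block of a partitioned model; how the parameters are grouped into blocks is the crux of such a scheme.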
no code implementations • 19 Jun 2024 • David Restrepo, Chenwei Wu, Constanza Vásquez-Venegas, João Matos, Jack Gallifant, Leo Anthony Celi, Danielle S. Bitterman, Luis Filipe Nakayama
The deployment of large language models (LLMs) in healthcare has demonstrated substantial potential for enhancing clinical decision-making, administrative efficiency, and patient outcomes.
no code implementations • 2 Jun 2024 • David Restrepo, Chenwei Wu, Sebastián Andrés Cajas, Luis Filipe Nakayama, Leo Anthony Celi, Diego M López
Our paper investigates the efficiency and effectiveness of using vector embeddings from single-modal foundation models and multi-modal Vision-Language Models (VLMs) for multimodal deep learning in low-resource environments, particularly in healthcare.
1 code implementation • 18 Apr 2024 • David Restrepo, Chenwei Wu, Constanza Vásquez-Venegas, Luis Filipe Nakayama, Leo Anthony Celi, Diego M López
In the big data era, integrating diverse data modalities poses significant challenges, particularly in complex fields like healthcare.
no code implementations • 5 Apr 2024 • Zitao Shuai, Chenwei Wu, Zhengxu Tang, Liyue Shen
To address this challenge, we propose Federated Distributionally Robust Alignment (FedDRA), a framework for federated VLP that achieves robust vision-language alignment under heterogeneous conditions.
no code implementations • 4 Oct 2023 • Chenwei Wu, Li Erran Li, Stefano Ermon, Patrick Haffner, Rong Ge, Zaiwei Zhang
Compositionality is a common property in many modalities including natural languages and images, but the compositional generalization of multi-modal models is not well-understood.
1 code implementation • 17 Apr 2023 • Kathryn Wantlin, Chenwei Wu, Shih-Cheng Huang, Oishi Banerjee, Farah Dadabhoy, Veeral Vipin Mehta, Ryan Wonhee Han, Fang Cao, Raja R. Narayan, Errol Colak, Adewole Adamson, Laura Heacock, Geoffrey H. Tison, Alex Tamkin, Pranav Rajpurkar
Finally, we evaluate performance on out-of-distribution data collected at different hospitals than the training data, representing naturally-occurring distribution shifts that frequently degrade the performance of medical AI models.
1 code implementation • 24 Feb 2023 • Muthu Chidambaram, Chenwei Wu, Yu Cheng, Rong Ge
Furthermore, drawing from the growing body of work on self-supervised learning, we propose a novel masking objective for which recovering the ground-truth dictionary is in fact optimal as the signal increases for a large class of data-generating processes.
1 code implementation • 24 Oct 2022 • Muthu Chidambaram, Xiang Wang, Chenwei Wu, Rong Ge
Mixup is a data augmentation technique that relies on training using random convex combinations of data points and their labels.
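The recipe is short enough to write out; the NumPy sketch below (the helper name `mixup_batch` is ours) draws a mixing weight $\lambda \sim \mathrm{Beta}(\alpha, \alpha)$ and combines each example with a randomly chosen partner.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Mixup: train on random convex combinations of examples and one-hot labels."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)            # mixing weight lambda ~ Beta(alpha, alpha)
    perm = rng.permutation(len(x))          # pair each example with a random partner
    x_mix = lam * x + (1 - lam) * x[perm]   # convex combination of inputs
    y_mix = lam * y + (1 - lam) * y[perm]   # identical combination of labels
    return x_mix, y_mix
```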
1 code implementation • ICLR 2022 • Muthu Chidambaram, Xiang Wang, Yuzheng Hu, Chenwei Wu, Rong Ge
Despite seeing very few true data points during training, models trained using Mixup seem to still minimize the original empirical risk and exhibit better generalization and robustness on various tasks when compared to standard training.
no code implementations • NeurIPS 2020 • Xiang Wang, Chenwei Wu, Jason D. Lee, Tengyu Ma, Rong Ge
We show that in a lazy training regime (similar to the NTK regime for neural networks) one needs at least $m = \Omega(d^{l-1})$ components, while a variant of gradient descent can find an approximate tensor when $m = O^*(r^{2.5l}\log d)$.
no code implementations • 8 Oct 2020 • Yikai Wu, Xingyu Zhu, Chenwei Wu, Annie Wang, Rong Ge
We can analyze the properties of these smaller matrices and prove the structure of the top eigenspace for random 2-layer networks.
1 code implementation • 24 Sep 2020 • Chenwei Wu, Chenzhuang Du, Yang Yuan
In the classical multi-party computation setting, multiple parties jointly compute a function without revealing their own input data.
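As a toy illustration of that classical setting (not the construction studied in this paper), the additive secret-sharing sketch below lets three parties learn the sum of their private inputs while each individual input stays hidden behind uniformly random shares.

```python
import secrets

P = 2**61 - 1  # public prime modulus; all arithmetic is done mod P

def share(value, n_parties=3):
    """Split a secret into additive shares that sum to the secret mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

inputs = [12, 7, 30]                       # each value known only to its owner
all_shares = [share(v) for v in inputs]    # party j receives share j of every input
# Each party publishes only the sum of the shares it holds:
partial = [sum(s[j] for s in all_shares) % P for j in range(3)]
print(sum(partial) % P)                    # 49 -- the joint sum, with no input revealed
```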
1 code implementation • 30 Jun 2020 • Xiang Wang, Shuai Yuan, Chenwei Wu, Rong Ge
Solving this problem with a learning-to-learn approach -- running meta-gradient descent on a meta-objective based on the trajectory that the optimizer generates -- was recently shown to be effective.
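A minimal toy version of this learning-to-learn setup, assuming PyTorch, tunes the step size of gradient descent on a fixed quadratic by backpropagating through the unrolled trajectory; the quadratic, initialization, and hyperparameters below are illustrative only.

```python
import torch

# Inner problem: minimize f(x) = 0.5 * x^T A x with plain gradient descent.
A = torch.diag(torch.tensor([1.0, 10.0]))
x0 = torch.tensor([1.0, 1.0])

log_eta = torch.tensor(-4.6, requires_grad=True)  # learn the step size in log space (~0.01)

for meta_step in range(200):
    eta = log_eta.exp()
    x = x0
    for _ in range(10):                 # unroll the optimizer trajectory
        x = x - eta * (A @ x)           # one inner gradient-descent step
    meta_loss = 0.5 * x @ (A @ x)       # meta-objective: loss at the end of the trajectory
    meta_loss.backward()                # meta-gradient flows through the whole trajectory
    with torch.no_grad():
        log_eta -= 0.05 * log_eta.grad  # meta-gradient descent on the step size
        log_eta.grad.zero_()

print(f"learned step size: {log_eta.exp().item():.3f}")
```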
no code implementations • ICLR 2018 • Chenwei Wu, Jiajun Luo, Jason D. Lee
Deep learning models can be efficiently optimized via stochastic gradient descent, but there is little theoretical evidence to support this.