Search Results for author: Chenwei Wu

Found 23 papers, 9 papers with code

Representation Learning of Lab Values via Masked AutoEncoder

1 code implementation • 5 Jan 2025 • David Restrepo, Chenwei Wu, Yueran Jia, Jaden K. Sun, Jack Gallifant, Catherine G. Bielick, Yugang Jia, Leo A. Celi

Accurate imputation of missing laboratory values in electronic health records (EHRs) is critical to enable robust clinical predictions and reduce biases in AI systems in healthcare.

Fairness • Imputation +2
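The masked-autoencoder recipe behind this kind of imputation is to hide a random subset of the observed lab values and train the model to reconstruct exactly those hidden entries, so that at inference the same decoder can fill genuinely missing values. A minimal numpy sketch of that training objective (the `model` interface, zero-masking, and masking rate are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def masked_reconstruction_loss(model, x, mask_frac=0.15, rng=None):
    """Mask a random subset of observed values and score reconstruction on them."""
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) < mask_frac        # positions to hide
    x_in = np.where(mask, 0.0, x)                 # masked input fed to the model
    x_hat = model(x_in)                           # reconstruction of all positions
    return np.mean((x_hat[mask] - x[mask]) ** 2)  # loss only on masked entries
```

Training minimizes this loss over the model's parameters; real masked autoencoders typically use a learned mask token rather than zeros.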

Latent Space Disentanglement in Diffusion Transformers Enables Precise Zero-shot Semantic Editing

no code implementations • 12 Nov 2024 • Zitao Shuai, Chenwei Wu, Zhengxu Tang, Bowen Song, Liyue Shen

In image editing, DiTs project text and image inputs to a joint latent space, from which they decode and synthesize new images.

Disentanglement • Image Generation

Deep Learning for Personalized Electrocardiogram Diagnosis: A Review

no code implementations • 12 Sep 2024 • Cheng Ding, Tianliang Yao, Chenwei Wu, Jianyuan Ni

The electrocardiogram (ECG) remains a fundamental tool in cardiac diagnostics, yet its interpretation has traditionally relied on the expertise of cardiologists.

Deep Learning

Latent Space Disentanglement in Diffusion Transformers Enables Zero-shot Fine-grained Semantic Editing

no code implementations • 23 Aug 2024 • Zitao Shuai, Chenwei Wu, Zhengxu Tang, Bowen Song, Liyue Shen

Through our investigation of DiT's latent space, we have uncovered key findings that unlock the potential for zero-shot fine-grained semantic editing: (1) Both the text and image spaces in DiTs are inherently decomposable.

Disentanglement • Large Language Model

Dr.Academy: A Benchmark for Evaluating Questioning Capability in Education for Large Language Models

no code implementations • 20 Aug 2024 • Yuyan Chen, Chenwei Wu, Songzhou Yan, Panjun Liu, Haoyu Zhou, Yanghua Xiao

Therefore, our research introduces a benchmark to evaluate the questioning capability in education as a teacher of LLMs through evaluating their generated educational questions, utilizing Anderson and Krathwohl's taxonomy across general, monodisciplinary, and interdisciplinary domains.

Efficient In-Context Medical Segmentation with Meta-driven Visual Prompt Selection

no code implementations • 15 Jul 2024 • Chenwei Wu, David Restrepo, Zitao Shuai, Zhongming Liu, Liyue Shen

In-context learning (ICL) with Large Vision Models (LVMs) presents a promising avenue in medical image segmentation by reducing the reliance on extensive labeling.

Image Segmentation • In-Context Learning +3

Adam-mini: Use Fewer Learning Rates To Gain More

1 code implementation • 24 Jun 2024 • Yushun Zhang, Congliang Chen, Ziniu Li, Tian Ding, Chenwei Wu, Diederik P. Kingma, Yinyu Ye, Zhi-Quan Luo, Ruoyu Sun

Adam-mini reduces memory by cutting down the learning rate resources in Adam (i.e., $1/\sqrt{v}$).
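The memory saving comes from replacing Adam's per-parameter second moment with a single shared value per parameter block. A minimal numpy sketch of such a block-wise update, with hypothetical names and defaults rather than the paper's actual implementation:

```python
import numpy as np

def adam_mini_update(param, grad, m, v_block,
                     lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One optimizer step with a single second-moment scalar per block.

    Adam would track a full vector v with the same shape as `param`;
    here the whole block shares one scalar, saving that memory.
    """
    m = b1 * m + (1 - b1) * grad                            # per-parameter momentum
    v_block = b2 * v_block + (1 - b2) * np.mean(grad ** 2)  # one scalar per block
    step = lr * m / (np.sqrt(v_block) + eps)                # shared 1/sqrt(v) factor
    return param - step, m, v_block
```

A real optimizer would also apply bias correction and choose the block partition carefully; the sketch only illustrates where the $1/\sqrt{v}$ memory goes.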

Analyzing Diversity in Healthcare LLM Research: A Scientometric Perspective

no code implementations • 19 Jun 2024 • David Restrepo, Chenwei Wu, Constanza Vásquez-Venegas, João Matos, Jack Gallifant, Leo Anthony Celi, Danielle S. Bitterman, Luis Filipe Nakayama

The deployment of large language models (LLMs) in healthcare has demonstrated substantial potential for enhancing clinical decision-making, administrative efficiency, and patient outcomes.

Decision Making • Diversity

Multimodal Deep Learning for Low-Resource Settings: A Vector Embedding Alignment Approach for Healthcare Applications

no code implementations • 2 Jun 2024 • David Restrepo, Chenwei Wu, Sebastián Andrés Cajas, Luis Filipe Nakayama, Leo Anthony Celi, Diego M López

Our paper investigates the efficiency and effectiveness of using vector embeddings from single-modal foundation models and multi-modal Vision-Language Models (VLMs) for multimodal deep learning in low-resource environments, particularly in healthcare.

Computational Efficiency • Deep Learning +1

Distributionally Robust Alignment for Medical Federated Vision-Language Pre-training Under Data Heterogeneity

no code implementations • 5 Apr 2024 • Zitao Shuai, Chenwei Wu, Zhengxu Tang, Liyue Shen

To address this challenge, we propose Federated Distributionally Robust Alignment (FedDRA), a framework for federated VLP that achieves robust vision-language alignment under heterogeneous conditions.

cross-modal alignment • Federated Learning +1

The Role of Linguistic Priors in Measuring Compositional Generalization of Vision-Language Models

no code implementations • 4 Oct 2023 • Chenwei Wu, Li Erran Li, Stefano Ermon, Patrick Haffner, Rong Ge, Zaiwei Zhang

Compositionality is a common property in many modalities including natural languages and images, but the compositional generalization of multi-modal models is not well-understood.

BenchMD: A Benchmark for Unified Learning on Medical Images and Sensors

1 code implementation • 17 Apr 2023 • Kathryn Wantlin, Chenwei Wu, Shih-Cheng Huang, Oishi Banerjee, Farah Dadabhoy, Veeral Vipin Mehta, Ryan Wonhee Han, Fang Cao, Raja R. Narayan, Errol Colak, Adewole Adamson, Laura Heacock, Geoffrey H. Tison, Alex Tamkin, Pranav Rajpurkar

Finally, we evaluate performance on out-of-distribution data collected at different hospitals than the training data, representing naturally-occurring distribution shifts that frequently degrade the performance of medical AI models.

Self-Supervised Learning

Hiding Data Helps: On the Benefits of Masking for Sparse Coding

1 code implementation • 24 Feb 2023 • Muthu Chidambaram, Chenwei Wu, Yu Cheng, Rong Ge

Furthermore, drawing from the growing body of work on self-supervised learning, we propose a novel masking objective for which recovering the ground-truth dictionary is in fact optimal as the signal increases for a large class of data-generating processes.

Dictionary Learning • Self-Supervised Learning

Provably Learning Diverse Features in Multi-View Data with Midpoint Mixup

1 code implementation • 24 Oct 2022 • Muthu Chidambaram, Xiang Wang, Chenwei Wu, Rong Ge

Mixup is a data augmentation technique that relies on training using random convex combinations of data points and their labels.

Data Augmentation • Image Classification
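The convex-combination step this line of work builds on is easy to state in code; a minimal numpy sketch (function name and the Beta-distribution default are illustrative):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Return a random convex combination of two examples and their labels."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)       # mixing weight in [0, 1]
    x = lam * x1 + (1 - lam) * x2      # mixed input
    y = lam * y1 + (1 - lam) * y2      # mixed (soft) label
    return x, y
```

The midpoint variant analyzed in the paper corresponds to fixing the mixing weight at 1/2 instead of sampling it.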

Towards Understanding the Data Dependency of Mixup-style Training

1 code implementation • ICLR 2022 • Muthu Chidambaram, Xiang Wang, Yuzheng Hu, Chenwei Wu, Rong Ge

Despite seeing very few true data points during training, models trained using Mixup seem to still minimize the original empirical risk and exhibit better generalization and robustness on various tasks when compared to standard training.

Beyond Lazy Training for Over-parameterized Tensor Decomposition

no code implementations • NeurIPS 2020 • Xiang Wang, Chenwei Wu, Jason D. Lee, Tengyu Ma, Rong Ge

We show that in a lazy training regime (similar to the NTK regime for neural networks) one needs at least $m = \Omega(d^{l-1})$, while a variant of gradient descent can find an approximate tensor when $m = O^*(r^{2.5l}\log d)$.

Tensor Decomposition

Dissecting Hessian: Understanding Common Structure of Hessian in Neural Networks

no code implementations • 8 Oct 2020 • Yikai Wu, Xingyu Zhu, Chenwei Wu, Annie Wang, Rong Ge

We can analyze the properties of these smaller matrices and prove the structure of the top eigenspace for random 2-layer networks.

Generalization Bounds

Secure Data Sharing With Flow Model

1 code implementation • 24 Sep 2020 • Chenwei Wu, Chenzhuang Du, Yang Yuan

In the classical multi-party computation setting, multiple parties jointly compute a function without revealing their own input data.

BIG-bench Machine Learning • Image Classification +1

Guarantees for Tuning the Step Size using a Learning-to-Learn Approach

1 code implementation • 30 Jun 2020 • Xiang Wang, Shuai Yuan, Chenwei Wu, Rong Ge

Solving this problem with a learning-to-learn approach (meta-gradient descent on a meta-objective built from the trajectory the optimizer generates) was recently shown to be effective.
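As a toy illustration of that idea, the step size of plain gradient descent on a quadratic can itself be tuned by descending on the loss reached at the end of the optimizer's trajectory. Everything below (names, constants, and the finite-difference stand-in for differentiating through the unrolled updates) is a hypothetical sketch, not the paper's setup:

```python
def final_loss(eta, w0=5.0, steps=10):
    """Loss after `steps` of gradient descent on f(w) = w**2 / 2 with step size eta."""
    w = w0
    for _ in range(steps):
        w = w - eta * w          # gradient of f at w is w
    return 0.5 * w ** 2

def tune_step_size(eta=0.1, meta_lr=0.05, meta_steps=50, h=1e-5):
    """Meta-gradient descent on the step size; the meta-gradient is the
    derivative of the end-of-trajectory loss, estimated by central differences."""
    for _ in range(meta_steps):
        g = (final_loss(eta + h) - final_loss(eta - h)) / (2 * h)
        eta = eta - meta_lr * g
    return eta
```

Here the meta-objective is $L(\eta) = 12.5\,(1-\eta)^{20}$, minimized at $\eta = 1$, the step size that solves this quadratic in a single step.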

No Spurious Local Minima in a Two Hidden Unit ReLU Network

no code implementations • ICLR 2018 • Chenwei Wu, Jiajun Luo, Jason D. Lee

Deep learning models can be efficiently optimized via stochastic gradient descent, but there is little theoretical evidence to support this.

Vocal Bursts Valence Prediction
