no code implementations • 30 Nov 2024 • Yan Wang, Jimin Huang, Huan He, Vincent Zhang, Yujia Zhou, Xubing Hao, Pritham Ram, Lingfei Qian, Qianqian Xie, Ruey-Ling Weng, Fongci Lin, Yan Hu, Licong Cui, Xiaoqian Jiang, Hua Xu, Na Hong
We propose CDEMapper, a large language model (LLM) powered mapping tool designed to assist in mapping local data elements to NIH CDEs.
1 code implementation • 11 Jun 2024 • Lu Li, Tianyu Zhang, Zhiqi Bu, Suyuchen Wang, Huan He, Jie Fu, Yonghui Wu, Jiang Bian, Yong Chen, Yoshua Bengio
MAP efficiently identifies a Pareto set of scaling coefficients for merging multiple models, reflecting the trade-offs involved.
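The two ingredients — merging models via scaling coefficients and keeping only the non-dominated trade-off points — can be sketched as follows. This is a simplified illustration, not the paper's MAP algorithm; `merge_weights` and `pareto_front` are hypothetical helper names.

```python
import numpy as np

def merge_weights(base, task_weights, coeffs):
    """Merge task-specific weights into a base model via scaling
    coefficients: merged = base + sum_t c_t * (w_t - base)."""
    merged = base.copy()
    for c, w in zip(coeffs, task_weights):
        merged += c * (w - base)  # task vector scaled by its coefficient
    return merged

def pareto_front(points):
    """Indices of non-dominated points (each row = per-task losses for one
    coefficient choice; lower is better)."""
    pts = np.asarray(points, float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some other point is <= in every task and < in one
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep
```

Scanning a grid of coefficient vectors, evaluating per-task losses for each merged model, and filtering with `pareto_front` yields the kind of trade-off set the abstract describes.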
1 code implementation • 20 Feb 2024 • Qianqian Xie, Qingyu Chen, Aokun Chen, Cheng Peng, Yan Hu, Fongci Lin, Xueqing Peng, Jimin Huang, Jeffrey Zhang, Vipina Keloth, Xinyu Zhou, Lingfei Qian, Huan He, Dennis Shung, Lucila Ohno-Machado, Yonghui Wu, Hua Xu, Jiang Bian
This work underscores the importance of domain-specific data in developing medical LLMs and addresses the high computational costs involved in training, highlighting a balance between pre-training and fine-tuning strategies.
no code implementations • 15 Jul 2023 • Ru Huang, Kai Chang, Huan He, Ruipeng Li, Yuanzhe Xi
We propose a data-driven and machine-learning-based approach to compute non-Galerkin coarse-grid operators in algebraic multigrid (AMG) methods, addressing the well-known issue of increasing operator complexity.
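For context, the baseline the paper departs from is the Galerkin coarse-grid operator, the triple product A_c = Pᵀ A P; repeated over levels it tends to fill in, which is exactly the operator-complexity growth a non-Galerkin method tries to curb. A minimal sketch of the standard product (the learned sparse replacement is not shown here):

```python
import numpy as np

def galerkin_coarse_operator(A, P):
    """Standard Galerkin coarse-grid operator A_c = P^T A P, where P is the
    prolongation (interpolation) operator. Non-Galerkin AMG replaces this
    product with a sparser approximation to limit operator complexity."""
    return P.T @ A @ P
```

For a 1D Laplacian with piecewise-constant aggregation, the coarse operator is again a (scaled) Laplacian, which is the structure non-Galerkin methods aim to preserve with fewer nonzeros.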
1 code implementation • 10 May 2023 • Qingyu Chen, Yan Hu, Xueqing Peng, Qianqian Xie, Qiao Jin, Aidan Gilson, Maxwell B. Singer, Xuguang Ai, Po-Ting Lai, Zhizheng Wang, Vipina Kuttichi Keloth, Kalpana Raja, Jimin Huang, Huan He, Fongci Lin, Jingcheng Du, Rui Zhang, W. Jim Zheng, Ron A. Adelman, Zhiyong Lu, Hua Xu
While Large Language Models (LLMs) have shown promise in general domains, their effectiveness in BioNLP tasks remains unclear due to limited benchmarks and practical guidelines.
1 code implementation • 26 Feb 2023 • Jiali Cheng, George Dasoulas, Huan He, Chirag Agarwal, Marinka Zitnik
Deleted Edge Consistency ensures that the influence of deleted elements is removed from both model weights and neighboring representations, while Neighborhood Influence guarantees that the remaining model knowledge is preserved after deletion.
no code implementations • 8 Feb 2023 • Huan He, Shifan Zhao, Yuanzhe Xi, Joyce C Ho
Due to patient privacy protection concerns, machine learning research in healthcare has been undeniably slower and more limited than in other application domains.
1 code implementation • 6 Feb 2023 • Huan He, Owen Queen, Teddy Koker, Consuelo Cuevas, Theodoros Tsiligkaridis, Marinka Zitnik
Additionally, the label distributions of tasks in the source and target domains can differ significantly, posing difficulties in addressing label shifts and recognizing labels unique to the target domain.
no code implementations • 22 Oct 2022 • Huan He, Shifan Zhao, Ziyuan Tang, Joyce C Ho, Yousef Saad, Yuanzhe Xi
Nonlinear acceleration methods are powerful techniques to speed up fixed-point iterations.
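A minimal sketch of Anderson-type acceleration for a fixed-point map x ← g(x): the last few iterates are mixed so that the combined residual is minimized in a least-squares sense. This is a textbook variant for illustration, not the specific method proposed in the paper.

```python
import numpy as np

def anderson(g, x0, m=3, iters=100, tol=1e-10):
    """Anderson acceleration of the fixed-point iteration x <- g(x)
    with a history window of m previous iterates."""
    x = np.asarray(x0, dtype=float)
    Xs, Gs = [x], [g(x)]
    for _ in range(iters):
        Fs = [gi - xi for xi, gi in zip(Xs, Gs)]   # residuals g(x) - x
        if np.linalg.norm(Fs[-1]) < tol:
            break
        if len(Fs) > 1:
            dF = np.column_stack([Fs[i + 1] - Fs[i] for i in range(len(Fs) - 1)])
            dG = np.column_stack([Gs[i + 1] - Gs[i] for i in range(len(Gs) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, Fs[-1], rcond=None)
            x_new = Gs[-1] - dG @ gamma            # accelerated update
        else:
            x_new = Gs[-1]                          # plain fixed-point step
        Xs.append(x_new)
        Gs.append(g(x_new))
        Xs, Gs = Xs[-(m + 1):], Gs[-(m + 1):]      # keep window of m + 1
    return Xs[-1]
```

For example, `anderson(np.cos, [1.0])` converges to the fixed point of cos(x) ≈ 0.739 in a handful of iterations, versus dozens for the plain iteration.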
no code implementations • 19 Oct 2022 • Yingchun Guo, Huan He, Ye Zhu, Yang Yu
Domain generalization person re-identification (DG Re-ID) aims to directly deploy a model trained on the source domain to the unseen target domain with good generalization, which is a challenging problem and has practical value in a real-world deployment.
no code implementations • 5 Jun 2022 • Difeng Cai, Yuliang Ji, Huan He, Qiang Ye, Yuanzhe Xi
AUTM offers a versatile and efficient approach to designing normalizing flows with an explicit inverse and unrestricted function classes or parameters.
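To see what an "explicit inverse" buys, consider the two ingredients any normalizing flow must provide: an invertible map and a tractable log-determinant of its Jacobian. The toy elementwise affine flow below illustrates both; it is a generic example, not the AUTM parametrization itself.

```python
import numpy as np

class AffineFlow:
    """Toy elementwise flow z = a * x + b with a > 0: both the inverse and
    the log-det Jacobian are available in closed form, which is the property
    flows with explicit inverses provide for free."""
    def __init__(self, a, b):
        self.a = np.asarray(a, float)
        self.b = np.asarray(b, float)

    def forward(self, x):
        return self.a * x + self.b

    def inverse(self, z):
        return (z - self.b) / self.a          # exact, no iterative solve

    def log_det_jacobian(self, x):
        return np.sum(np.log(self.a))         # diagonal Jacobian
```

The log-det term is what enters the change-of-variables formula log p_x(x) = log p_z(f(x)) + log |det J_f(x)| during training.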
no code implementations • 7 Jan 2022 • Sungrim Moon, Huan He, Hongfang Liu, Jungwei W. Fan
Specifically, the 1-to-N, M-to-1, and M-to-N drug-reason relations were included to form the multi-answer and multi-focus QA entries, which represent more complex and natural challenges in addition to the basic one-drug-one-reason cases.
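The 1-to-N, M-to-1, and M-to-N cases can be pictured with a hypothetical entry format; the field names and clinical examples below are illustrative only, not the dataset's actual schema.

```python
# Hypothetical QA entries illustrating multi-answer (1-to-N) and
# multi-focus (M-to-1) drug-reason relations.
qa_entries = [
    {  # basic 1-to-1 case
        "drugs": ["metformin"],
        "reasons": ["type 2 diabetes"],
    },
    {  # 1-to-N: one drug, multiple documented reasons -> multi-answer
        "drugs": ["prednisone"],
        "reasons": ["asthma exacerbation", "rash"],
    },
    {  # M-to-1: multiple drugs, one shared reason -> multi-focus
        "drugs": ["aspirin", "clopidogrel"],
        "reasons": ["stent placement"],
    },
]

multi_answer = [e for e in qa_entries if len(e["reasons"]) > 1]
multi_focus = [e for e in qa_entries if len(e["drugs"]) > 1]
```

M-to-N entries simply combine both properties: multiple drugs mapped to multiple reasons in a single question.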
no code implementations • 20 Oct 2021 • Sijia Liu, Andrew Wen, LiWei Wang, Huan He, Sunyang Fu, Robert Miller, Andrew Williams, Daniel Harris, Ramakanth Kavuluru, Mei Liu, Noor Abu-el-rub, Dalton Schutte, Rui Zhang, Masoud Rouhizadeh, John D. Osborne, Yongqun He, Umit Topaloglu, Stephanie S Hong, Joel H Saltz, Thomas Schaffter, Emily Pfaff, Christopher G. Chute, Tim Duong, Melissa A. Haendel, Rafael Fuentes, Peter Szolovits, Hua Xu, Hongfang Liu, Natural Language Processing Subgroup, National COVID Cohort Collaborative
Although we use COVID-19 as a use case in this effort, our framework is general enough to be applied to other domains of interest in clinical NLP.
1 code implementation • 6 Oct 2021 • Huan He, Shifan Zhao, Yuanzhe Xi, Joyce C Ho, Yousef Saad
We also empirically show that GDA-AM solves a variety of minimax problems and improves GAN training on several datasets.
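The underlying simultaneous gradient descent-ascent (GDA) iteration, which GDA-AM stabilizes with Anderson mixing, can be sketched on the bilinear game f(x, y) = x·y, where plain GDA famously spirals away from the equilibrium. This is a simplified illustration of the base iteration, not the paper's algorithm.

```python
def gda_step(x, y, grad_x, grad_y, lr=0.1):
    """One simultaneous GDA step for min_x max_y f(x, y):
    descend in x, ascend in y."""
    return x - lr * grad_x(x, y), y + lr * grad_y(x, y)

def run_gda(x0, y0, steps=100, lr=0.1):
    # For f(x, y) = x * y: grad_x = y, grad_y = x; equilibrium at (0, 0).
    x, y = x0, y0
    for _ in range(steps):
        x, y = gda_step(x, y, lambda x, y: y, lambda x, y: x, lr)
    return x, y
```

Each step multiplies (x, y) by a rotation-plus-scaling with norm factor sqrt(1 + lr²) > 1, so the iterates drift outward; viewing GDA as a fixed-point map and accelerating it is what motivates an Anderson-mixing approach.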
no code implementations • ICLR 2022 • Huan He, Shifan Zhao, Yuanzhe Xi, Joyce Ho, Yousef Saad
We also empirically show that GDA-AM solves a variety of minimax problems and improves GAN training on several datasets.
no code implementations • 28 Sep 2021 • Lei Wang, Shihui Zhang, Huan He, Xiaoxiao Zhang, Yu Sang
Last but not least, the compact triplet-center loss is proposed specifically for the sketch recognition task.
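The standard triplet-center loss this builds on pulls each feature toward its own class center while pushing it at least a margin away from the nearest other center. A minimal sketch of that standard formulation follows; the paper's "compact" variant adds constraints not reproduced here.

```python
import numpy as np

def triplet_center_loss(features, labels, centers, margin=1.0):
    """Standard triplet-center loss:
    mean_i max(0, d(f_i, c_{y_i}) + margin - min_{j != y_i} d(f_i, c_j))."""
    total = 0.0
    for f, y in zip(features, labels):
        dists = np.linalg.norm(centers - f, axis=1)
        pos = dists[y]                         # distance to own class center
        neg = np.min(np.delete(dists, y))      # nearest other-class center
        total += max(0.0, pos + margin - neg)
    return total / len(features)
```

The loss is zero once every feature sits at its own center with all other centers farther than the margin, which encourages the compact, well-separated clusters useful for sketch recognition.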
no code implementations • 24 Oct 2019 • Sunyang Fu, David Chen, Huan He, Sijia Liu, Sungrim Moon, Kevin J Peterson, Feichen Shen, Li-Wei Wang, Yanshan Wang, Andrew Wen, Yiqing Zhao, Sunghwan Sohn, Hongfang Liu
Background Concept extraction, a subdomain of natural language processing (NLP) with a focus on extracting concepts of interest, has been adopted to computationally extract clinical information from text for a wide range of applications ranging from clinical decision support to care quality improvement.