no code implementations • ICML 2020 • Runxue Bao, Bin Gu, Heng Huang
Ordered Weighted $L_{1}$-Norms (OWL) are a new family of regularizers for high-dimensional sparse regression.
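For context, the OWL penalty applies a non-increasing, non-negative weight vector to the coefficients sorted by absolute magnitude. Below is a minimal NumPy sketch; the OSCAR-style weight sequence is illustrative, not a choice taken from the paper.

```python
import numpy as np

def owl_penalty(beta, w):
    """Ordered Weighted L1 (OWL) penalty: sum_i w_i * |beta|_[i],
    where |beta|_[1] >= |beta|_[2] >= ... are the coefficients sorted
    by absolute value and w is non-increasing and non-negative."""
    abs_sorted = np.sort(np.abs(beta))[::-1]  # descending |beta|
    return float(np.dot(w, abs_sorted))

# Illustrative usage with OSCAR-style weights w_i = a + b * (p - i)
p = 5
a, b = 1.0, 0.5
w = a + b * np.arange(p - 1, -1, -1)          # non-increasing weights
beta = np.array([0.3, -2.0, 0.0, 1.1, -0.4])
print(owl_penalty(beta, w))                   # 10.0
```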
no code implementations • 19 Dec 2024 • Lei Lu, Zhepeng Wang, Runxue Bao, Mengbing Wang, Fangyi Li, Yawen Wu, Weiwen Jiang, Jie Xu, Yanzhi Wang, Shangqian Gao
Therefore, such a combination of pruning decisions and fine-tuned weights may be suboptimal, leading to non-negligible performance degradation.
no code implementations • 9 Dec 2024 • Zhepeng Wang, Runxue Bao, Yawen Wu, Guodong Liu, Lei Yang, Liang Zhan, Feng Zheng, Weiwen Jiang, Yanfu Zhang
Our approach conceptualizes domain knowledge as natural language and introduces a specialized multimodal GNN that leverages this uncurated knowledge to guide the learning process, improving model performance and strengthening the interpretability of its predictions.
no code implementations • 31 Oct 2024 • Shuyang Yu, Runxue Bao, Parminder Bhatia, Taha Kass-Hout, Jiayu Zhou, Cao Xiao
Large language models (LLMs) can learn vast amounts of knowledge from diverse domains during pre-training.
no code implementations • 20 Sep 2024 • Yuhe Gao, Runxue Bao, Yuelyu Ji, Yiming Sun, Chenxi Song, Jeffrey P. Ferraro, Ye Ye
Large Language Models (LLMs) show significant potential for capturing the semantic meaning of clinical concepts and reducing heterogeneity.
no code implementations • 20 Sep 2024 • Zhepeng Wang, Runxue Bao, Yawen Wu, Jackson Taylor, Cao Xiao, Feng Zheng, Weiwen Jiang, Shangqian Gao, Yanfu Zhang
Pretrained large language models (LLMs) have revolutionized natural language processing (NLP) tasks such as summarization, question answering, and translation.
no code implementations • 10 May 2024 • Nan Zhang, Yanchi Liu, Xujiang Zhao, Wei Cheng, Runxue Bao, Rui Zhang, Prasenjit Mitra, Haifeng Chen
Moreover, by efficiently approximating weight importance with the refined training loss on a domain-specific calibration dataset, we obtain a pruned model emphasizing generality and specificity.
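This excerpt does not spell out the exact importance criterion, so the sketch below uses a common loss-based proxy: a first-order Taylor score |w * dL/dw| accumulated over a small calibration set. The model, loader, and loss function are placeholders, and this is a stand-in rather than the paper's method.

```python
import torch

def taylor_importance(model, calib_loader, loss_fn):
    """First-order Taylor proxy for weight importance,
    score(w) ~ |w * dL/dw|, averaged over a calibration set.
    (Illustrative stand-in for a loss-based importance criterion.)"""
    scores = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for x, y in calib_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                scores[n] += (p * p.grad).abs().detach()
    return {n: s / max(len(calib_loader), 1) for n, s in scores.items()}
```

Parameters with the smallest scores would then be the natural pruning candidates.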
1 code implementation • CVPR 2024 • Xidong Wu, Shangqian Gao, Zeyu Zhang, Zhenzhen Li, Runxue Bao, yanfu Zhang, Xiaoqian Wang, Heng Huang
Current techniques for deep neural network (DNN) pruning often involve intricate multi-step processes that require domain-specific expertise, making their widespread adoption challenging.
no code implementations • 18 Feb 2024 • Fali Wang, Runxue Bao, Suhang Wang, Wenchao Yu, Yanchi Liu, Wei Cheng, Haifeng Chen
Large Language Models (LLMs) have achieved exceptional capabilities in open generation across various domains, yet they encounter difficulties with tasks that require intensive knowledge.
no code implementations • 3 Feb 2024 • Yiming Sun, Yuhe Gao, Runxue Bao, Gregory F. Cooper, Jessi Espino, Harry Hochheiser, Marian G. Michaels, John M. Aronis, Chenxi Song, Ye Ye
Transfer learning has become a pivotal technique in machine learning and has proven to be effective in various real-world applications.
1 code implementation • 12 Oct 2023 • Runxue Bao, Yiming Sun, Yuhe Gao, Jindong Wang, Qiang Yang, Zhi-Hong Mao, Ye Ye
In this paper, we offer an extensive review of over 60 HTL methods, covering both data-based and model-based approaches.
no code implementations • 29 Jun 2023 • Yuelyu Ji, Yuhe Gao, Runxue Bao, Qi Li, Disheng Liu, Yiming Sun, Ye Ye
Results showed that the Multi-DANN models outperformed the Single-DANN models and baseline models in predicting revisits of COVID-19 patients to the ER within 7 days after discharge.
no code implementations • 17 Aug 2022 • Jason Xiaotian Dou, Alvin Qingkai Pan, Runxue Bao, Haiyi Harry Mao, Lei Luo, Zhi-Hong Mao
Due to growing dataset sizes and model complexity, we aim to learn and adapt the sampling process while training a representation.
no code implementations • 11 Aug 2022 • Runxue Bao, Bin Gu, Heng Huang
To address this challenge, we propose a novel accelerated doubly stochastic gradient descent (ADSGD) method for sparsity-regularized loss minimization problems. By eliminating inactive coefficients during optimization, ADSGD reduces the number of block iterations, achieving faster explicit model identification and greater algorithmic efficiency.
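To illustrate the general idea of eliminating inactive coefficients during optimization, here is a sketch of coordinate descent for the Lasso with a gap-safe-style dynamic screening step; it shows dynamic screening in miniature and is not the paper's ADSGD procedure.

```python
import numpy as np

def lasso_cd_with_dynamic_screening(X, y, lam, n_epochs=100):
    """Coordinate descent for the Lasso with a gap-safe-style dynamic
    screening step (illustrative; not the paper's ADSGD update)."""
    n, p = X.shape
    beta = np.zeros(p)
    active = np.arange(p)
    col_sq = (X ** 2).sum(axis=0)          # per-feature squared norms
    col_norm = np.sqrt(col_sq)
    for _ in range(n_epochs):
        r = y - X @ beta
        # Dual-feasible point from the rescaled residual
        theta = r / max(lam, np.abs(X.T @ r).max())
        primal = 0.5 * r @ r + lam * np.abs(beta).sum()
        dual = 0.5 * y @ y - 0.5 * lam ** 2 * ((theta - y / lam) ** 2).sum()
        radius = np.sqrt(max(2.0 * (primal - dual), 0.0)) / lam
        # Gap-safe test: features that provably stay zero are removed
        keep = np.abs(X[:, active].T @ theta) + radius * col_norm[active] >= 1.0
        beta[active[~keep]] = 0.0
        active = active[keep]
        # One coordinate-descent pass over the surviving features
        r = y - X @ beta
        for j in active:
            r += X[:, j] * beta[j]
            z = X[:, j] @ r
            beta[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * beta[j]
    return beta
```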
no code implementations • 23 Apr 2022 • Runxue Bao, Xidong Wu, Wenhan Xian, Heng Huang
To the best of our knowledge, this is the first work on a distributed safe dynamic screening method.
1 code implementation • 29 Jun 2020 • Runxue Bao, Bin Gu, Heng Huang
Moreover, we prove that algorithms equipped with our screening rule are guaranteed to produce results identical to those of the original algorithms.
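The identity guarantee is the defining property of safe rules: via the KKT conditions, a feature is discarded only when its optimal coefficient is provably zero. In Lasso-type notation (assumed here for illustration, not taken from the paper):

```latex
% At an optimal primal--dual pair (\beta^*, \theta^*), the KKT
% conditions give |x_j^\top \theta^*| < 1 \implies \beta_j^* = 0.
% Hence, for any region \mathcal{R} certified to contain \theta^*:
\[
  \max_{\theta \in \mathcal{R}} \lvert x_j^\top \theta \rvert < 1
  \;\Longrightarrow\;
  \beta_j^* = 0 ,
\]
% so discarding feature j cannot change the optimal solution, and the
% screened algorithm returns the same result as the original one.
```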