no code implementations • 17 Jan 2025 • Jianhui Sun, Xidong Wu, Heng Huang, Aidong Zhang
Federated Learning (FL) is a distributed learning paradigm where clients collaboratively train a model while keeping their own data private.
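For orientation, here is a minimal sketch of that paradigm in the style of federated averaging; the unweighted averaging, the local training loop, and all names are illustrative assumptions rather than the algorithm studied in the paper.

```python
import copy
import torch


def federated_averaging(global_model, clients, rounds=10, local_epochs=1, lr=0.01):
    """Minimal FedAvg-style loop: each client trains locally on its private data,
    and only model weights (never raw data) are sent back for aggregation."""
    for _ in range(rounds):
        client_states = []
        for loader in clients:  # each `loader` iterates one client's private dataset
            local_model = copy.deepcopy(global_model)
            opt = torch.optim.SGD(local_model.parameters(), lr=lr)
            for _ in range(local_epochs):
                for x, y in loader:
                    opt.zero_grad()
                    loss = torch.nn.functional.cross_entropy(local_model(x), y)
                    loss.backward()
                    opt.step()
            client_states.append(local_model.state_dict())
        # Server aggregates by (unweighted) parameter averaging.
        avg_state = {
            k: torch.stack([s[k].float() for s in client_states]).mean(dim=0)
            for k in client_states[0]
        }
        global_model.load_state_dict(avg_state)
    return global_model
```

In practice FedAvg weights clients by dataset size and adds many refinements, but the key property above, that raw data never leaves a client, is what the privacy claim rests on.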
no code implementations • 16 Jan 2025 • Sanchit Sinha, Guangzhi Xiong, Aidong Zhang
Vision Transformers (ViTs) are increasingly being adopted in sensitive vision applications such as medical diagnosis and facial recognition.
no code implementations • 20 Nov 2024 • Yifan Yang, Qiao Jin, Robert Leaman, Xiaoyu Liu, Guangzhi Xiong, Maame Sarfo-Gyamfi, Changlin Gong, Santiago Ferrière-Steinert, W. John Wilbur, Xiaojun Li, Jiaxin Yuan, Bang An, Kelvin S. Castro, Francisco Erramuspe Álvarez, Matías Stockle, Aidong Zhang, Furong Huang, Zhiyong Lu
The remarkable capabilities of Large Language Models (LLMs) make them increasingly compelling for adoption in real-world healthcare applications.
no code implementations • 8 Nov 2024 • Nicholas Wan, Qiao Jin, Joey Chan, Guangzhi Xiong, Serina Applebaum, Aidan Gilson, Reid McMurry, R. Andrew Taylor, Aidong Zhang, Qingyu Chen, Zhiyong Lu
Although large language models (LLMs) have been assessed for general medical knowledge using medical licensing exams, their ability to effectively support clinical decision-making tasks, such as selecting and using medical calculators, remains uncertain.
no code implementations • 4 Nov 2024 • Guangzhi Xiong, Eric Xie, Amir Hassan Shariatmadari, Sikun Guo, Stefan Bekiranov, Aidong Zhang
To overcome these challenges, we propose KG-CoI (Knowledge Grounded Chain of Ideas), a novel system that enhances LLM hypothesis generation by integrating external, structured knowledge from knowledge graphs (KGs).
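The following is only a rough sketch of the general pattern of grounding an LLM prompt in retrieved KG triples; the `retrieve_triples` and `llm` callables and the prompt format are illustrative assumptions, not the KG-CoI implementation.

```python
def kg_grounded_hypothesis(topic, retrieve_triples, llm, k=10):
    """Ground hypothesis generation in structured knowledge: fetch
    (head, relation, tail) triples about the topic and present them
    as explicit context for the LLM to reason over."""
    triples = retrieve_triples(topic, top_k=k)  # e.g., from a biomedical KG
    context = "\n".join(f"({h}, {r}, {t})" for h, r, t in triples)
    prompt = (
        f"Known facts from a knowledge graph:\n{context}\n\n"
        f"Using only ideas consistent with these facts, propose a research "
        f"hypothesis about {topic}, reasoning step by step."
    )
    return llm(prompt)
```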
no code implementations • 31 Oct 2024 • Sikun Guo, Amir Hassan Shariatmadari, Guangzhi Xiong, Albert Huang, Eric Xie, Stefan Bekiranov, Aidong Zhang
To address this gap, we propose IdeaBench, a benchmark system that includes a comprehensive dataset and an evaluation framework for standardizing the assessment of research idea generation using LLMs.
1 code implementation • 24 Oct 2024 • Qiao Jin, Nicholas Wan, Robert Leaman, Shubo Tian, Zhizheng Wang, Yifan Yang, Zifeng Wang, Guangzhi Xiong, Po-Ting Lai, Qingqing Zhu, Benjamin Hou, Maame Sarfo-Gyamfi, Gongbo Zhang, Aidan Gilson, Balu Bhasuran, Zhe He, Aidong Zhang, Jimeng Sun, Chunhua Weng, Ronald M. Summers, Qingyu Chen, Yifan Peng, Zhiyong Lu
We then review the strategies, such as prompt engineering and fine-tuning, to adapt standard LLMs to specialized medical tasks.
no code implementations • 20 Oct 2024 • Sanchit Sinha, Guangzhi Xiong, Aidong Zhang
Our method assumes generative factors and concepts to form a bipartite graph, with directed causal edges from generative factors to concepts.
1 code implementation • 7 Oct 2024 • Guangzhi Xiong, Sanchit Sinha, Aidong Zhang
In this work, we propose a new deep tabular learning method, termed Prototypical Neural Additive Model (ProtoNAM), which introduces prototypes into neural networks in the framework of GAMs.
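A minimal PyTorch sketch of the GAM backbone this builds on is shown below: one small network per feature whose outputs are summed, with a per-feature prototype layer as an illustrative stand-in for the prototypes described above (the layer sizes and the RBF-style similarity are assumptions, not ProtoNAM's exact design).

```python
import torch
import torch.nn as nn


class PrototypeShapeFunction(nn.Module):
    """Shape function for one tabular feature: the scalar input is compared
    with learned prototypes, and the similarities feed a small MLP."""

    def __init__(self, n_prototypes=8, hidden=16):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_prototypes))
        self.net = nn.Sequential(nn.Linear(n_prototypes, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x_j):  # x_j: (batch, 1), a single feature column
        sim = -(x_j - self.prototypes) ** 2  # RBF-style similarity to each prototype
        return self.net(sim)                 # (batch, 1) additive contribution


class AdditiveModel(nn.Module):
    """GAM: the prediction is a sum of per-feature contributions, so each
    feature's learned shape function can be inspected on its own."""

    def __init__(self, n_features):
        super().__init__()
        self.shape_fns = nn.ModuleList([PrototypeShapeFunction() for _ in range(n_features)])
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):  # x: (batch, n_features)
        contributions = [f(x[:, j:j + 1]) for j, f in enumerate(self.shape_fns)]
        return torch.cat(contributions, dim=1).sum(dim=1, keepdim=True) + self.bias
```

Because the prediction decomposes into per-feature terms, each shape function can be plotted against its feature, which is the interpretability that GAMs are valued for.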
1 code implementation • 4 Sep 2024 • Guangtao Zheng, Wenqian Ye, Aidong Zhang
In this paper, we propose a systematic and rigorous benchmark framework, termed FewSTAB, to fairly demonstrate and quantify varied degrees of robustness of few-shot classifiers to spurious bias.
1 code implementation • 1 Aug 2024 • Guangzhi Xiong, Qiao Jin, Xiao Wang, Minjia Zhang, Zhiyong Lu, Aidong Zhang
The emergent abilities of large language models (LLMs) have demonstrated great potential in solving medical questions.
no code implementations • 27 Jul 2024 • Sanchit Sinha, Guangzhi Xiong, Aidong Zhang
Interpretability of Deep Neural Networks using concept-based models offers a promising way to explain model behavior through human-understandable concepts.
no code implementations • 24 Jun 2024 • Wenqian Ye, Guangtao Zheng, Yunsheng Ma, Xu Cao, Bolin Lai, James M. Rehg, Aidong Zhang
Our findings illuminate the persistence of these models' reliance on spurious correlations and underscore the urgent need for new methodologies to mitigate spurious biases.
1 code implementation • 17 Jun 2024 • Nikhil Khandekar, Qiao Jin, Guangzhi Xiong, Soren Dunn, Serina S Applebaum, Zain Anwar, Maame Sarfo-Gyamfi, Conrad W Safranek, Abid A Anwar, Andrew Zhang, Aidan Gilson, Maxwell B Singer, Amisha Dave, Andrew Taylor, Aidong Zhang, Qingyu Chen, Zhiyong Lu
To this end, we propose MedCalc-Bench, a first-of-its-kind dataset focused on evaluating the medical calculation capability of LLMs.
1 code implementation • 15 Jun 2024 • Guangtao Zheng, Wenqian Ye, Aidong Zhang
In this paper, we propose a novel learning framework based on meta-learning, termed SPUME -- SPUriousness-aware MEta-learning, to train an image classifier to be robust to spurious correlations.
no code implementations • 19 May 2024 • Sanchit Sinha, Yuguang Yue, Victor Soto, Mayank Kulkarni, Jianhua Lu, Aidong Zhang
In this paper, we propose MAML-en-LLM, a novel method for meta-training LLMs that learns truly generalizable parameters which not only perform well on disjoint tasks but also adapt to unseen tasks.
1 code implementation • 6 May 2024 • Guangtao Zheng, Wenqian Ye, Aidong Zhang
The fine-grained training labels are derived from the distinct prediction behaviors of the classifier, which are identified in a novel spuriousness embedding space.
1 code implementation • 1 May 2024 • Sanchit Sinha, Guangzhi Xiong, Aidong Zhang
With the wide proliferation of Deep Neural Networks in high-stakes applications, there is a growing demand for explainability behind their decision-making process.
1 code implementation • 20 Feb 2024 • Wenqian Ye, Guangtao Zheng, Xu Cao, Yunsheng Ma, Aidong Zhang
Machine learning systems are known to be sensitive to spurious correlations between non-essential features of the inputs (e.g., background, texture, and secondary objects) and the corresponding labels.
2 code implementations • 20 Feb 2024 • Guangzhi Xiong, Qiao Jin, Zhiyong Lu, Aidong Zhang
However, a RAG system can involve multiple flexible components, and there is a lack of best practices regarding the optimal RAG setting for various medical purposes.
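For reference, the moving parts of a typical RAG pipeline can be sketched as below; the retriever, corpus, prompt template, and snippet count are illustrative placeholders rather than any specific configuration evaluated in the paper.

```python
def rag_answer(question, retriever, llm, k=5):
    """Retrieval-augmented generation: fetch supporting snippets,
    then condition the LLM's answer on them."""
    snippets = retriever(question, top_k=k)  # e.g., lexical or dense retrieval over a medical corpus
    context = "\n\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    prompt = (
        f"Context documents:\n{context}\n\n"
        f"Question: {question}\n"
        f"Answer using only the context above and cite sources by number."
    )
    return llm(prompt)
```

Every piece here (the corpus, the retriever, the number of retrieved snippets, the backbone LLM) is a swappable component, which is exactly the design space the sentence above refers to.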
1 code implementation • 20 Dec 2023 • Guangtao Zheng, Mengdi Huai, Aidong Zhang
Then, we propose Adversarial learning with Semantics Transformations (AdvST) that augments the source domain data with semantics transformations and learns a robust model with the augmented data.
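As a rough illustration of the adversarial-augmentation pattern (not the paper's AdvST algorithm), one can alternate between perturbing transformation parameters to maximize the loss and updating the model on the transformed data; the `transform` callable, the per-sample parameterization, and the step sizes below are assumptions.

```python
import torch


def adversarial_augment_step(model, x, y, transform, loss_fn,
                             inner_steps=3, inner_lr=0.1):
    """One round of adversarial data augmentation: search for transformation
    parameters that hurt the model most, then train on the transformed batch."""
    params = torch.zeros(x.size(0), requires_grad=True)  # assumed per-sample transformation strength
    for _ in range(inner_steps):  # inner maximization over transformation parameters
        loss = loss_fn(model(transform(x, params)), y)
        grad, = torch.autograd.grad(loss, params)
        params = (params + inner_lr * grad.sign()).detach().requires_grad_(True)
    x_aug = transform(x, params.detach())
    return loss_fn(model(x_aug), y)  # outer minimization: backprop this loss and step the optimizer
```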
no code implementations • 19 Dec 2023 • Jianhui Sun, Xidong Wu, Heng Huang, Aidong Zhang
To the best of our knowledge, this is the first work that thoroughly analyzes the performance of server momentum with a hyperparameter scheduler and system heterogeneity.
no code implementations • 1 Nov 2023 • Jiayi Chen, Hanjun Dai, Bo Dai, Aidong Zhang, Wei Wei
However, prior works on few-shot VDER mainly address the problem at the document level with a predefined global entity space, which does not account for the entity-level few-shot scenario: target entity types are locally personalized by each task, and entity occurrences vary significantly among documents.
1 code implementation • NeurIPS 2023 • Xidong Wu, Jianhui Sun, Zhengmian Hu, Aidong Zhang, Heng Huang
We propose FL algorithms (FedSGDA+ and FedSGDA-M) and reduce existing complexity results for the most common minimax problems.
no code implementations • 17 Jul 2023 • Jing Ma, Ruocheng Guo, Aidong Zhang, Jundong Li
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
1 code implementation • 5 Jun 2023 • Jianhui Sun, Sanchit Sinha, Aidong Zhang
We approximate the dynamic of PGD-AT by a continuous-time Stochastic Differential Equation (SDE), and show that the diffusion term of this SDE determines the robust generalization.
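As a hedged illustration of the general form such an approximation takes (a generic SDE model of stochastic gradient dynamics, not the paper's exact derivation), the parameter trajectory can be written as

```latex
% Generic SDE approximation of a stochastic gradient-based update (illustrative form):
% the drift follows the (adversarial) training loss, and the diffusion term carries
% the covariance of the stochastic gradients.
\[
  d\theta_t \;=\; -\nabla_\theta L(\theta_t)\, dt \;+\; \sqrt{\eta\, \Sigma(\theta_t)}\, dW_t ,
\]
% where $\eta$ is the learning rate, $\Sigma(\theta_t)$ the gradient-noise covariance,
% and $W_t$ a standard Wiener process; it is this diffusion term that the paper's
% analysis links to robust generalization.
```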
no code implementations • 29 Nov 2022 • Sanchit Sinha, Mengdi Huai, Jianhui Sun, Aidong Zhang
Subsequently, we propose a potential general adversarial training-based defense mechanism to increase the robustness of these systems to the proposed malicious attacks.
no code implementations • 16 Oct 2022 • Jing Ma, Ruocheng Guo, Saumitra Mishra, Aidong Zhang, Jundong Li
Counterfactual explanations promote explainability in machine learning models by answering the question "how should an input instance be perturbed to obtain a desired predicted label?".
3 code implementations • Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining 2022 • Guangxu Xun, Kishlay Jha, Jianhui Sun, Aidong Zhang
This paper develops the Correlation Networks (CorNet) architecture for the extreme multi-label text classification (XMTC) task, where the objective is to tag an input text sequence with the most relevant subset of labels from an extremely large label set.
1 code implementation • 10 Jan 2022 • Jing Ma, Ruocheng Guo, Mengting Wan, Longqi Yang, Aidong Zhang, Jundong Li
In this framework, we generate counterfactuals corresponding to perturbations of each node's and its neighbors' sensitive attributes.
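A hedged sketch of that perturbation step on a node-feature matrix is given below; the one-hop neighborhood, the dense adjacency tensor, and the binary sensitive attribute column are all assumptions for illustration.

```python
import torch


def flip_sensitive_attributes(x, adj, node_idx, sensitive_col):
    """Build a counterfactual input for `node_idx` by flipping a binary
    sensitive attribute of the node and its one-hop neighbors."""
    x_cf = x.clone()
    neighbors = torch.nonzero(adj[node_idx], as_tuple=True)[0]
    targets = torch.cat([torch.tensor([node_idx]), neighbors])
    x_cf[targets, sensitive_col] = 1.0 - x_cf[targets, sensitive_col]
    return x_cf  # compare predictions on x vs. x_cf to probe counterfactual fairness on graphs
```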
no code implementations • 17 May 2021 • Jiayi Chen, Aidong Zhang
To deal with task heterogeneity and promote fast within-task adaptation for each type of task, in this paper we propose HetMAML, a task-heterogeneous model-agnostic meta-learning framework that can capture both type-specific and globally shared knowledge and can balance knowledge customization against generalization.
1 code implementation • 5 Feb 2020 • Liuyi Yao, Zhixuan Chu, Sheng Li, Yaliang Li, Jing Gao, Aidong Zhang
With the rapid development of machine learning, a variety of causal effect estimation methods for observational data have emerged.
no code implementations • 3 Jun 2019 • Tianle Ma, Aidong Zhang
To address this challenge, we developed the Factor Graph Neural Network model, which is both interpretable and predictive, by combining probabilistic graphical models with deep learning.
1 code implementation • NeurIPS 2018 • Liuyi Yao, Sheng Li, Yaliang Li, Mengdi Huai, Jing Gao, Aidong Zhang
Estimating individual treatment effect (ITE) is a challenging problem in causal inference, due to the missing counterfactuals and the selection bias.
no code implementations • 6 Sep 2018 • Tianle Ma, Aidong Zhang
Our framework employs deep representation learning to learn feature embeddings and patient embeddings simultaneously, enabling us to integrate feature interaction network and patient view similarity network constraints into the training objective.
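One common way to picture such network constraints in a training objective, shown here only as a generic illustration with assumed weighting terms rather than the paper's loss, is a graph-smoothness penalty on the learned embeddings.

```python
import torch


def graph_smoothness_penalty(embeddings, adjacency):
    """Penalize embedding distance between items that are linked in a given
    network (e.g., feature interaction or patient similarity), so connected
    items end up with similar representations."""
    diff = embeddings.unsqueeze(0) - embeddings.unsqueeze(1)   # (n, n, d) pairwise differences
    sq_dist = (diff ** 2).sum(dim=-1)                          # (n, n) squared distances
    return (adjacency * sq_dist).sum() / adjacency.sum().clamp(min=1.0)


# Illustrative composite objective (alpha, beta are assumed hyperparameters):
# total_loss = task_loss + alpha * graph_smoothness_penalty(feature_emb, feature_net) \
#                        + beta  * graph_smoothness_penalty(patient_emb, patient_net)
```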
1 code implementation • 22 May 2018 • Tianle Ma, Aidong Zhang
The kNN attention pooling layer is a generalization of the Graph Attention Model (GAM) and can be applied not only to graphs but to any set of objects, regardless of whether a graph is given.
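A compact sketch of the idea follows: attention weights are computed only over each object's k nearest neighbors in the set, so the layer works whether or not an explicit graph is supplied (the shapes and the scaled dot-product score are assumptions, not the paper's exact formulation).

```python
import torch
import torch.nn.functional as F


def knn_attention_pooling(features, k=5):
    """Attention pooling over each object's k nearest neighbors in a set.
    When no graph is given, the kNN structure acts as an implicit graph."""
    dists = torch.cdist(features, features)               # (n, n) pairwise distances
    knn_idx = dists.topk(k + 1, largest=False).indices    # nearest neighbors, including self
    neighbors = features[knn_idx]                          # (n, k+1, d)
    scores = torch.einsum("nd,nkd->nk", features, neighbors) / features.size(1) ** 0.5
    weights = F.softmax(scores, dim=-1)                    # attention over each neighborhood
    return torch.einsum("nk,nkd->nd", weights, neighbors)  # pooled representations
```

Because the neighborhood is derived from the features themselves, no explicit graph input is required, which is what lets the layer handle arbitrary sets of objects.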