Search Results for author: Dajiang Zhu

Found 49 papers, 8 papers with code

Eye-gaze Guided Multi-modal Alignment Framework for Radiology

1 code implementation · 19 Mar 2024 · Chong Ma, Hanqi Jiang, WenTing Chen, Zihao Wu, Xiaowei Yu, Fang Zeng, Lei Guo, Dajiang Zhu, Tuo Zhang, Dinggang Shen, Tianming Liu, Xiang Li

Additionally, we explore the impact of varying amounts of eye-gaze data on model performance, highlighting the feasibility and utility of integrating this auxiliary data into multi-modal pre-training.

Zero-Shot Learning

InterLUDE: Interactions between Labeled and Unlabeled Data to Enhance Semi-Supervised Learning

no code implementations · 15 Mar 2024 · Zhe Huang, Xiaowei Yu, Dajiang Zhu, Michael C. Hughes

In this paper, we introduce InterLUDE, a new approach to enhancing SSL composed of two parts, each of which benefits from labeled-unlabeled interaction.

Image Classification · Representation Learning

Robust Core-Periphery Constrained Transformer for Domain Adaptation

no code implementations · 25 Aug 2023 · Xiaowei Yu, Dajiang Zhu, Tianming Liu

Unsupervised domain adaptation (UDA) aims to learn transferable representations across domains.

Unsupervised Domain Adaptation

Review of Large Vision Models and Visual Prompt Engineering

no code implementations · 3 Jul 2023 · Jiaqi Wang, Zhengliang Liu, Lin Zhao, Zihao Wu, Chong Ma, Sigang Yu, Haixing Dai, Qiushi Yang, Yiheng Liu, Songyao Zhang, Enze Shi, Yi Pan, Tuo Zhang, Dajiang Zhu, Xiang Li, Xi Jiang, Bao Ge, Yixuan Yuan, Dinggang Shen, Tianming Liu, Shu Zhang

This review summarizes the methods employed in the computer vision domain for large vision models and visual prompt engineering, and explores the latest advancements in the field.

Prompt Engineering

Segment Anything Model (SAM) for Radiation Oncology

no code implementations · 20 Jun 2023 · Lian Zhang, Zhengliang Liu, Lu Zhang, Zihao Wu, Xiaowei Yu, Jason Holmes, Hongying Feng, Haixing Dai, Xiang Li, Quanzheng Li, Dajiang Zhu, Tianming Liu, Wei Liu

Given that SAM, a model pre-trained purely on natural images, can handle the delineation of OARs from medical images with clinically acceptable accuracy, these results highlight SAM's robust generalization capabilities and consistent accuracy in automatic segmentation for radiotherapy.

Segmentation

AD-AutoGPT: An Autonomous GPT for Alzheimer's Disease Infodemiology

no code implementations · 16 Jun 2023 · Haixing Dai, Yiwei Li, Zhengliang Liu, Lin Zhao, Zihao Wu, Suhang Song, Ye Shen, Dajiang Zhu, Xiang Li, Sheng Li, Xiaobai Yao, Lu Shi, Quanzheng Li, Zhuo Chen, Donglan Zhang, Gengchen Mai, Tianming Liu

In this pioneering study, inspired by AutoGPT, the state-of-the-art open-source application built on the GPT-4 large language model, we develop a novel tool called AD-AutoGPT that can autonomously collect, process, and analyze complex health narratives about Alzheimer's Disease in response to users' textual prompts.

Language Modelling · Large Language Model

Artificial General Intelligence for Medical Imaging

no code implementations · 8 Jun 2023 · Xiang Li, Lu Zhang, Zihao Wu, Zhengliang Liu, Lin Zhao, Yixuan Yuan, Jun Liu, Gang Li, Dajiang Zhu, Pingkun Yan, Quanzheng Li, Wei Liu, Tianming Liu, Dinggang Shen

In this review, we explore the potential applications of Artificial General Intelligence (AGI) models in healthcare, focusing on foundational Large Language Models (LLMs), Large Vision Models, and Large Multimodal Models.

Instruction-ViT: Multi-Modal Prompts for Instruction Learning in ViT

no code implementations · 29 Apr 2023 · Zhenxiang Xiao, Yuzhong Chen, Lu Zhang, Junjie Yao, Zihao Wu, Xiaowei Yu, Yi Pan, Lin Zhao, Chong Ma, Xinyu Liu, Wei Liu, Xiang Li, Yixuan Yuan, Dinggang Shen, Dajiang Zhu, Tianming Liu, Xi Jiang

Prompts have been proven to play a crucial role in large language models, and in recent years, vision models have also been using prompts to improve scalability for multiple downstream tasks.

Image Classification

Prompt Engineering for Healthcare: Methodologies and Applications

no code implementations · 28 Apr 2023 · Jiaqi Wang, Enze Shi, Sigang Yu, Zihao Wu, Chong Ma, Haixing Dai, Qiushi Yang, Yanqing Kang, Jinru Wu, Huawen Hu, Chenxi Yue, Haiyang Zhang, Yiheng Liu, Yi Pan, Zhengliang Liu, Lichao Sun, Xiang Li, Bao Ge, Xi Jiang, Dajiang Zhu, Yixuan Yuan, Dinggang Shen, Tianming Liu, Shu Zhang

Prompt engineering is a critical technique in the field of natural language processing that involves designing and optimizing the prompts used to input information into models, aiming to enhance their performance on specific tasks.

Machine Translation · Prompt Engineering · +3

Differentiate ChatGPT-generated and Human-written Medical Texts

no code implementations · 23 Apr 2023 · Wenxiong Liao, Zhengliang Liu, Haixing Dai, Shaochen Xu, Zihao Wu, Yiyang Zhang, Xiaoke Huang, Dajiang Zhu, Hongmin Cai, Tianming Liu, Xiang Li

We focus on analyzing the differences between medical texts written by human experts and generated by ChatGPT, and designing machine learning workflows to effectively detect and differentiate medical texts generated by ChatGPT.

ChatABL: Abductive Learning via Natural Language Interaction with ChatGPT

no code implementations · 21 Apr 2023 · Tianyang Zhong, Yaonai Wei, Li Yang, Zihao Wu, Zhengliang Liu, Xiaozheng Wei, Wenjun Li, Junjie Yao, Chong Ma, Xiang Li, Dajiang Zhu, Xi Jiang, Junwei Han, Dinggang Shen, Tianming Liu, Tuo Zhang

The proposed method leverages the strengths of LLMs in understanding and logical reasoning to correct incomplete logical facts and optimize the performance of the perceptual module, by summarizing and reorganizing reasoning rules represented in natural language.

Decipherment · Logical Reasoning

Exploring the Trade-Offs: Unified Large Language Models vs Local Fine-Tuned Models for Highly-Specific Radiology NLI Task

no code implementations · 18 Apr 2023 · Zihao Wu, Lu Zhang, Chao Cao, Xiaowei Yu, Haixing Dai, Chong Ma, Zhengliang Liu, Lin Zhao, Gang Li, Wei Liu, Quanzheng Li, Dinggang Shen, Xiang Li, Dajiang Zhu, Tianming Liu

To this end, in this study, we evaluate the performance of ChatGPT/GPT-4 on a radiology NLI task and compare it to other models fine-tuned specifically on task-related data samples.

Specificity · Task 2

ImpressionGPT: An Iterative Optimizing Framework for Radiology Report Summarization with ChatGPT

2 code implementations · 17 Apr 2023 · Chong Ma, Zihao Wu, Jiaqi Wang, Shaochen Xu, Yaonai Wei, Zhengliang Liu, Xi Jiang, Lei Guo, Xiaoyan Cai, Shu Zhang, Tuo Zhang, Dajiang Zhu, Dinggang Shen, Tianming Liu, Xiang Li

The 'Impression' section of a radiology report is a critical basis for communication between radiologists and other physicians, and it is typically written by radiologists based on the 'Findings' section.

In-Context Learning

AGI for Agriculture

no code implementations · 12 Apr 2023 · Guoyu Lu, Sheng Li, Gengchen Mai, Jin Sun, Dajiang Zhu, Lilong Chai, Haijian Sun, Xianqiao Wang, Haixing Dai, Ninghao Liu, Rui Xu, Daniel Petti, Tianming Liu, Changying Li

Artificial General Intelligence (AGI) is poised to revolutionize a variety of sectors, including healthcare, finance, transportation, and education.

Decision Making · Knowledge Graphs · +1

Summary of ChatGPT-Related Research and Perspective Towards the Future of Large Language Models

no code implementations · 4 Apr 2023 · Yiheng Liu, Tianle Han, Siyuan Ma, Jiayue Zhang, Yuanyuan Yang, Jiaming Tian, Hao He, Antong Li, Mengshen He, Zhengliang Liu, Zihao Wu, Lin Zhao, Dajiang Zhu, Xiang Li, Ning Qiang, Dinggang Shen, Tianming Liu, Bao Ge

This paper presents a comprehensive survey of ChatGPT-related (GPT-3.5 and GPT-4) research, state-of-the-art large language models (LLMs) from the GPT series, and their prospective applications across diverse domains.

When Brain-inspired AI Meets AGI

no code implementations · 28 Mar 2023 · Lin Zhao, Lu Zhang, Zihao Wu, Yuzhong Chen, Haixing Dai, Xiaowei Yu, Zhengliang Liu, Tuo Zhang, Xintao Hu, Xi Jiang, Xiang Li, Dajiang Zhu, Dinggang Shen, Tianming Liu

Artificial General Intelligence (AGI) has been a long-standing goal of humanity, with the aim of creating machines capable of performing any intellectual task that humans can do.

In-Context Learning

CP-CNN: Core-Periphery Principle Guided Convolutional Neural Network

no code implementations · 27 Mar 2023 · Lin Zhao, Haixing Dai, Zihao Wu, Dajiang Zhu, Tianming Liu

In this study, we explore a novel brain-inspired design principle based on the core-periphery property of the human brain network to guide the design of CNNs.

Neural Architecture Search

Core-Periphery Principle Guided Redesign of Self-Attention in Transformers

no code implementations · 27 Mar 2023 · Xiaowei Yu, Lu Zhang, Haixing Dai, Yanjun Lyu, Lin Zhao, Zihao Wu, David Liu, Tianming Liu, Dajiang Zhu

Designing more efficient, reliable, and explainable neural network architectures is critical to studies that are based on artificial intelligence (AI) techniques.

DeID-GPT: Zero-shot Medical Text De-Identification by GPT-4

1 code implementation · 20 Mar 2023 · Zhengliang Liu, Yue Huang, Xiaowei Yu, Lu Zhang, Zihao Wu, Chao Cao, Haixing Dai, Lin Zhao, Yiwei Li, Peng Shu, Fang Zeng, Lichao Sun, Wei Liu, Dinggang Shen, Quanzheng Li, Tianming Liu, Dajiang Zhu, Xiang Li

The digitization of healthcare has facilitated the sharing and re-using of medical data but has also raised concerns about confidentiality and privacy.

Benchmarking · De-identification · +4

Gyri vs. Sulci: Disentangling Brain Core-Periphery Functional Networks via Twin-Transformer

no code implementations · 31 Jan 2023 · Xiaowei Yu, Lu Zhang, Haixing Dai, Lin Zhao, Yanjun Lyu, Zihao Wu, Tianming Liu, Dajiang Zhu

To solve this fundamental problem, we design a novel Twin-Transformer framework to unveil the unique functional roles of gyri and sulci as well as their relationship in the whole brain function.

Anatomy

BI AVAN: Brain inspired Adversarial Visual Attention Network

no code implementations · 27 Oct 2022 · Heng Huang, Lin Zhao, Xintao Hu, Haixing Dai, Lu Zhang, Dajiang Zhu, Tianming Liu

Visual attention is a fundamental mechanism in the human brain, and it inspires the design of attention mechanisms in deep neural networks.

Coupling Visual Semantics of Artificial Neural Networks and Human Brain Function via Synchronized Activations

no code implementations · 22 Jun 2022 · Lin Zhao, Haixing Dai, Zihao Wu, Zhenxiang Xiao, Lu Zhang, David Weizhong Liu, Xintao Hu, Xi Jiang, Sheng Li, Dajiang Zhu, Tianming Liu

However, whether there exist semantic correlations/connections between the visual representations in ANNs and those in BNNs remains largely unexplored, due both to the lack of an effective tool to link and couple the two different domains and to the lack of a general and effective framework for representing the visual semantics in BNNs, such as human functional brain networks (FBNs).

Image Classification · Representation Learning

Rectify ViT Shortcut Learning by Visual Saliency

no code implementations · 17 Jun 2022 · Chong Ma, Lin Zhao, Yuzhong Chen, David Weizhong Liu, Xi Jiang, Tuo Zhang, Xintao Hu, Dinggang Shen, Dajiang Zhu, Tianming Liu

In this work, we propose a novel and effective saliency-guided vision transformer (SGT) model to rectify shortcut learning in ViT in the absence of eye-gaze data.

Representing Brain Anatomical Regularity and Variability by Few-Shot Embedding

no code implementations · 26 May 2022 · Lu Zhang, Xiaowei Yu, Yanjun Lyu, Zhengwang Wu, Haixing Dai, Lin Zhao, Li Wang, Gang Li, Tianming Liu, Dajiang Zhu

Our experimental results show that: 1) the learned embedding vectors can quantitatively encode the commonality and individuality of cortical folding patterns; 2) with the embeddings we can robustly infer the complicated many-to-many anatomical correspondences among different brains; and 3) our model can be successfully transferred to new populations with very limited training samples.

Few-Shot Learning

Eye-gaze-guided Vision Transformer for Rectifying Shortcut Learning

no code implementations · 25 May 2022 · Chong Ma, Lin Zhao, Yuzhong Chen, Lu Zhang, Zhenxiang Xiao, Haixing Dai, David Liu, Zihao Wu, Zhengliang Liu, Sheng Wang, Jiaxing Gao, Changhe Li, Xi Jiang, Tuo Zhang, Qian Wang, Dinggang Shen, Dajiang Zhu, Tianming Liu

To address this problem, we propose to infuse human experts' intelligence and domain knowledge into the training of deep neural networks.

Brain Cortical Functional Gradients Predict Cortical Folding Patterns via Attention Mesh Convolution

no code implementations · 21 May 2022 · Li Yang, Zhibin He, Changhe Li, Junwei Han, Dajiang Zhu, Tianming Liu, Tuo Zhang

The convolution on the mesh considers the spatial organization of functional gradients and folding patterns on a cortical sheet, and the newly designed channel attention block enhances the interpretability of the contribution of different functional gradients to cortical folding prediction.

Anatomy

A Unified and Biologically-Plausible Relational Graph Representation of Vision Transformers

no code implementations · 20 May 2022 · Yuzhong Chen, Yu Du, Zhenxiang Xiao, Lin Zhao, Lu Zhang, David Weizhong Liu, Dajiang Zhu, Tuo Zhang, Xintao Hu, Tianming Liu, Xi Jiang

The key characteristic of these ViT models is to adopt different aggregation strategies of spatial patch information within the artificial neural networks (ANNs).

Mask-guided Vision Transformer (MG-ViT) for Few-Shot Learning

no code implementations · 20 May 2022 · Yuzhong Chen, Zhenxiang Xiao, Lin Zhao, Lu Zhang, Haixing Dai, David Weizhong Liu, Zihao Wu, Changhe Li, Tuo Zhang, Changying Li, Dajiang Zhu, Tianming Liu, Xi Jiang

However, for data-intensive models such as the vision transformer (ViT), current fine-tuning-based FSL approaches are inefficient in knowledge generalization and thus degrade downstream task performance.

Active Learning · Few-Shot Learning

Discovering Dynamic Functional Brain Networks via Spatial and Channel-wise Attention

1 code implementation · 19 May 2022 · Yiheng Liu, Enjie Ge, Mengshen He, Zhengliang Liu, Shijie Zhao, Xintao Hu, Dajiang Zhu, Tianming Liu, Bao Ge

More importantly, our proposed hybrid attention modules (SA and CA) do not enforce the assumptions of linearity and independence made by previous methods, and thus provide a novel approach to better understanding dynamic functional brain networks.
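
For orientation only, the following is a minimal, hypothetical PyTorch sketch of what channel-wise (CA) and spatial (SA) attention modules of this general kind can look like; the (batch, channels, time) tensor layout, reduction ratio, and module designs are assumptions for illustration, not the architecture released with this paper.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gating over channels (assumed design)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) -> learn a weight in [0, 1] per channel
        weights = self.fc(x.mean(dim=-1))            # (batch, channels)
        return x * weights.unsqueeze(-1)             # rescale each channel

class SpatialAttention(nn.Module):
    """1-D convolutional gating over the temporal axis (assumed design)."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv1d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pool across channels, then learn a per-position attention map.
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.conv(pooled))  # (batch, channels, time)

# Toy usage: 8 samples, 32 channels, 128 time points, CA followed by SA.
x = torch.randn(8, 32, 128)
y = SpatialAttention()(ChannelAttention(32)(x))
print(y.shape)  # torch.Size([8, 32, 128])

Stacking the two gates in this order is just one plausible arrangement; the released implementation should be consulted for the actual design.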

Disentangling Spatial-Temporal Functional Brain Networks via Twin-Transformers

no code implementations · 20 Apr 2022 · Xiaowei Yu, Lu Zhang, Lin Zhao, Yanjun Lyu, Tianming Liu, Dajiang Zhu

In this work, we propose a novel Twin-Transformers framework to simultaneously infer common and individual functional networks in both spatial and temporal space, in a self-supervised manner.

Representative Functional Connectivity Learning for Multiple Clinical groups in Alzheimer's Disease

no code implementations · 14 Jun 2021 · Lu Zhang, Xiaowei Yu, Yanjun Lyu, Li Wang, Dajiang Zhu

By mapping the learned clinical-group-related feature vectors back to the original FC space, we constructed representative FCs for each group.

Multi-class Classification

Disease2Vec: Representing Alzheimer's Progression via Disease Embedding Tree

no code implementations · 13 Feb 2021 · Lu Zhang, Li Wang, Tianming Liu, Dajiang Zhu

By disease embedding, the framework generates a disease embedding tree (DETree) which effectively represents different clinical stages as a tree trajectory reflecting AD progression and thus can be used to predict clinical status by projecting individuals onto this continuous trajectory.

Multi-class Classification

Individualized ROI Optimization via Maximization of Group-wise Consistency of Structural and Functional Profiles

no code implementations · NeurIPS 2010 · Kaiming Li, Lei Guo, Carlos Faraco, Dajiang Zhu, Fan Deng, Tuo Zhang, Xi Jiang, Degang Zhang, Hanbo Chen, Xintao Hu, Steve Miller, Tianming Liu

Our strategy is to formulate the individual ROI optimization as a group variance minimization problem, in which group-wise functional and structural connectivity patterns, and anatomic profiles are defined as optimization constraints.
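
As a rough illustration of this kind of formulation (the notation below is assumed, not taken from the paper), the individual ROI locations can be sought by minimizing the group variance of connectivity profiles subject to anatomical constraints:

\min_{x_1,\dots,x_N}\; \sum_{i=1}^{N} \bigl\| \phi(x_i) - \bar{\phi} \bigr\|^2,
\qquad \bar{\phi} = \frac{1}{N}\sum_{j=1}^{N} \phi(x_j),
\qquad \text{s.t. } x_i \in \Omega_i,

where x_i is the ROI location for subject i, \phi(x_i) is that subject's functional/structural connectivity profile at the candidate location, and \Omega_i is an anatomically defined search region (e.g., a neighborhood around an initial ROI).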
