Search Results for author: Zihao Wu

Found 58 papers, 7 papers with code

Extracting 2D weak labels from volume labels using multiple instance learning in CT hemorrhage detection

1 code implementation • 13 Nov 2019 • Samuel W. Remedios, Zihao Wu, Camilo Bermudez, Cailey I. Kerley, Snehashis Roy, Mayur B. Patel, John A. Butman, Bennett A. Landman, Dzung L. Pham

Multiple instance learning (MIL) is a supervised learning methodology that aims to allow models to learn instance class labels from bag class labels, where a bag is defined to contain multiple instances.

Multiple Instance Learning
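
A minimal sketch of the bag/instance relationship described above, under the standard MIL assumption that a bag is positive if any of its instances is positive (the function and the slice scores below are hypothetical, not the authors' code):

```python
# Hedged sketch of the standard MIL assumption: a bag (e.g., a CT volume) is
# positive if at least one of its instances (e.g., 2D slices) is positive.
import numpy as np

def bag_label_from_instances(instance_scores, threshold=0.5):
    """Aggregate per-instance scores into a bag-level label via max pooling."""
    return int(np.max(np.asarray(instance_scores)) >= threshold)

# Hypothetical slice-level hemorrhage scores for one volume.
slice_scores = [0.05, 0.12, 0.91, 0.30]
print(bag_label_from_instances(slice_scores))  # 1 -> the whole volume is labeled positive
```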

Mask-guided Vision Transformer (MG-ViT) for Few-Shot Learning

no code implementations • 20 May 2022 • Yuzhong Chen, Zhenxiang Xiao, Lin Zhao, Lu Zhang, Haixing Dai, David Weizhong Liu, Zihao Wu, Changhe Li, Tuo Zhang, Changying Li, Dajiang Zhu, Tianming Liu, Xi Jiang

However, for data-intensive models such as the vision transformer (ViT), current fine-tuning-based FSL approaches are inefficient in knowledge generalization and thus degrade downstream task performance.

Active Learning Few-Shot Learning

Eye-gaze-guided Vision Transformer for Rectifying Shortcut Learning

no code implementations • 25 May 2022 • Chong Ma, Lin Zhao, Yuzhong Chen, Lu Zhang, Zhenxiang Xiao, Haixing Dai, David Liu, Zihao Wu, Zhengliang Liu, Sheng Wang, Jiaxing Gao, Changhe Li, Xi Jiang, Tuo Zhang, Qian Wang, Dinggang Shen, Dajiang Zhu, Tianming Liu

To address this problem, we propose to infuse human experts' intelligence and domain knowledge into the training of deep neural networks.

Coupling Visual Semantics of Artificial Neural Networks and Human Brain Function via Synchronized Activations

no code implementations • 22 Jun 2022 • Lin Zhao, Haixing Dai, Zihao Wu, Zhenxiang Xiao, Lu Zhang, David Weizhong Liu, Xintao Hu, Xi Jiang, Sheng Li, Dajiang Zhu, Tianming Liu

However, whether semantic correlations/connections exist between the visual representations in ANNs and those in BNNs remains largely unexplored, due both to the lack of an effective tool for linking and coupling the two different domains and to the lack of a general, effective framework for representing the visual semantics of BNNs, such as human functional brain networks (FBNs).

Image Classification Representation Learning

Is Multi-Task Learning an Upper Bound for Continual Learning?

no code implementations • 26 Oct 2022 • Zihao Wu, Huy Tran, Hamed Pirsiavash, Soheil Kolouri

Moreover, it is conceivable that, when learning from multiple tasks, a small subset of these tasks could act as adversarial tasks and reduce the overall learning performance in a multi-task setting.

Continual Learning Multi-Task Learning +1
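
One way to make the notion of "adversarial tasks" concrete is to inspect how well per-task gradients on the shared parameters align; the snippet below is an illustrative sketch (not the paper's experiments), with made-up gradient vectors:

```python
# Illustrative sketch: cosine similarity between two tasks' gradients on shared
# parameters; a negative value means the tasks pull the model in opposing
# directions and can behave adversarially in multi-task training.
import numpy as np

def gradient_alignment(grad_a, grad_b):
    a, b = np.asarray(grad_a, float), np.asarray(grad_b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(gradient_alignment([1.0, 0.5], [0.9, 0.6]))    # close to +1: tasks cooperate
print(gradient_alignment([1.0, 0.5], [-0.8, -0.4]))  # close to -1: tasks conflict
```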

Disentangled Representation Learning

no code implementations • 21 Nov 2022 • Xin Wang, Hong Chen, Si'ao Tang, Zihao Wu, Wenwu Zhu

Disentangled Representation Learning (DRL) aims to learn a model capable of identifying and disentangling the underlying factors hidden in the observable data in representation form.

Representation Learning
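
As a concrete instance of the DRL objectives such a survey covers, the sketch below shows the widely used beta-VAE loss, which encourages each latent dimension to capture an independent generative factor (the numbers are illustrative and not taken from the paper):

```python
# Hedged sketch of the beta-VAE objective, a common disentanglement method:
# reconstruction term plus a KL term to the N(0, I) prior, up-weighted by beta.
import numpy as np

def beta_vae_loss(recon_error, mu, logvar, beta=4.0):
    """mu, logvar: per-dimension parameters of the Gaussian posterior q(z|x)."""
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
    return recon_error + beta * kl

# With the posterior equal to the prior, the KL term vanishes.
print(beta_vae_loss(10.0, mu=np.zeros(8), logvar=np.zeros(8)))  # 10.0
```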

Gyri vs. Sulci: Disentangling Brain Core-Periphery Functional Networks via Twin-Transformer

no code implementations • 31 Jan 2023 • Xiaowei Yu, Lu Zhang, Haixing Dai, Lin Zhao, Yanjun Lyu, Zihao Wu, Tianming Liu, Dajiang Zhu

To solve this fundamental problem, we design a novel Twin-Transformer framework to unveil the unique functional roles of gyri and sulci as well as their relationship in the whole brain function.

Anatomy

DeID-GPT: Zero-shot Medical Text De-Identification by GPT-4

1 code implementation • 20 Mar 2023 • Zhengliang Liu, Yue Huang, Xiaowei Yu, Lu Zhang, Zihao Wu, Chao Cao, Haixing Dai, Lin Zhao, Yiwei Li, Peng Shu, Fang Zeng, Lichao Sun, Wei Liu, Dinggang Shen, Quanzheng Li, Tianming Liu, Dajiang Zhu, Xiang Li

The digitization of healthcare has facilitated the sharing and re-using of medical data but has also raised concerns about confidentiality and privacy.

Benchmarking De-identification +4

Coupling Artificial Neurons in BERT and Biological Neurons in the Human Brain

no code implementations • 27 Mar 2023 • Xu Liu, Mengyue Zhou, Gaosheng Shi, Yu Du, Lin Zhao, Zihao Wu, David Liu, Tianming Liu, Xintao Hu

Linking computational natural language processing (NLP) models to neural responses to language in the human brain, on the one hand, facilitates the effort to disentangle the neural representations underpinning language perception and, on the other hand, provides neurolinguistic evidence for evaluating and improving NLP models.

CP-CNN: Core-Periphery Principle Guided Convolutional Neural Network

no code implementations • 27 Mar 2023 • Lin Zhao, Haixing Dai, Zihao Wu, Dajiang Zhu, Tianming Liu

In this study, we explore a novel brain-inspired design principle based on the core-periphery property of the human brain network to guide the design of CNNs.

Neural Architecture Search

Core-Periphery Principle Guided Redesign of Self-Attention in Transformers

no code implementations • 27 Mar 2023 • Xiaowei Yu, Lu Zhang, Haixing Dai, Yanjun Lyu, Lin Zhao, Zihao Wu, David Liu, Tianming Liu, Dajiang Zhu

Designing more efficient, reliable, and explainable neural network architectures is critical to studies that are based on artificial intelligence (AI) techniques.

When Brain-inspired AI Meets AGI

no code implementations • 28 Mar 2023 • Lin Zhao, Lu Zhang, Zihao Wu, Yuzhong Chen, Haixing Dai, Xiaowei Yu, Zhengliang Liu, Tuo Zhang, Xintao Hu, Xi Jiang, Xiang Li, Dajiang Zhu, Dinggang Shen, Tianming Liu

Artificial General Intelligence (AGI) has been a long-standing goal of humanity, with the aim of creating machines capable of performing any intellectual task that humans can do.

In-Context Learning

Summary of ChatGPT-Related Research and Perspective Towards the Future of Large Language Models

no code implementations • 4 Apr 2023 • Yiheng Liu, Tianle Han, Siyuan Ma, Jiayue Zhang, Yuanyuan Yang, Jiaming Tian, Hao He, Antong Li, Mengshen He, Zhengliang Liu, Zihao Wu, Lin Zhao, Dajiang Zhu, Xiang Li, Ning Qiang, Dinggang Shen, Tianming Liu, Bao Ge

This paper presents a comprehensive survey of ChatGPT-related (GPT-3.5 and GPT-4) research, state-of-the-art large language models (LLMs) from the GPT series, and their prospective applications across diverse domains.

ImpressionGPT: An Iterative Optimizing Framework for Radiology Report Summarization with ChatGPT

2 code implementations • 17 Apr 2023 • Chong Ma, Zihao Wu, Jiaqi Wang, Shaochen Xu, Yaonai Wei, Zhengliang Liu, Xi Jiang, Lei Guo, Xiaoyan Cai, Shu Zhang, Tuo Zhang, Dajiang Zhu, Dinggang Shen, Tianming Liu, Xiang Li

The 'Impression' section of a radiology report is a critical basis for communication between radiologists and other physicians, and it is typically written by radiologists based on the 'Findings' section.

In-Context Learning
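
A minimal sketch of the underlying task (not ImpressionGPT's iterative optimization itself): prompting an LLM to draft the 'Impression' from the 'Findings', optionally with in-context examples; the report text below is hypothetical.

```python
# Hedged sketch: build a Findings-to-Impression prompt with optional few-shot
# examples, the kind of prompt such an iterative framework refines.
def impression_prompt(findings, examples=()):
    shots = "".join(f"Findings: {f}\nImpression: {i}\n\n" for f, i in examples)
    return shots + f"Findings: {findings}\nImpression:"

print(impression_prompt("Mild cardiomegaly. No focal consolidation or pleural effusion."))
```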

Exploring the Trade-Offs: Unified Large Language Models vs Local Fine-Tuned Models for Highly-Specific Radiology NLI Task

no code implementations • 18 Apr 2023 • Zihao Wu, Lu Zhang, Chao Cao, Xiaowei Yu, Haixing Dai, Chong Ma, Zhengliang Liu, Lin Zhao, Gang Li, Wei Liu, Quanzheng Li, Dinggang Shen, Xiang Li, Dajiang Zhu, Tianming Liu

To this end, in this study, we evaluate the performance of ChatGPT/GPT-4 on a radiology NLI task and compare it to other models fine-tuned specifically on task-related data samples.

Specificity Task 2
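
For readers unfamiliar with the task format, a radiology natural language inference (NLI) item pairs a premise with a hypothesis and asks for an entailment label; the example below is hypothetical and not drawn from the paper's benchmark.

```python
# Hedged sketch of the NLI input/output format on which the models are compared.
example = {
    "premise": "No focal consolidation, pleural effusion, or pneumothorax is seen.",
    "hypothesis": "The lungs are clear.",
    "label": "entailment",  # one of: entailment, contradiction, neutral
}
print(f"{example['premise']} => {example['hypothesis']}: {example['label']}")
```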

ChatABL: Abductive Learning via Natural Language Interaction with ChatGPT

no code implementations • 21 Apr 2023 • Tianyang Zhong, Yaonai Wei, Li Yang, Zihao Wu, Zhengliang Liu, Xiaozheng Wei, Wenjun Li, Junjie Yao, Chong Ma, Xiang Li, Dajiang Zhu, Xi Jiang, Junwei Han, Dinggang Shen, Tianming Liu, Tuo Zhang

The proposed method uses the strengths of LLMs in understanding and logical reasoning to correct incomplete logical facts and thereby optimize the performance of the perceptual module, by summarizing and reorganizing reasoning rules represented in natural language.

Decipherment Logical Reasoning

Differentiate ChatGPT-generated and Human-written Medical Texts

no code implementations • 23 Apr 2023 • Wenxiong Liao, Zhengliang Liu, Haixing Dai, Shaochen Xu, Zihao Wu, Yiyang Zhang, Xiaoke Huang, Dajiang Zhu, Hongmin Cai, Tianming Liu, Xiang Li

We focus on analyzing the differences between medical texts written by human experts and generated by ChatGPT, and designing machine learning workflows to effectively detect and differentiate medical texts generated by ChatGPT.
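
A minimal sketch of one such detection workflow, assuming a simple TF-IDF plus logistic-regression baseline rather than the paper's exact pipeline (the tiny labeled corpus is hypothetical):

```python
# Hedged sketch: a tiny machine-generated vs. human-written text classifier
# using scikit-learn; a real workflow would train on a large labeled corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "The patient denies chest pain but reports intermittent dyspnea on exertion.",
    "The patient presents with symptoms that may warrant further clinical evaluation.",
    "Exam notable for mild wheezing; continue albuterol as needed.",
    "It is important to consider a comprehensive differential diagnosis in this case.",
]
labels = [0, 1, 0, 1]  # 0 = human-written, 1 = ChatGPT-generated (hypothetical labels)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["Further evaluation may be warranted to guide clinical management."]))
```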

Prompt Engineering for Healthcare: Methodologies and Applications

no code implementations • 28 Apr 2023 • Jiaqi Wang, Enze Shi, Sigang Yu, Zihao Wu, Chong Ma, Haixing Dai, Qiushi Yang, Yanqing Kang, Jinru Wu, Huawen Hu, Chenxi Yue, Haiyang Zhang, Yiheng Liu, Yi Pan, Zhengliang Liu, Lichao Sun, Xiang Li, Bao Ge, Xi Jiang, Dajiang Zhu, Yixuan Yuan, Dinggang Shen, Tianming Liu, Shu Zhang

Prompt engineering is a critical technique in the field of natural language processing that involves designing and optimizing the prompts used to input information into models, aiming to enhance their performance on specific tasks.

Machine Translation Prompt Engineering +3
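
As a small illustration of what prompt engineering manipulates in practice, the sketch below builds a healthcare question prompt from a template with optional few-shot examples; the template and wording are hypothetical, not taken from the review.

```python
# Hedged sketch: a prompt template whose instruction, examples, and formatting
# are the levers that prompt engineering tunes for a given task.
def build_prompt(question, instruction="Answer as a careful clinical assistant.", examples=()):
    shots = "".join(f"Q: {q}\nA: {a}\n\n" for q, a in examples)
    return f"{instruction}\n\n{shots}Q: {question}\nA:"

print(build_prompt("What are common side effects of metformin?",
                   examples=[("What is a normal resting heart rate?",
                              "Roughly 60-100 beats per minute for adults.")]))
```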

Instruction-ViT: Multi-Modal Prompts for Instruction Learning in ViT

no code implementations • 29 Apr 2023 • Zhenxiang Xiao, Yuzhong Chen, Lu Zhang, Junjie Yao, Zihao Wu, Xiaowei Yu, Yi Pan, Lin Zhao, Chong Ma, Xinyu Liu, Wei Liu, Xiang Li, Yixuan Yuan, Dinggang Shen, Dajiang Zhu, Tianming Liu, Xi Jiang

Prompts have been proven to play a crucial role in large language models, and in recent years, vision models have also been using prompts to improve scalability for multiple downstream tasks.

Image Classification

SAM for Poultry Science

no code implementations • 17 May 2023 • Xiao Yang, Haixing Dai, Zihao Wu, Ramesh Bist, Sachin Subedi, Jin Sun, Guoyu Lu, Changying Li, Tianming Liu, Lilong Chai

This study aims to assess the zero-shot segmentation performance of SAM on representative chicken segmentation tasks, including part-based segmentation and the use of infrared thermal images, and to explore chicken-tracking tasks by using SAM as a segmentation tool.

Object Tracking Segmentation +2
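
A hedged sketch of how such zero-shot, prompt-based segmentation can be run with Meta's segment-anything package (this is not the study's pipeline; the checkpoint filename and placeholder image are assumptions):

```python
# Hedged sketch: zero-shot segmentation of one image with SAM using a single
# foreground point prompt; requires the segment-anything package and a local
# SAM checkpoint (filename assumed below).
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image = np.zeros((512, 512, 3), dtype=np.uint8)  # placeholder for a real RGB image
predictor.set_image(image)
masks, scores, _ = predictor.predict(
    point_coords=np.array([[256, 256]]),  # one point marking the object of interest
    point_labels=np.array([1]),           # 1 = foreground
    multimask_output=True,
)
print(masks.shape, scores)  # candidate masks and their predicted quality
```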

Artificial General Intelligence for Medical Imaging

no code implementations • 8 Jun 2023 • Xiang Li, Lu Zhang, Zihao Wu, Zhengliang Liu, Lin Zhao, Yixuan Yuan, Jun Liu, Gang Li, Dajiang Zhu, Pingkun Yan, Quanzheng Li, Wei Liu, Tianming Liu, Dinggang Shen

In this review, we explore the potential applications of Artificial General Intelligence (AGI) models in healthcare, focusing on foundational Large Language Models (LLMs), Large Vision Models, and Large Multimodal Models.

AD-AutoGPT: An Autonomous GPT for Alzheimer's Disease Infodemiology

no code implementations • 16 Jun 2023 • Haixing Dai, Yiwei Li, Zhengliang Liu, Lin Zhao, Zihao Wu, Suhang Song, Ye Shen, Dajiang Zhu, Xiang Li, Sheng Li, Xiaobai Yao, Lu Shi, Quanzheng Li, Zhuo Chen, Donglan Zhang, Gengchen Mai, Tianming Liu

In this pioneering study, inspired by AutoGPT, the state-of-the-art open-source application built on the GPT-4 large language model, we develop a novel tool called AD-AutoGPT that can autonomously collect, process, and analyze complex health narratives about Alzheimer's Disease in response to users' textual prompts.

Language Modelling Large Language Model

Segment Anything Model (SAM) for Radiation Oncology

no code implementations • 20 Jun 2023 • Lian Zhang, Zhengliang Liu, Lu Zhang, Zihao Wu, Xiaowei Yu, Jason Holmes, Hongying Feng, Haixing Dai, Xiang Li, Quanzheng Li, Dajiang Zhu, Tianming Liu, Wei Liu

Given that SAM, a model pre-trained purely on natural images, can handle the delineation of OARs from medical images with clinically acceptable accuracy, these results highlight SAM's robust generalization capabilities with consistent accuracy in automatic segmentation for radiotherapy.

Segmentation

Exploring New Frontiers in Agricultural NLP: Investigating the Potential of Large Language Models for Food Applications

no code implementations • 20 Jun 2023 • Saed Rezayi, Zhengliang Liu, Zihao Wu, Chandra Dhakal, Bao Ge, Haixing Dai, Gengchen Mai, Ninghao Liu, Chen Zhen, Tianming Liu, Sheng Li

ChatGPT has been shown to be a strong baseline on many NLP tasks, and we believe it has the potential to improve our model on the task of semantic matching and to enhance our model's understanding of food-related concepts and relationships.

Language Modelling Nutrition

Review of Large Vision Models and Visual Prompt Engineering

no code implementations • 3 Jul 2023 • Jiaqi Wang, Zhengliang Liu, Lin Zhao, Zihao Wu, Chong Ma, Sigang Yu, Haixing Dai, Qiushi Yang, Yiheng Liu, Songyao Zhang, Enze Shi, Yi Pan, Tuo Zhang, Dajiang Zhu, Xiang Li, Xi Jiang, Bao Ge, Yixuan Yuan, Dinggang Shen, Tianming Liu, Shu Zhang

This review aims to summarize the methods employed in the computer vision domain for large vision models and visual prompt engineering, and to explore the latest advancements in the field.

Prompt Engineering

PharmacyGPT: The AI Pharmacist

no code implementations • 19 Jul 2023 • Zhengliang Liu, Zihao Wu, Mengxuan Hu, Bokai Zhao, Lin Zhao, Tianyi Zhang, Haixing Dai, Xianyan Chen, Ye Shen, Sheng Li, Brian Murray, Tianming Liu, Andrea Sikora

In this study, we introduce PharmacyGPT, a novel framework to assess the capabilities of large language models (LLMs) such as ChatGPT and GPT-4 in emulating the role of clinical pharmacists.

CohortGPT: An Enhanced GPT for Participant Recruitment in Clinical Study

no code implementations • 21 Jul 2023 • Zihan Guan, Zihao Wu, Zhengliang Liu, Dufan Wu, Hui Ren, Quanzheng Li, Xiang Li, Ninghao Liu

Participant recruitment based on unstructured medical texts, such as clinical notes and radiology reports, has been a challenging yet important task for cohort establishment in clinical research.

Few-Shot Learning text-classification +1

Towards Artificial General Intelligence (AGI) in the Internet of Things (IoT): Opportunities and Challenges

no code implementations • 14 Sep 2023 • Fei Dou, Jin Ye, Geng Yuan, Qin Lu, Wei Niu, Haijian Sun, Le Guan, Guoyu Lu, Gengchen Mai, Ninghao Liu, Jin Lu, Zhengliang Liu, Zihao Wu, Chenjiao Tan, Shaochen Xu, Xianqiao Wang, Guoming Li, Lilong Chai, Sheng Li, Jin Sun, Hongyue Sun, Yunli Shao, Changying Li, Tianming Liu, WenZhan Song

Artificial General Intelligence (AGI), possessing the capacity to comprehend, learn, and execute tasks with human cognitive abilities, engenders significant anticipation and intrigue across scientific, commercial, and societal arenas.

Decision Making

DT/MARS-CycleGAN: Improved Object Detection for MARS Phenotyping Robot

no code implementations • 19 Oct 2023 • David Liu, Zhengkun Li, Zihao Wu, Changying Li

This work specifically tackles the first challenge by proposing a novel Digital-Twin (DT) MARS-CycleGAN model for image augmentation, improving crop object detection by our Modular Agricultural Robotic System (MARS) against complex and variable backgrounds.

Image Augmentation Object +2

Transformation vs Tradition: Artificial General Intelligence (AGI) for Arts and Humanities

no code implementations • 30 Oct 2023 • Zhengliang Liu, Yiwei Li, Qian Cao, Junwen Chen, Tianze Yang, Zihao Wu, John Hale, John Gibbs, Khaled Rasheed, Ninghao Liu, Gengchen Mai, Tianming Liu

Recent advances in artificial general intelligence (AGI), particularly large language models and creative image generation systems have demonstrated impressive capabilities on diverse tasks spanning the arts and humanities.

Image Generation Marketing

Evaluating the Potential of Leading Large Language Models in Reasoning Biology Questions

no code implementations • 5 Nov 2023 • Xinyu Gong, Jason Holmes, Yiwei Li, Zhengliang Liu, Qi Gan, Zihao Wu, Jianli Zhang, Yusong Zou, Yuxi Teng, Tian Jiang, Hongtu Zhu, Wei Liu, Tianming Liu, Yajun Yan

Recent advances in Large Language Models (LLMs) have presented new opportunities for integrating Artificial General Intelligence (AGI) into biological research and education.

Logical Reasoning Multiple-choice

Evaluating multiple large language models in pediatric ophthalmology

no code implementations • 7 Nov 2023 • Jason Holmes, Rui Peng, Yiwei Li, Jinyu Hu, Zhengliang Liu, Zihao Wu, Huan Zhao, Xi Jiang, Wei Liu, Hong Wei, Jie Zou, Tianming Liu, Yi Shao

Importance: The response effectiveness of different large language models (LLMs) and of various individuals, including medical students, graduate students, and practicing physicians, in pediatric ophthalmology consultations has not yet been clearly established.

Multiple-choice

Evaluating Large Language Models in Ophthalmology

no code implementations • 7 Nov 2023 • Jason Holmes, Shuyuan Ye, Yiwei Li, Shi-Nan Wu, Zhengliang Liu, Zihao Wu, Jinyu Hu, Huan Zhao, Xi Jiang, Wei Liu, Hong Wei, Jie Zou, Tianming Liu, Yi Shao

Methods: A 100-item ophthalmology single-choice test was administered to three different LLMs (GPT-3.5, GPT-4, and PaLM2) and three different professional levels (medical undergraduates, medical masters, and attending physicians), respectively.

Decision Making

Generative AI in Higher Education: Seeing ChatGPT Through Universities' Policies, Resources, and Guidelines

no code implementations • 8 Dec 2023 • Hui Wang, Anh Dang, Zihao Wu, Son Mac

The advancements in Generative Artificial Intelligence (GenAI) technologies such as ChatGPT provide opportunities to enrich educational experiences, but also raise concerns about academic integrity if misused.

Multimodality of AI for Education: Towards Artificial General Intelligence

no code implementations • 10 Dec 2023 • Gyeong-Geon Lee, Lehong Shi, Ehsan Latif, Yizhu Gao, Arne Bewersdorff, Matthew Nyaaba, Shuchen Guo, Zihao Wu, Zhengliang Liu, Hui Wang, Gengchen Mai, Tianming Liu, Xiaoming Zhai

This paper presents a comprehensive examination of how multimodal artificial intelligence (AI) approaches are paving the way towards the realization of Artificial General Intelligence (AGI) in educational contexts.

Large Language Models for Robotics: Opportunities, Challenges, and Perspectives

no code implementations • 9 Jan 2024 • Jiaqi Wang, Zihao Wu, Yiwei Li, Hanqi Jiang, Peng Shu, Enze Shi, Huawen Hu, Chong Ma, Yiheng Liu, Xuhui Wang, Yincheng Yao, Xuan Liu, Huaqin Zhao, Zhengliang Liu, Haixing Dai, Lin Zhao, Bao Ge, Xiang Li, Tianming Liu, Shu Zhang

Notably, in the realm of robot task planning, LLMs harness their advanced reasoning and language comprehension capabilities to formulate precise and efficient action plans based on natural language instructions.

Robot Task Planning

Assessing Large Language Models in Mechanical Engineering Education: A Study on Mechanics-Focused Conceptual Understanding

no code implementations • 13 Jan 2024 • Jie Tian, Jixin Hou, Zihao Wu, Peng Shu, Zhengliang Liu, Yujie Xiang, Beikang Gu, Nicholas Filla, Yiwei Li, Ning Liu, Xianyan Chen, Keke Tang, Tianming Liu, Xianqiao Wang

This study is a pioneering endeavor to investigate the capabilities of Large Language Models (LLMs) in addressing conceptual questions within the domain of mechanical engineering with a focus on mechanics.

Multiple-choice Prompt Engineering

Revolutionizing Finance with LLMs: An Overview of Applications and Insights

no code implementations • 22 Jan 2024 • Huaqin Zhao, Zhengliang Liu, Zihao Wu, Yiwei Li, Tianze Yang, Peng Shu, Shaochen Xu, Haixing Dai, Lin Zhao, Gengchen Mai, Ninghao Liu, Tianming Liu

Additionally, we conducted holistic tests on multiple financial tasks through the combination of natural language instructions.

LLMs for Coding and Robotics Education

no code implementations • 9 Feb 2024 • Peng Shu, Huaqin Zhao, Hanqi Jiang, Yiwei Li, Shaochen Xu, Yi Pan, Zihao Wu, Zhengliang Liu, Guoyu Lu, Le Guan, Gong Chen, Xianqiao Wang, Tianming Liu

To teach young children how to code and compete in robot challenges, large language models are being utilized for robot code explanation, generation, and modification.

Code Generation Explanation Generation

Reasoning before Comparison: LLM-Enhanced Semantic Similarity Metrics for Domain Specialized Text Analysis

no code implementations • 17 Feb 2024 • Shaochen Xu, Zihao Wu, Huaqin Zhao, Peng Shu, Zhengliang Liu, Wenxiong Liao, Sheng Li, Andrea Sikora, Tianming Liu, Xiang Li

In this study, we leverage LLMs to enhance semantic analysis and develop similarity metrics for texts, addressing the limitations of traditional unsupervised NLP metrics such as ROUGE and BLEU.

Semantic Similarity Semantic Textual Similarity +1
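
For contrast with the LLM-enhanced metric, the sketch below shows the kind of surface-overlap measure (ROUGE-1 recall, in simplified form) whose limitations the paper targets; the example sentences are hypothetical.

```python
# Hedged sketch of a token-overlap metric in the spirit of ROUGE-1 recall:
# it rewards shared words, not shared meaning, which is the gap an
# LLM-enhanced semantic similarity metric aims to close.
def rouge1_recall(reference, candidate):
    ref_tokens = reference.lower().split()
    cand_tokens = set(candidate.lower().split())
    return sum(t in cand_tokens for t in ref_tokens) / max(len(ref_tokens), 1)

# Semantically close sentences with little word overlap score poorly here.
print(rouge1_recall("the infusion was discontinued due to hypotension",
                    "the drip was stopped because blood pressure dropped"))
```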

Few-Shot Learning for Annotation-Efficient Nucleus Instance Segmentation

no code implementations • 26 Feb 2024 • Yu Ming, Zihao Wu, Jie Yang, Danyi Li, Yuan Gao, Changxin Gao, Gui-Song Xia, Yuanqing Li, Li Liang, Jin-Gang Yu

In this paper, we propose to formulate annotation-efficient nucleus instance segmentation from the perspective of few-shot learning (FSL).

Few-Shot Learning Instance Segmentation +3

Eye-gaze Guided Multi-modal Alignment Framework for Radiology

1 code implementation • 19 Mar 2024 • Chong Ma, Hanqi Jiang, WenTing Chen, Zihao Wu, Xiaowei Yu, Fang Zeng, Lei Guo, Dajiang Zhu, Tuo Zhang, Dinggang Shen, Tianming Liu, Xiang Li

Additionally, we explore the impact of varying amounts of eye-gaze data on model performance, highlighting the feasibility and utility of integrating this auxiliary data into multi-modal pre-training.

Zero-Shot Learning