Search Results for author: Zihao Wu

Found 12 papers, 2 papers with code

DeID-GPT: Zero-shot Medical Text De-Identification by GPT-4

1 code implementation • 20 Mar 2023 • Zhengliang Liu, Xiaowei Yu, Lu Zhang, Zihao Wu, Chao Cao, Haixing Dai, Lin Zhao, Wei Liu, Dinggang Shen, Quanzheng Li, Tianming Liu, Dajiang Zhu, Xiang Li

The digitization of healthcare has facilitated the sharing and re-use of medical data, but has also raised concerns about confidentiality and privacy.

Gyri vs. Sulci: Disentangling Brain Core-Periphery Functional Networks via Twin-Transformer

no code implementations • 31 Jan 2023 • Xiaowei Yu, Lu Zhang, Haixing Dai, Lin Zhao, Yanjun Lyu, Zihao Wu, Tianming Liu, Dajiang Zhu

To address this fundamental problem, we design a novel Twin-Transformer framework to unveil the unique functional roles of gyri and sulci, as well as their relationship, in whole-brain function.

Anatomy

Disentangled Representation Learning

no code implementations • 21 Nov 2022 • Xin Wang, Hong Chen, Si'ao Tang, Zihao Wu, Wenwu Zhu

Disentangled Representation Learning (DRL) aims to learn a model capable of identifying and disentangling the underlying factors hidden in observable data, in the form of representations.

Representation Learning

Is Multi-Task Learning an Upper Bound for Continual Learning?

no code implementations • 26 Oct 2022 • Zihao Wu, Huy Tran, Hamed Pirsiavash, Soheil Kolouri

Moreover, it is conceivable that, when learning from multiple tasks, a small subset of these tasks could act as adversarial tasks, reducing overall learning performance in a multi-task setting.

Continual Learning • Multi-Task Learning • +1

Coupling Visual Semantics of Artificial Neural Networks and Human Brain Function via Synchronized Activations

no code implementations • 22 Jun 2022 • Lin Zhao, Haixing Dai, Zihao Wu, Zhenxiang Xiao, Lu Zhang, David Weizhong Liu, Xintao Hu, Xi Jiang, Sheng Li, Dajiang Zhu, Tianming Liu

However, whether semantic correlations/connections exist between the visual representations in ANNs and those in BNNs remains largely unexplored, due both to the lack of an effective tool to link and couple the two different domains, and to the lack of a general, effective framework for representing the visual semantics in BNNs, such as human functional brain networks (FBNs).

Image Classification • Representation Learning

Eye-gaze-guided Vision Transformer for Rectifying Shortcut Learning

no code implementations • 25 May 2022 • Chong Ma, Lin Zhao, Yuzhong Chen, Lu Zhang, Zhenxiang Xiao, Haixing Dai, David Liu, Zihao Wu, Zhengliang Liu, Sheng Wang, Jiaxing Gao, Changhe Li, Xi Jiang, Tuo Zhang, Qian Wang, Dinggang Shen, Dajiang Zhu, Tianming Liu

To address this problem, we propose to infuse human experts' intelligence and domain knowledge into the training of deep neural networks.

Mask-guided Vision Transformer (MG-ViT) for Few-Shot Learning

no code implementations • 20 May 2022 • Yuzhong Chen, Zhenxiang Xiao, Lin Zhao, Lu Zhang, Haixing Dai, David Weizhong Liu, Zihao Wu, Changhe Li, Tuo Zhang, Changying Li, Dajiang Zhu, Tianming Liu, Xi Jiang

However, for data-intensive models such as the vision transformer (ViT), current fine-tuning-based FSL approaches are inefficient in knowledge generalization and thus degrade downstream task performance.

Active Learning • Few-Shot Learning

Extracting 2D weak labels from volume labels using multiple instance learning in CT hemorrhage detection

1 code implementation • 13 Nov 2019 • Samuel W. Remedios, Zihao Wu, Camilo Bermudez, Cailey I. Kerley, Snehashis Roy, Mayur B. Patel, John A. Butman, Bennett A. Landman, Dzung L. Pham

Multiple instance learning (MIL) is a supervised learning methodology that aims to allow models to learn instance class labels from bag class labels, where a bag is defined as containing multiple instances.

Multiple Instance Learning
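The MIL setup described in the abstract above can be illustrated with a minimal sketch: per-instance predictions are pooled into a bag-level prediction, so a model needs only bag labels for supervision. The max-pooling rule and the toy scoring function here are illustrative assumptions for the standard MIL formulation, not the paper's specific method.

```python
# Minimal sketch of multiple instance learning (MIL): a bag's label is
# derived from its instances via max pooling, so only bag-level labels
# are needed at training time. The scorer below is a hypothetical
# stand-in (a simple threshold), not the paper's trained model.

def instance_score(instance):
    # Hypothetical per-instance classifier: threshold a scalar feature.
    return 1 if instance > 0.5 else 0

def bag_label(bag):
    # Standard MIL assumption: a bag is positive iff it contains at
    # least one positive instance (max over instance predictions).
    return max(instance_score(x) for x in bag)

# In the CT setting, a "bag" would be a volume and the instances its
# 2D slices; here each instance is just a scalar for illustration.
positive_bag = [0.1, 0.7, 0.2]   # contains one positive instance
negative_bag = [0.1, 0.3, 0.2]   # all instances negative

print(bag_label(positive_bag))   # 1
print(bag_label(negative_bag))   # 0
```

Under this assumption, a model trained to predict bag labels is implicitly pushed to score the responsible instances highly, which is what lets volume-level labels yield slice-level (2D) weak labels.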
