1 code implementation • 20 Mar 2023 • Zhengliang Liu, Xiaowei Yu, Lu Zhang, Zihao Wu, Chao Cao, Haixing Dai, Lin Zhao, Wei Liu, Dinggang Shen, Quanzheng Li, Tianming Liu, Dajiang Zhu, Xiang Li
The digitization of healthcare has facilitated the sharing and reuse of medical data but has also raised concerns about confidentiality and privacy.
no code implementations • 25 Feb 2023 • Haixing Dai, Zhengliang Liu, Wenxiong Liao, Xiaoke Huang, Yihan Cao, Zihao Wu, Lin Zhao, Shaochen Xu, Wei Liu, Ninghao Liu, Sheng Li, Dajiang Zhu, Hongmin Cai, Lichao Sun, Quanzheng Li, Dinggang Shen, Tianming Liu, Xiang Li
Text data augmentation is an effective strategy for overcoming the challenge of limited sample sizes in many natural language processing (NLP) tasks.
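The augmentation idea in this entry can be illustrated with a generic technique (not necessarily the paper's own method): random word dropout, which produces perturbed copies of a sentence to enlarge a small training set. The function and parameter names below are illustrative assumptions.

```python
import random

# Hypothetical sketch of text data augmentation via random word dropout:
# each word is independently kept with probability (1 - drop_prob),
# yielding a shorter variant of the sentence for extra training data.
def augment(sentence, drop_prob=0.2, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducible variants
    words = sentence.split()
    kept = [w for w in words if rng.random() > drop_prob]
    # Never return an empty string; fall back to the original sentence.
    return " ".join(kept) if kept else sentence

variant = augment("limited sample sizes hinder many NLP tasks", drop_prob=0.5)
print(variant)
```

In practice such perturbed variants are added to the training set alongside the originals; richer augmenters substitute synonyms or paraphrase rather than only deleting words.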
no code implementations • 21 Feb 2023 • Wenxiong Liao, Zhengliang Liu, Haixing Dai, Zihao Wu, Yiyang Zhang, Xiaoke Huang, Yuzhong Chen, Xi Jiang, Wei Liu, Dajiang Zhu, Tianming Liu, Sheng Li, Xiang Li, Hongmin Cai
The main challenge of FSL is the difficulty of training robust models from small numbers of samples, which frequently leads to overfitting.
no code implementations • 31 Jan 2023 • Xiaowei Yu, Lu Zhang, Haixing Dai, Lin Zhao, Yanjun Lyu, Zihao Wu, Tianming Liu, Dajiang Zhu
To solve this fundamental problem, we design a novel Twin-Transformer framework to unveil the unique functional roles of gyri and sulci as well as their relationship in the whole brain function.
no code implementations • 21 Nov 2022 • Xin Wang, Hong Chen, Si'ao Tang, Zihao Wu, Wenwu Zhu
Disentangled Representation Learning (DRL) aims to learn a model that identifies and disentangles the underlying factors hidden in observable data, representing each factor separately.
no code implementations • 5 Nov 2022 • Hongmin Cai, Wenxiong Liao, Zhengliang Liu, Yiyang Zhang, Xiaoke Huang, Siqi Ding, Hui Ren, Zihao Wu, Haixing Dai, Sheng Li, Lingfei Wu, Ninghao Liu, Quanzheng Li, Tianming Liu, Xiang Li
In this framework, we apply distant supervision to cross-domain knowledge graph adaptation.
no code implementations • 26 Oct 2022 • Zihao Wu, Huy Tran, Hamed Pirsiavash, Soheil Kolouri
Moreover, it is conceivable that, when learning from multiple tasks, a small subset of those tasks could act adversarially and reduce overall learning performance in a multi-task setting.
no code implementations • 22 Jun 2022 • Lin Zhao, Haixing Dai, Zihao Wu, Zhenxiang Xiao, Lu Zhang, David Weizhong Liu, Xintao Hu, Xi Jiang, Sheng Li, Dajiang Zhu, Tianming Liu
However, whether semantic correlations or connections exist between the visual representations in ANNs and those in BNNs remains largely unexplored, due both to the lack of an effective tool for linking and coupling the two different domains and to the lack of a general, effective framework for representing visual semantics in BNNs such as human functional brain networks (FBNs).
no code implementations • 25 May 2022 • Chong Ma, Lin Zhao, Yuzhong Chen, Lu Zhang, Zhenxiang Xiao, Haixing Dai, David Liu, Zihao Wu, Zhengliang Liu, Sheng Wang, Jiaxing Gao, Changhe Li, Xi Jiang, Tuo Zhang, Qian Wang, Dinggang Shen, Dajiang Zhu, Tianming Liu
To address this problem, we propose to infuse human experts' intelligence and domain knowledge into the training of deep neural networks.
no code implementations • 20 May 2022 • Yuzhong Chen, Zhenxiang Xiao, Lin Zhao, Lu Zhang, Haixing Dai, David Weizhong Liu, Zihao Wu, Changhe Li, Tuo Zhang, Changying Li, Dajiang Zhu, Tianming Liu, Xi Jiang
However, for data-intensive models such as the vision transformer (ViT), current fine-tuning based FSL approaches are inefficient in knowledge generalization and thus degrade downstream task performance.
no code implementations • Elsevier Applied Energy 2020 • Peng Kou, Deliang Liang, Chen Wang, Zihao Wu, Lin Gao
In this scheme, the optimal voltage control problem is formulated as a constrained Markov decision process, in which both state and action spaces are continuous.
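The constrained-MDP framing in this entry can be sketched generically: a continuous state and action, a reward, and a separate constraint cost returned at each step. The toy dynamics, voltage limits, and names below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Hypothetical sketch of one step of a constrained MDP for voltage control:
# continuous state (bus voltages, p.u.) and continuous action, with a
# constraint cost tracked separately from the reward.
def step(state, action, v_min=0.95, v_max=1.05):
    next_state = state + 0.1 * action            # toy continuous dynamics
    reward = -np.abs(next_state - 1.0).sum()     # penalize deviation from 1.0 p.u.
    # Constraint cost: total violation of the voltage limits (zero if feasible).
    cost = (np.clip(v_min - next_state, 0, None).sum()
            + np.clip(next_state - v_max, 0, None).sum())
    return next_state, reward, cost

s = np.array([1.0, 0.98])
s2, r, c = step(s, np.array([0.0, 0.5]))
print(s2, r, c)
```

In a constrained MDP the policy maximizes expected reward subject to keeping the expected constraint cost below a budget, rather than folding the limits into the reward.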
1 code implementation • 13 Nov 2019 • Samuel W. Remedios, Zihao Wu, Camilo Bermudez, Cailey I. Kerley, Snehashis Roy, Mayur B. Patel, John A. Butman, Bennett A. Landman, Dzung L. Pham
Multiple instance learning (MIL) is a supervised learning methodology that aims to let models learn instance-level class labels from bag-level class labels, where a bag is defined to contain multiple instances.
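The bag/instance relationship in the MIL definition above can be sketched with the standard assumption that a bag is positive if at least one of its instances is positive, so per-instance scores are pooled (here with max) into a bag-level prediction. This is a generic illustration, not the paper's specific model.

```python
import numpy as np

# Minimal MIL sketch: aggregate per-instance positive-class scores into
# one bag label via max-pooling over the bag.
def bag_prediction(instance_scores, threshold=0.5):
    """Return 1.0 if any instance score clears the threshold, else 0.0."""
    return float(np.max(instance_scores) >= threshold)

bag_a = [0.1, 0.9, 0.2]   # one confident instance -> positive bag
bag_b = [0.1, 0.2, 0.3]   # no positive instance  -> negative bag
print(bag_prediction(bag_a))  # 1.0
print(bag_prediction(bag_b))  # 0.0
```

During training only the bag labels are supervised; the instance scores are learned implicitly through the pooling operation.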