no code implementations • EMNLP 2021 • Weijiang Yu, Yingpeng Wen, Fudan Zheng, Nong Xiao
Firstly, our pre-trained knowledge encoder aims to reason about the math word problem (MWP) using external knowledge from pre-trained transformer-based models.
1 code implementation • 26 Jun 2024 • Tianyu Lin, Zhiguang Chen, Zhonghao Yan, Weijiang Yu, Fudan Zheng
Diffusion models have demonstrated their effectiveness across various generative tasks.
no code implementations • 6 Feb 2024 • Fudan Zheng, Jindong Cao, Weijiang Yu, Zhiguang Chen, Nong Xiao, Yutong Lu
The weakly supervised prompt learning model uses only the image classes in the dataset to guide the learning of the class-specific vector in the prompt, while the learning of the other context vectors in the prompt requires no manual annotation.
no code implementations • 6 Feb 2024 • Fudan Zheng, Mengfei Li, Ying Wang, Weijiang Yu, Ruixuan Wang, Zhiguang Chen, Nong Xiao, Yutong Lu
To address this limitation in feature extraction, we propose a Globally-intensive Attention (GIA) module in the medical image encoder to simulate and integrate multi-view visual perception.
no code implementations • 26 Dec 2023 • Yingpeng Wen, Weijiang Yu, Fudan Zheng, Dan Huang, Nong Xiao
Additionally, the proposed AdaNAS model is compared with other neural architecture search methods and with previous studies.