no code implementations • COLING 2022 • Xiao Song, Xiaodan Zhang, Junzhong Ji, Ying Liu, Pengxu Wei
Automatic medical report generation has recently gained increasing interest as a way to help radiologists write reports more efficiently.
1 code implementation • 22 Mar 2025 • Xiaodan Zhang, Yanzhao Shi, Junzhong Ji, Chengxin Zheng, Liangqiong Qu
By introducing the visual embedding and the learning status of medical entities as enriched clues, our method prompts the LLM to balance the learning of diverse entities, thereby enhancing reports with comprehensive findings.
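A minimal sketch of how such enriched-clue prompting could look, assuming the learning status is tracked as a per-entity accuracy; the entity names, threshold, and prompt wording are illustrative assumptions, not the paper's exact implementation:

```python
# Hypothetical helper: prompt an LLM with the learning status of medical entities
# so it balances well-learned and under-learned findings. All names, thresholds,
# and prompt wording below are illustrative assumptions.
def build_balanced_prompt(entity_status: dict, visual_summary: str) -> str:
    hard = [e for e, acc in entity_status.items() if acc < 0.5]   # poorly learned entities
    easy = [e for e, acc in entity_status.items() if acc >= 0.5]  # well-learned entities
    return (
        "Visual findings (from the image encoder): " + visual_summary + "\n"
        "Entities the model already covers well: " + ", ".join(easy) + "\n"
        "Entities to pay extra attention to: " + ", ".join(hard) + "\n"
        "Write a radiology report with comprehensive, balanced findings."
    )

# Example usage with made-up entity statuses:
prompt = build_balanced_prompt(
    {"pleural effusion": 0.82, "cardiomegaly": 0.31, "pneumothorax": 0.44},
    "enlarged cardiac silhouette; blunted left costophrenic angle",
)
```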
1 code implementation • 10 Dec 2024 • Pengxin Guo, Shuang Zeng, WenHao Chen, Xiaodan Zhang, Weihong Ren, Yuyin Zhou, Liangqiong Qu
By revisiting the key to privacy exposure in FL under GIA, which lies in the frequent sharing of model gradients that contain private data, we take a new perspective and design a novel privacy-preserving FL framework that effectively "breaks the direct connection" between the shared parameters and the local private data to defend against GIA.
no code implementations • 4 Dec 2024 • Jiahua Xiao, Jiawei Zhang, Dongqing Zou, Xiaodan Zhang, Jimmy Ren, Xing Wei
In practice, inspired by the fact that image super-resolution and segmentation can benefit each other, we propose SegSR which introduces a dual-diffusion framework to facilitate interaction between the image super-resolution and segmentation diffusion models.
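A conceptual sketch of such a dual-diffusion sampling loop, in which the super-resolution and segmentation branches condition each other at every denoising step; the model and scheduler interfaces below are placeholders assumed for illustration, not a specific library's API:

```python
# Conceptual skeleton of cross-conditioned dual diffusion (SR branch + segmentation
# branch). `sr_model`, `seg_model`, and `scheduler` are assumed interfaces.
def dual_diffusion_sample(sr_model, seg_model, scheduler, lr_image, steps=50):
    x_sr = scheduler.init_noise(lr_image)    # noisy high-resolution latent
    x_seg = scheduler.init_noise(lr_image)   # noisy segmentation latent
    for t in reversed(range(steps)):
        # Each branch predicts noise conditioned on the LR input and the other
        # branch's current estimate, so the two processes inform each other.
        eps_sr = sr_model(x_sr, t, cond=(lr_image, x_seg))
        eps_seg = seg_model(x_seg, t, cond=(lr_image, x_sr))
        x_sr = scheduler.step(eps_sr, t, x_sr)
        x_seg = scheduler.step(eps_seg, t, x_seg)
    return x_sr, x_seg   # restored image and its segmentation map
```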
1 code implementation • 29 Sep 2024 • Chengxin Zheng, Junzhong Ji, Yanzhao Shi, Xiaodan Zhang, Liangqiong Qu
2) Shifted semantic representation: the limited medical corpus makes it difficult for models to transfer the learned textual representations to generative layers.
no code implementations • 26 May 2024 • Jiankun Wang, Sumyeong Ahn, Taykhoom Dalal, Xiaodan Zhang, Weishen Pan, Qiannan Zhang, Bin Chen, Hiroko H. Dodge, Fei Wang, Jiayu Zhou
Specifically, we develop a collaborative pipeline that combines SLs and LLMs via a confidence-driven decision-making mechanism, leveraging the strengths of SLs in clear-cut cases and LLMs in more complex scenarios.
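A minimal sketch of such confidence-driven routing between a small/supervised model and an LLM; the threshold, labels, and prompt text are illustrative assumptions rather than the paper's exact pipeline:

```python
# Route clear-cut cases to the cheap model and defer uncertain ones to an LLM.
# `small_model` and `llm_client` are placeholder callables; labels, threshold,
# and prompt wording are assumptions for illustration only.
def classify_note(note: str, small_model, llm_client, threshold: float = 0.9) -> str:
    label, confidence = small_model(note)      # e.g. ("MCI", 0.97)
    if confidence >= threshold:
        return label                           # confident: accept the cheap prediction
    prompt = (
        "Discharge summary:\n" + note + "\n\n"
        f"A screening model tentatively predicted '{label}' with low confidence "
        f"({confidence:.2f}). Based on the summary, answer 'MCI' or 'non-MCI' "
        "and briefly explain your reasoning."
    )
    return llm_client(prompt).strip()
```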
no code implementations • 6 May 2024 • Shang Shang, Xinqiang Zhao, Zhongjiang Yao, Yepeng Yao, Liya Su, Zijing Fan, Xiaodan Zhang, Zhengwei Jiang
To demonstrate and address this underlying maliciousness, we propose a theoretical hypothesis and analytical approach, and introduce a new black-box jailbreak attack methodology named IntentObfuscator, which exploits this identified flaw by obfuscating the true intentions behind user prompts. This approach compels LLMs to inadvertently generate restricted content, bypassing their built-in content security measures.
no code implementations • 19 Dec 2023 • Xiaodan Zhang, Sandeep Vemulapalli, Nabasmita Talukdar, Sumyeong Ahn, Jiankun Wang, Han Meng, Sardar Mehtab Bin Murtaza, Aakash Ajay Dave, Dmitry Leshchiner, Dimitri F. Joseph, Martin Witteveen-Lane, Dave Chesla, Jiayu Zhou, Bin Chen
This study assesses the ability of state-of-the-art large language models (LLMs) including GPT-3.5, GPT-4, Falcon, and LLaMA 2 to identify patients with mild cognitive impairment (MCI) from discharge summaries and examines instances where the models' responses were misaligned with their reasoning.
1 code implementation • 23 Nov 2023 • Bingkang Shi, Xiaodan Zhang, Dehan Kong, Yulei Wu, Zongzhen Liu, Honglei Lyu, Longtao Huang
The social biases and unwelcome stereotypes revealed by pretrained language models are becoming obstacles to their application.
no code implementations • 20 Aug 2023 • Yiming Huang, Aozhe Jia, Xiaodan Zhang, Jiawei Zhang
In this paper, we propose a weighted relevancy strategy, which takes the importance of token values into consideration, to reduce distortion when equally accumulating relevance.
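A minimal sketch of relevance accumulation weighted by token-value importance, in the spirit of the strategy above; using the norm of each token's value vector as its weight is an assumption made here for illustration:

```python
# Rollout-style relevance accumulation where each attended token is reweighted by
# the norm of its value vector (this specific weighting is an assumption, not
# necessarily the paper's exact formulation).
import torch

def accumulate_relevance(attn_maps, value_tensors):
    """attn_maps: list of (heads, N, N) attention maps, one per layer.
       value_tensors: list of (heads, N, d) value vectors, one per layer."""
    N = attn_maps[0].shape[-1]
    R = torch.eye(N)                                   # start from identity relevance
    for A, V in zip(attn_maps, value_tensors):
        w = V.norm(dim=-1)                             # (heads, N) value importance
        A_w = A * w.unsqueeze(1)                       # reweight attended tokens
        A_w = A_w / A_w.sum(dim=-1, keepdim=True).clamp_min(1e-6)
        A_bar = A_w.mean(dim=0)                        # average over heads
        R = R + A_bar @ R                              # accumulate across layers
    return R
```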
no code implementations • 27 Oct 2021 • Xiaowei Yuan, Jingyuan Hu, Xiaodan Zhang, Honglei Lv
Based on the EmoGraph2vec model, we design a novel neural network to incorporate text and emoji information into sentiment analysis, which uses a hybrid-attention module combined with a TextCNN-based classifier to improve performance.
no code implementations • 27 Oct 2021 • Xiaowei Yuan, Jingyuan Hu, Xiaodan Zhang, Honglei Lv, Hao Liu
In this paper, we propose an emoji-based co-attention network that learns the mutual emotional semantics between text and emojis on microblogs.
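A simplified sketch of co-attention between text and emoji embeddings, where each modality attends to the other through a shared affinity matrix; the module layout and dimensions are assumptions for illustration, not the paper's exact network:

```python
# Simplified bilinear co-attention between text and emoji token embeddings.
# Projection, pooling, and classifier details are illustrative assumptions.
import torch
import torch.nn as nn

class CoAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)   # bilinear affinity projection

    def forward(self, text: torch.Tensor, emoji: torch.Tensor):
        # text: (B, Lt, d), emoji: (B, Le, d)
        affinity = torch.bmm(self.W(text), emoji.transpose(1, 2))             # (B, Lt, Le)
        text_ctx = torch.bmm(affinity.softmax(dim=-1), emoji)                 # emoji-aware text
        emoji_ctx = torch.bmm(affinity.softmax(dim=1).transpose(1, 2), text)  # text-aware emoji
        return text_ctx, emoji_ctx

# The two context streams can then be pooled, concatenated, and fed to a classifier.
```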
no code implementations • 28 Jul 2021 • Wei Zhou, Xin Cao, Xiaodan Zhang, Xingxing Hao, Dekui Wang, Ying He
Extensive experiments on benchmark datasets such as ShapeNet Part, S3DIS and KITTI for various tasks show that MPVConv improves the accuracy of the backbone (PointNet) by up to 36%, and achieves higher accuracy than the voxel-based model with up to 34× speedups.
no code implementations • 30 Apr 2021 • Wei Zhou, Xin Cao, Xiaodan Zhang, Xingxing Hao, Dekui Wang, Ying He
Extensive experiments on benchmark datasets such as ShapeNet Part, S3DIS and KITTI for various tasks show that MVPConv improves the accuracy of the backbone (PointNet) by up to 36%, and achieves higher accuracy than the voxel-based model with up to 34 times speedup.
1 code implementation • 16 Jul 2020 • Lingwei Wei, Dou Hu, Wei Zhou, Xuehai Tang, Xiaodan Zhang, Xin Wang, Jizhong Han, Songlin Hu
Furthermore, we design a Sentiment-based Rethinking mechanism (SR) by refining the HIN with sentiment label information to learn a more sentiment-aware document representation.
no code implementations • 9 Jun 2020 • Chunyuan Yuan, Jiacheng Li, Wei Zhou, Yijun Lu, Xiaodan Zhang, Songlin Hu
For one thing, previous works cannot jointly utilize both the social network and the diffusion graph for prediction; this is insufficient to model the complexity of the diffusion process and results in unsatisfactory prediction performance.
no code implementations • 19 Dec 2018 • Xiaodan Zhang, Xinbo Gao, Wen Lu, Lihuo He
The former aims to mimic the function of peripheral vision, encoding holistic information and providing the attended regions.
no code implementations • ICCV 2017 • Shengfeng He, Jianbo Jiao, Xiaodan Zhang, Guoqiang Han, Rynson W. H. Lau
Experiments show that the proposed multi-task network outperforms existing multi-task architectures, and the auxiliary subitizing network provides strong guidance to salient object detection by reducing false positives and producing coherent saliency maps.