no code implementations • 4 Jan 2025 • Kangyu Zhu, Ziyuan Qin, Huahui Yi, Zekun Jiang, Qicheng Lao, Shaoting Zhang, Kang Li
To the best of our knowledge, this is the first work to explicitly introduce visual prompts into medical VLMs, and we successfully outperform recent state-of-the-art large models across multiple medical VQA datasets.
no code implementations • 7 Dec 2024 • Ziyuan Qin, Dongjie Cheng, Haoyu Wang, Huahui Yi, Yuting Shao, Zhiyuan Fan, Kang Li, Qicheng Lao
On the dataset side, we generated 12,000 synthesized images from 1,000 composited prompts using three advanced T2I models.
no code implementations • 25 Oct 2024 • Mengmeng Chen, Xiaohu Wu, Xiaoli Tang, Tiantian He, Yew-Soon Ong, Qiqi Liu, Qicheng Lao, Han Yu
This requires the desirable FL-PT selection strategy to simultaneously mitigate the problems of free riders and conflicts of interest among competitors.
1 code implementation • 9 Oct 2024 • Zhilong Li, Xiaohu Wu, Xiaoli Tang, Tiantian He, Yew-Soon Ong, Mengmeng Chen, Qiqi Liu, Qicheng Lao, Han Yu
The proposed framework offers useful guidance on the suitability of various data divergence measures in FL systems.
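One family of data divergence measures such a framework would cover is symmetric divergences between clients' label distributions. As a minimal illustrative sketch (not the paper's framework), the Jensen-Shannon divergence between two hypothetical clients' label histograms can be computed as:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions.

    Symmetric, non-negative, and bounded by ln(2), which makes it a
    common choice for comparing client label distributions in FL.
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))  # KL divergence
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Label histograms of two hypothetical FL clients (3 classes).
client_a = [50, 30, 20]
client_b = [10, 30, 60]
print(js_divergence(client_a, client_b))
```

The client histograms here are made up for illustration; which divergence is preferable in a given FL system is exactly the question the paper's framework addresses.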
1 code implementation • 1 Sep 2024 • Xiuqi Zheng, Yuhang Zhang, Haoran Zhang, Hongrui Liang, Xueqi Bao, Zhuqing Jiang, Qicheng Lao
Adapting large pre-trained foundation models, e.g., SAM, for medical image segmentation remains a significant challenge.
1 code implementation • 29 May 2024 • Junjie Wang, Guangjing Yang, Wentao Chen, Huahui Yi, Xiaohu Wu, Zhouchen Lin, Qicheng Lao
To address these issues, a natural idea is to enhance the independence and diversity of the learning process for the low-rank matrices.
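One simple way to operationalize "independence and diversity" among low-rank matrices is to penalize overlap between the column spaces of different adapter factors. The sketch below is an illustrative regularizer under that assumption, not the method proposed in the paper; the matrix sizes and factor count are hypothetical.

```python
import numpy as np

def diversity_penalty(factors):
    """Encourage independence among low-rank adapter factors by
    penalizing overlap between their column spaces: the sum of squared
    Frobenius norms of pairwise cross-Gram matrices F_i^T F_j.
    The penalty is zero iff all factors are mutually orthogonal."""
    penalty = 0.0
    for i in range(len(factors)):
        for j in range(i + 1, len(factors)):
            penalty += np.linalg.norm(factors[i].T @ factors[j], "fro") ** 2
    return penalty

rng = np.random.default_rng(0)
# Three hypothetical rank-4 down-projection matrices (d=16, r=4).
A = [rng.standard_normal((16, 4)) for _ in range(3)]
print(diversity_penalty(A))  # positive for random, correlated factors
```

Adding such a term to the training loss pushes the low-rank branches toward capturing complementary directions rather than collapsing onto each other.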
no code implementations • 28 Feb 2024 • Xiaosong Wang, Xiaofan Zhang, Guotai Wang, Junjun He, Zhongyu Li, Wentao Zhu, Yi Guo, Qi Dou, Xiaoxiao Li, Dequan Wang, Liang Hong, Qicheng Lao, Tong Ruan, Yukun Zhou, Yixue Li, Jie Zhao, Kang Li, Xin Sun, Lifeng Zhu, Shaoting Zhang
The emerging trend of advancing generalist artificial intelligence, such as GPT-4 and Gemini, has reshaped the landscape of research (academia and industry) in machine learning and many other research areas.
1 code implementation • 24 Feb 2024 • Zekun Jiang, Dongjie Cheng, Ziyuan Qin, Jun Gao, Qicheng Lao, Abdullaev Bakhrom Ismoilovich, Urazboev Gayrat, Yuldashov Elyorbek, Bekchanov Habibullo, Defu Tang, LinJing Wei, Kang Li, Le Zhang
This study presents a novel multimodal medical image zero-shot segmentation algorithm named the text-visual-prompt segment anything model (TV-SAM) without any manual annotations.
1 code implementation • 14 Jun 2023 • Qingbo Kang, Jun Gao, Kang Li, Qicheng Lao
Overall, our work highlights the potential of MAE for ultrasound image recognition and presents a novel approach that incorporates deblurring to further improve its effectiveness.
no code implementations • 30 May 2023 • Huahui Yi, Ziyuan Qin, Wei Xu, Miaotian Guo, Kun Wang, Shaoting Zhang, Kang Li, Qicheng Lao
To achieve this, we propose a Concept Embedding Search (ConES) approach that optimizes prompt embeddings -- without the need for the text encoder -- to capture the 'concept' of the image modality through a variety of task objectives.
no code implementations • 28 Apr 2023 • Dongjie Cheng, Ziyuan Qin, Zekun Jiang, Shaoting Zhang, Qicheng Lao, Kang Li
As the first promptable foundation model for segmentation tasks, it was trained on a large dataset with an unprecedented number of images and annotations.
no code implementations • 12 Mar 2023 • Huahui Yi, Ziyuan Qin, Qicheng Lao, Wei Xu, Zekun Jiang, Dequan Wang, Shaoting Zhang, Kang Li
Therefore, in this work, we further explore the possibility of leveraging pre-trained VLMs as medical foundation models for building general-purpose medical AI, where we thoroughly investigate three machine-learning paradigms, i.e., domain/task-specialized learning, joint learning, and continual learning, for training the VLMs and evaluate their generalization performance on cross-domain and cross-task test sets.
2 code implementations • 30 Sep 2022 • Ziyuan Qin, Huahui Yi, Qicheng Lao, Kang Li
The large-scale pre-trained vision language models (VLMs) have shown remarkable domain transfer capability on natural images.
2 code implementations • 26 Jun 2022 • Shurui Gui, Hao Yuan, Jie Wang, Qicheng Lao, Kang Li, Shuiwang Ji
We investigate the explainability of graph neural networks (GNNs) as a step toward elucidating their working mechanisms.
no code implementations • 15 Jun 2021 • Jiefeng Chen, Yang Guo, Xi Wu, Tianqi Li, Qicheng Lao, YIngyu Liang, Somesh Jha
Compared to traditional "test-time" defenses, these defense mechanisms "dynamically retrain" the model on test-time input via transductive learning; theoretically, attacking these defenses boils down to bilevel optimization, which seems to raise the difficulty for adaptive attacks.
no code implementations • 1 Jan 2021 • Xi Wu, Yang Guo, Tianqi Li, Jiefeng Chen, Qicheng Lao, YIngyu Liang, Somesh Jha
On the positive side, we show that, if one is allowed to access the training data, then Domain Adversarial Neural Networks (${\sf DANN}$), an algorithm designed for unsupervised domain adaptation, can provide nontrivial robustness in the test-time maximin threat model against strong transfer attacks and adaptive fixed point attacks.
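The core mechanism of DANN is the gradient reversal layer: an identity map in the forward pass whose gradient is negated (and scaled by a coefficient lambda) in the backward pass, so the feature extractor is trained to confuse a domain classifier. Below is a toy numpy sketch of that one layer, with hand-written forward/backward methods standing in for an autograd framework; it is illustrative only, not the experimental setup of the paper.

```python
import numpy as np

class GradReverse:
    """Gradient reversal layer (the core of DANN): identity in the
    forward pass; the incoming gradient is multiplied by -lambda in
    the backward pass, turning domain-classifier minimization into
    adversarial feature learning."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # identity: features pass through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # reversed, scaled gradient

grl = GradReverse(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
print(grl.forward(x))                 # unchanged
print(grl.backward(np.ones_like(x)))  # negated, scaled by 0.5
```

In a full DANN pipeline this layer sits between the feature extractor and the domain classifier, while the label classifier receives the features directly.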
no code implementations • 15 Dec 2020 • Qicheng Lao, Xiang Jiang, Mohammad Havaei
We propose a hypothesis disparity regularized mutual information maximization (HDMI) approach to tackle unsupervised hypothesis transfer -- as an effort towards unifying hypothesis transfer learning (HTL) and unsupervised domain adaptation (UDA) -- where the knowledge from a source domain is transferred solely through hypotheses and adapted to the target domain in an unsupervised manner.
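A standard surrogate for mutual information maximization over unlabeled target predictions is H(mean prediction) - mean H(prediction): high when individual predictions are confident but globally diverse across classes. The sketch below shows only this generic InfoMax ingredient; the paper's hypothesis-disparity regularizer is not reproduced here, and the probability arrays are made up.

```python
import numpy as np

def infomax_objective(probs, eps=1e-12):
    """Mutual information surrogate over a batch of class probabilities:
    H(marginal prediction) - mean H(per-sample prediction).
    Maximizing it encourages confident yet class-balanced predictions."""
    probs = np.clip(probs, eps, 1.0)
    marginal = probs.mean(axis=0)
    h_marginal = -np.sum(marginal * np.log(marginal))
    h_cond = -np.mean(np.sum(probs * np.log(probs), axis=1))
    return h_marginal - h_cond

# Confident, balanced predictions score higher than uniform ones.
confident = np.array([[0.98, 0.01, 0.01],
                      [0.01, 0.98, 0.01],
                      [0.01, 0.01, 0.98]])
uniform = np.full((3, 3), 1 / 3)
print(infomax_objective(confident) > infomax_objective(uniform))
```

For uniform predictions the two entropy terms cancel, so the objective is zero; confident balanced predictions push it toward ln(num_classes).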
no code implementations • 8 Dec 2020 • Mohammad Havaei, Ximeng Mao, Yiping Wang, Qicheng Lao
Current practices in using cGANs for medical image generation only use a single variable for image generation (i.e., content) and therefore do not provide much flexibility or control over the generated image.
1 code implementation • ICML 2020 • Xiang Jiang, Qicheng Lao, Stan Matwin, Mohammad Havaei
We present an approach for unsupervised domain adaptation -- with a strong focus on practical considerations of within-domain class imbalance and between-domain class distribution shift -- from a class-conditioned domain alignment perspective.
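A minimal way to quantify class-conditioned (mis)alignment is the average distance between per-class feature centroids of the two domains. The sketch below is an illustrative proxy, not the paper's implicit alignment method; in real UDA the target labels would be pseudo-labels, since the target domain is unlabeled, and all names here are hypothetical.

```python
import numpy as np

def class_conditioned_gap(src_feats, src_labels, tgt_feats, tgt_labels):
    """Average per-class Euclidean distance between source and target
    feature centroids -- a simple proxy for class-conditioned domain
    alignment (smaller gap = better-aligned conditional distributions)."""
    gaps = []
    for c in np.intersect1d(src_labels, tgt_labels):
        mu_s = src_feats[src_labels == c].mean(axis=0)
        mu_t = tgt_feats[tgt_labels == c].mean(axis=0)
        gaps.append(np.linalg.norm(mu_s - mu_t))
    return float(np.mean(gaps))

rng = np.random.default_rng(0)
src = rng.standard_normal((20, 5))
src_y = rng.integers(0, 3, 20)
tgt = src + 0.1  # a slightly shifted "target" domain, for illustration
print(class_conditioned_gap(src, src_y, tgt, src_y))
```

Aligning class-conditional statistics like these, rather than the marginal feature distributions, is what makes the approach robust to class imbalance and label-distribution shift between domains.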
Ranked #1 on Unsupervised Domain Adaptation on Office-Home (Avg accuracy metric)
1 code implementation • 9 Mar 2020 • Qicheng Lao, Mehrzad Mortazavi, Marzieh Tahaei, Francis Dutil, Thomas Fevens, Mohammad Havaei
In this paper, we propose a general framework in continual learning for generative models: Feature-oriented Continual Learning (FoCL).
no code implementations • 9 Mar 2020 • Qicheng Lao, Xiang Jiang, Mohammad Havaei, Yoshua Bengio
Learning in non-stationary environments is one of the biggest challenges in machine learning.
no code implementations • ICCV 2019 • Qicheng Lao, Mohammad Havaei, Ahmad Pesaranghader, Francis Dutil, Lisa Di Jorio, Thomas Fevens
), and the style, which is usually not well described in the text (e.g., location, quantity, size, etc.).
no code implementations • 28 May 2019 • Qicheng Lao, Thomas Fevens
In practice, histopathological diagnosis of tumor malignancy often requires a human expert to scan through histopathological images at multiple magnification levels, after which a final diagnosis can be accurately determined.
no code implementations • 26 Jun 2018 • Qicheng Lao, Thomas Fevens, Boyu Wang
Unlike natural images, medical images often have intrinsic characteristics that can be leveraged for neural network learning.