Search Results for author: Qicheng Lao

Found 18 papers, 5 papers with code

Leveraging Disease Progression Learning for Medical Image Recognition

no code implementations 26 Jun 2018 Qicheng Lao, Thomas Fevens, Boyu Wang

Unlike natural images, medical images often have intrinsic characteristics that can be leveraged for neural network learning.

Case-Based Histopathological Malignancy Diagnosis using Convolutional Neural Networks

no code implementations 28 May 2019 Qicheng Lao, Thomas Fevens

In practice, histopathological diagnosis of tumor malignancy often requires a human expert to scan through histopathological images at multiple magnification levels, after which a final diagnosis can be accurately determined.

General Classification

FoCL: Feature-Oriented Continual Learning for Generative Models

1 code implementation 9 Mar 2020 Qicheng Lao, Mehrzad Mortazavi, Marzieh Tahaei, Francis Dutil, Thomas Fevens, Mohammad Havaei

In this paper, we propose a general framework in continual learning for generative models: Feature-oriented Continual Learning (FoCL).

Continual Learning, Incremental Learning

Implicit Class-Conditioned Domain Alignment for Unsupervised Domain Adaptation

1 code implementation ICML 2020 Xiang Jiang, Qicheng Lao, Stan Matwin, Mohammad Havaei

We present an approach for unsupervised domain adaptation, with a strong focus on practical considerations of within-domain class imbalance and between-domain class distribution shift, from a class-conditioned domain alignment perspective.

 Ranked #1 on Unsupervised Domain Adaptation on Office-Home (Avg accuracy metric)

Pseudo Label, Unsupervised Domain Adaptation
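
The class-conditioned perspective above can be illustrated with a small PyTorch sketch: draw class-balanced mini-batches from source labels and target pseudo-labels over a shared set of classes, then align the two batches with a domain discriminator. The sampler, discriminator, and toy features below are illustrative assumptions, not the paper's exact procedure.

import torch
import torch.nn as nn
import torch.nn.functional as F

def class_balanced_indices(labels, classes, per_class):
    """Sample `per_class` indices (with replacement) for each class in `classes`."""
    idx = []
    for c in classes:
        pool = (labels == c).nonzero(as_tuple=True)[0]
        if len(pool) == 0:
            continue
        idx.append(pool[torch.randint(len(pool), (per_class,))])
    return torch.cat(idx)

# Toy features standing in for encoder outputs on source and target data.
feat_s, y_s = torch.randn(200, 64), torch.randint(0, 10, (200,))
feat_t = torch.randn(200, 64)
y_t_pseudo = torch.randint(0, 10, (200,))           # pseudo-labels from the current classifier

classes = torch.randperm(10)[:4]                    # shared subset of classes for this step
idx_s = class_balanced_indices(y_s, classes, per_class=8)
idx_t = class_balanced_indices(y_t_pseudo, classes, per_class=8)

# Align the class-matched batches with a simple domain discriminator.
domain_disc = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
logits = domain_disc(torch.cat([feat_s[idx_s], feat_t[idx_t]]))
domain_labels = torch.cat([torch.ones(len(idx_s), 1), torch.zeros(len(idx_t), 1)])
align_loss = F.binary_cross_entropy_with_logits(logits, domain_labels)
print(align_loss.item())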

Conditional Generation of Medical Images via Disentangled Adversarial Inference

no code implementations 8 Dec 2020 Mohammad Havaei, Ximeng Mao, Yiping Wang, Qicheng Lao

Current practices in using cGANs for medical image generation only use a single variable for image generation (i.e., content) and therefore do not provide much flexibility or control over the generated image.

Data Augmentation, Disentanglement +2
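
To make the "single variable" limitation concrete, here is a toy generator conditioned on two separate codes, one for content and one for style, so that style can be resampled while content is held fixed. The architecture and dimensions are placeholders, not the model proposed in the paper.

import torch
import torch.nn as nn

class TwoCodeGenerator(nn.Module):
    """Toy conditional generator driven by separate content and style codes."""
    def __init__(self, content_dim=16, style_dim=8, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(content_dim + style_dim, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh(),
        )

    def forward(self, content, style):
        return self.net(torch.cat([content, style], dim=1))

G = TwoCodeGenerator()
content = torch.randn(4, 16)                             # what to generate (e.g., pathology / class)
style = torch.randn(4, 8)                                # how it looks (e.g., acquisition style)
same_content_new_style = G(content, torch.randn(4, 8))   # resample style, keep content fixed
print(same_content_new_style.shape)                      # torch.Size([4, 784])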

Hypothesis Disparity Regularized Mutual Information Maximization

no code implementations 15 Dec 2020 Qicheng Lao, Xiang Jiang, Mohammad Havaei

We propose a hypothesis disparity regularized mutual information maximization (HDMI) approach to tackle unsupervised hypothesis transfer, as an effort towards unifying hypothesis transfer learning (HTL) and unsupervised domain adaptation (UDA), where the knowledge from a source domain is transferred solely through hypotheses and adapted to the target domain in an unsupervised manner.

Transfer Learning, Unsupervised Domain Adaptation
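
A minimal sketch of the two ingredients named in the abstract, as I read it: maximize a mutual-information surrogate on unlabeled target predictions while regularizing the disparity among several transferred hypotheses (classifier heads). The specific disparity term (KL to the mean prediction) and the loss weight are assumptions for illustration, not necessarily the paper's formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def mutual_info(probs, eps=1e-8):
    """MI surrogate: entropy of the mean prediction minus the mean per-sample entropy."""
    mean_p = probs.mean(dim=0)
    h_marginal = -(mean_p * (mean_p + eps).log()).sum()
    h_cond = -(probs * (probs + eps).log()).sum(dim=1).mean()
    return h_marginal - h_cond

feats = torch.randn(32, 64)                                   # target features from a source-trained encoder
heads = nn.ModuleList([nn.Linear(64, 10) for _ in range(3)])  # multiple transferred hypotheses
probs = [F.softmax(h(feats), dim=1) for h in heads]

mi = torch.stack([mutual_info(p) for p in probs]).mean()      # maximize MI on unlabeled target data
consensus = torch.stack(probs).mean(dim=0).detach()
disparity = torch.stack([F.kl_div(p.log(), consensus, reduction="batchmean")
                         for p in probs]).mean()              # penalize disagreement across hypotheses

loss = -mi + 1.0 * disparity                                  # regularization weight is illustrative
print(loss.item())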

Test-Time Adaptation and Adversarial Robustness

no code implementations 1 Jan 2021 Xi Wu, Yang Guo, Tianqi Li, Jiefeng Chen, Qicheng Lao, Yingyu Liang, Somesh Jha

On the positive side, we show that, if one is allowed to access the training data, then Domain Adversarial Neural Networks (DANN), an algorithm designed for unsupervised domain adaptation, can provide nontrivial robustness in the test-time maximin threat model against strong transfer attacks and adaptive fixed point attacks.

Adversarial Robustness, Test-time Adaptation +1
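
The abstract leans on DANN; below is a generic gradient-reversal sketch of DANN-style alignment between labeled training data and unlabeled test-time inputs, not the authors' maximin threat-model analysis. Shapes and modules are toy placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reversed (scaled) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

encoder = nn.Linear(32, 16)
classifier = nn.Linear(16, 10)
domain_head = nn.Linear(16, 1)

x_src, y_src = torch.randn(8, 32), torch.randint(0, 10, (8,))
x_tgt = torch.randn(8, 32)                       # unlabeled (test-time) inputs

z_src, z_tgt = encoder(x_src), encoder(x_tgt)
cls_loss = F.cross_entropy(classifier(z_src), y_src)

z_all = GradReverse.apply(torch.cat([z_src, z_tgt]), 1.0)
d_labels = torch.cat([torch.ones(8, 1), torch.zeros(8, 1)])
dom_loss = F.binary_cross_entropy_with_logits(domain_head(z_all), d_labels)

(cls_loss + dom_loss).backward()                 # encoder is pushed toward domain-invariant features
print(cls_loss.item(), dom_loss.item())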

Towards Adversarial Robustness via Transductive Learning

no code implementations 15 Jun 2021 Jiefeng Chen, Yang Guo, Xi Wu, Tianqi Li, Qicheng Lao, Yingyu Liang, Somesh Jha

Compared to traditional "test-time" defenses, these defense mechanisms "dynamically retrain" the model based on test-time input via transductive learning; theoretically, attacking these defenses boils down to bilevel optimization, which seems to raise the difficulty for adaptive attacks.

Adversarial Robustness, Bilevel Optimization +1
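
A minimal sketch of the "dynamically retrain on the test batch" idea: copy the model, run a few unsupervised adaptation steps on the incoming test inputs, then predict. Entropy minimization here is only a placeholder transductive objective; the point is that an adaptive attacker must now anticipate this inner optimization, which is the bilevel structure the abstract refers to.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def transductive_predict(model, x_test, steps=5, lr=1e-3):
    """Retrain a copy of the model on the unlabeled test batch, then predict."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    for _ in range(steps):
        probs = F.softmax(adapted(x_test), dim=1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
        opt.zero_grad()
        entropy.backward()
        opt.step()
    return adapted(x_test).argmax(dim=1)

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
x_test = torch.randn(16, 32)             # possibly adversarial test batch
print(transductive_predict(model, x_test))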

FlowX: Towards Explainable Graph Neural Networks via Message Flows

2 code implementations 26 Jun 2022 Shurui Gui, Hao Yuan, Jie Wang, Qicheng Lao, Kang Li, Shuiwang Ji

We investigate the explainability of graph neural networks (GNNs) as a step toward elucidating their working mechanisms.

Philosophy

Medical Image Understanding with Pretrained Vision Language Models: A Comprehensive Study

2 code implementations 30 Sep 2022 Ziyuan Qin, Huahui Yi, Qicheng Lao, Kang Li

Large-scale pre-trained vision language models (VLMs) have shown remarkable domain transfer capability on natural images.
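
A generic zero-shot sketch of the prompting setup such a study relies on, using the Hugging Face CLIP interface; the prompts, the checkpoint, and the blank stand-in image are assumptions, and the paper itself evaluates a broader set of VLMs and medical tasks.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# A blank image stands in for a real medical scan.
image = Image.new("RGB", (224, 224))
prompts = ["a photo of a benign lesion", "a photo of a malignant lesion"]

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=1)
print(dict(zip(prompts, probs[0].tolist())))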

Towards General Purpose Medical AI: Continual Learning Medical Foundation Model

no code implementations 12 Mar 2023 Huahui Yi, Ziyuan Qin, Qicheng Lao, Wei Xu, Zekun Jiang, Dequan Wang, Shaoting Zhang, Kang Li

Therefore, in this work, we further explore the possibility of leveraging pre-trained VLMs as medical foundation models for building general-purpose medical AI. We thoroughly investigate three machine-learning paradigms, i.e., domain/task-specialized learning, joint learning, and continual learning, for training the VLMs, and evaluate their generalization performance on cross-domain and cross-task test sets.

Continual Learning
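
A toy contrast of the three training regimes mentioned above: specialized (one model per domain), joint (one model on pooled data), and continual (sequential fine-tuning), using linear models on synthetic data. This only illustrates the data regimes; the actual study trains VLMs and measures cross-domain and cross-task generalization.

import torch
import torch.nn as nn
import torch.nn.functional as F

def train(model, batches, lr=1e-3):
    """One pass of supervised training over a list of (x, y) batches."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    for x, y in batches:
        loss = F.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

def toy_domain(n=64, shift=0.0):
    """Synthetic (x, y) batches standing in for one medical domain."""
    x = torch.randn(n, 32) + shift
    y = torch.randint(0, 5, (n,))
    return [(x[i:i + 16], y[i:i + 16]) for i in range(0, n, 16)]

domain_a, domain_b = toy_domain(), toy_domain(shift=2.0)

specialized = [train(nn.Linear(32, 5), d) for d in (domain_a, domain_b)]  # one model per domain
joint = train(nn.Linear(32, 5), domain_a + domain_b)                      # single model, pooled data
continual = train(train(nn.Linear(32, 5), domain_a), domain_b)            # sequential fine-tuning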

SAM on Medical Images: A Comprehensive Study on Three Prompt Modes

no code implementations 28 Apr 2023 Dongjie Cheng, Ziyuan Qin, Zekun Jiang, Shaoting Zhang, Qicheng Lao, Kang Li

As the first promptable foundation model for segmentation tasks, the Segment Anything Model (SAM) was trained on a large dataset with an unprecedented number of images and annotations.

Image Segmentation, Medical Image Segmentation +2
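
A sketch of three ways of prompting SAM (automatic "everything" mode, a point, and a box) using the official segment-anything package; the checkpoint path and the blank image are placeholders, and which modes the paper compares is taken from its title rather than verified here.

import numpy as np
from segment_anything import SamAutomaticMaskGenerator, SamPredictor, sam_model_registry

# Assumes the `segment-anything` package and a locally downloaded ViT-B checkpoint.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
image = np.zeros((512, 512, 3), dtype=np.uint8)      # stand-in for an RGB-converted medical slice

# 1) "Everything" mode: no prompt, SAM proposes masks on its own.
auto_masks = SamAutomaticMaskGenerator(sam).generate(image)

# 2) Point prompt and 3) box prompt via the predictor.
predictor = SamPredictor(sam)
predictor.set_image(image)
point_masks, _, _ = predictor.predict(point_coords=np.array([[256, 256]]),
                                      point_labels=np.array([1]))
box_masks, _, _ = predictor.predict(box=np.array([128, 128, 384, 384]))
print(len(auto_masks), point_masks.shape, box_masks.shape)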

ConES: Concept Embedding Search for Parameter Efficient Tuning Large Vision Language Models

no code implementations 30 May 2023 Huahui Yi, Ziyuan Qin, Wei Xu, Miaotian Guo, Kun Wang, Shaoting Zhang, Kang Li, Qicheng Lao

To achieve this, we propose a Concept Embedding Search (ConES) approach that optimizes prompt embeddings, without the need for the text encoder, to capture the 'concept' of the image modality through a variety of task objectives.

Instance Segmentation, Prompt Engineering +2
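
A minimal sketch of the "optimize prompt embeddings, skip the text encoder" idea: a learnable embedding is the only trainable tensor, a frozen image pathway consumes it, and any task loss drives the search. The fusion head, dimensions, and objective below are toy assumptions, not the ConES architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins: a frozen image encoder and a frozen head that consumes the prompt embedding.
image_encoder = nn.Linear(128, 64)
fusion_head = nn.Linear(64 + 32, 10)
for p in list(image_encoder.parameters()) + list(fusion_head.parameters()):
    p.requires_grad_(False)

# Learnable "concept" embedding replaces the text-encoder output entirely.
prompt_embed = nn.Parameter(torch.randn(1, 32))
opt = torch.optim.AdamW([prompt_embed], lr=1e-3)

images, labels = torch.randn(16, 128), torch.randint(0, 10, (16,))
for _ in range(10):
    img_feat = image_encoder(images)
    logits = fusion_head(torch.cat([img_feat, prompt_embed.expand(len(images), -1)], dim=1))
    loss = F.cross_entropy(logits, labels)       # any task objective could sit here
    opt.zero_grad()
    loss.backward()
    opt.step()
print(loss.item())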

Deblurring Masked Autoencoder is Better Recipe for Ultrasound Image Recognition

1 code implementation 14 Jun 2023 Qingbo Kang, Jun Gao, Kang Li, Qicheng Lao

Overall, our work highlights the potential of MAE for ultrasound image recognition and presents a novel approach that incorporates deblurring to further improve its effectiveness.

Deblurring, Image Classification
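
One plausible reading of "deblurring MAE", sketched below: blur the ultrasound input, mask most patches, and train the decoder to reconstruct the sharp original on the masked positions, so pretraining must both in-paint and deblur. The patch size, blur strength, and the decoder stand-in are assumptions; the paper's exact formulation may differ.

import torch
import torch.nn.functional as F
from torchvision.transforms.functional import gaussian_blur

def patchify(img, p=16):
    """(B, 1, H, W) -> (B, N, p*p) non-overlapping patches."""
    b, c, h, w = img.shape
    return img.unfold(2, p, p).unfold(3, p, p).reshape(b, c, -1, p * p).squeeze(1)

sharp = torch.rand(2, 1, 224, 224)                        # ultrasound image (reconstruction target)
blurred = gaussian_blur(sharp, kernel_size=9, sigma=3.0)  # degraded encoder input

target = patchify(sharp)
pred = patchify(blurred) + 0.01 * torch.randn_like(target)   # stand-in for the MAE decoder output

n = target.shape[1]
mask = torch.rand(2, n) < 0.75                            # 75% of patches masked, as in MAE
loss = (F.mse_loss(pred, target, reduction="none").mean(dim=-1) * mask).sum() / mask.sum()
print(loss.item())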

Increasing SAM Zero-Shot Performance on Multimodal Medical Images Using GPT-4 Generated Descriptive Prompts Without Human Annotation

no code implementations 24 Feb 2024 Zekun Jiang, Dongjie Cheng, Ziyuan Qin, Jun Gao, Qicheng Lao, Kang Li, Le Zhang

This study develops and evaluates a novel multimodal medical image zero-shot segmentation algorithm named Text-Visual-Prompt SAM (TV-SAM) without any manual annotations.

Descriptive, Language Modelling +3
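
A skeleton of the text-to-visual-prompt pipeline the title describes: a language model produces a descriptive text prompt, a grounding step turns it into a box, and the box is handed to SAM (as in the SAM sketch earlier in this list). Both helper functions below are hypothetical placeholders, not the actual GPT-4 or grounding-model calls used in TV-SAM.

import numpy as np

def gpt4_describe(modality: str, organ_hint: str) -> str:
    """Hypothetical stand-in for a GPT-4 call that returns a descriptive text prompt."""
    return f"a {modality} image showing {organ_hint}"

def ground_text_to_box(image: np.ndarray, text: str) -> np.ndarray:
    """Hypothetical stand-in for a text-grounded detector that maps the prompt to an XYXY box."""
    h, w = image.shape[:2]
    return np.array([w // 4, h // 4, 3 * w // 4, 3 * h // 4])

image = np.zeros((512, 512, 3), dtype=np.uint8)
text_prompt = gpt4_describe("CT", "the liver")
box_prompt = ground_text_to_box(image, text_prompt)
# `box_prompt` would then be passed to a SAM predictor for zero-shot masks, with no human annotation.
print(text_prompt, box_prompt)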

OpenMEDLab: An Open-source Platform for Multi-modality Foundation Models in Medicine

no code implementations 28 Feb 2024 Xiaosong Wang, Xiaofan Zhang, Guotai Wang, Junjun He, Zhongyu Li, Wentao Zhu, Yi Guo, Qi Dou, Xiaoxiao Li, Dequan Wang, Liang Hong, Qicheng Lao, Tong Ruan, Yukun Zhou, Yixue Li, Jie Zhao, Kang Li, Xin Sun, Lifeng Zhu, Shaoting Zhang

The emerging trend of advancing generalist artificial intelligence, such as GPTv4 and Gemini, has reshaped the landscape of research (academia and industry) in machine learning and many other research areas.

Transfer Learning
