Search Results for author: Ziyuan Qin

Found 7 papers, 2 papers with code

Increasing SAM Zero-Shot Performance on Multimodal Medical Images Using GPT-4 Generated Descriptive Prompts Without Human Annotation

no code implementations · 24 Feb 2024 · Zekun Jiang, Dongjie Cheng, Ziyuan Qin, Jun Gao, Qicheng Lao, Kang Li, Le Zhang

This study develops and evaluates a novel multimodal medical image zero-shot segmentation algorithm named Text-Visual-Prompt SAM (TV-SAM) without any manual annotations.

Descriptive Language Modelling +3

Edge Generation Scheduling for DAG Tasks Using Deep Reinforcement Learning

1 code implementation · 28 Aug 2023 · Binqi Sun, Mirco Theile, Ziyuan Qin, Daniele Bernardini, Debayan Roy, Andrea Bastoni, Marco Caccamo

Using this schedulability test, we propose a new DAG scheduling framework (edge generation scheduling -- EGS) that attempts to minimize the DAG width by iteratively generating edges while guaranteeing the deadline constraint.

reinforcement-learning Scheduling
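The edge-generation idea above can be sketched in plain Python: serialise pairs of incomparable DAG nodes (i.e., reduce the DAG width) as long as the critical path still meets the deadline. This is a minimal greedy sketch, not the paper's method — EGS trains a deep-RL agent to choose the edges, and all names below (`generate_edges`, `longest_path`, `has_path`) are hypothetical.

```python
from itertools import combinations

def longest_path(nodes, wcet, edges):
    """Makespan lower bound: longest WCET-weighted path through the DAG."""
    succ = {u: [] for u in nodes}
    indeg = {u: 0 for u in nodes}
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    order = [u for u in nodes if indeg[u] == 0]
    i = 0
    while i < len(order):  # Kahn's topological sort
        for v in succ[order[i]]:
            indeg[v] -= 1
            if indeg[v] == 0:
                order.append(v)
        i += 1
    finish = {}
    for u in order:
        finish[u] = wcet[u] + max((finish[p] for p, q in edges if q == u), default=0)
    return max(finish.values())

def has_path(edges, src, dst):
    """DFS reachability check, so we never add an edge between ordered nodes."""
    succ = {}
    for u, v in edges:
        succ.setdefault(u, []).append(v)
    stack, seen = [src], set()
    while stack:
        u = stack.pop()
        if u == dst:
            return True
        if u not in seen:
            seen.add(u)
            stack.extend(succ.get(u, []))
    return False

def generate_edges(nodes, wcet, edges, deadline):
    """Greedily serialise incomparable node pairs while the critical path
    stays within the deadline (EGS instead learns this edge choice via RL)."""
    edges = set(edges)
    for u, v in combinations(nodes, 2):
        if has_path(edges, u, v) or has_path(edges, v, u):
            continue  # pair is already ordered
        candidate = edges | {(u, v)}
        if longest_path(nodes, wcet, candidate) <= deadline:
            edges = candidate  # serialising u before v still meets the deadline
    return edges
```

For three independent unit-WCET tasks and deadline 3, the greedy pass serialises everything into a chain (width 1); with deadline 2 it can only merge some pairs before further edges would violate the deadline.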

ConES: Concept Embedding Search for Parameter Efficient Tuning Large Vision Language Models

no code implementations · 30 May 2023 · Huahui Yi, Ziyuan Qin, Wei Xu, Miaotian Guo, Kun Wang, Shaoting Zhang, Kang Li, Qicheng Lao

To achieve this, we propose a Concept Embedding Search (ConES) approach that optimizes prompt embeddings directly -- without the need for a text encoder -- to capture the 'concept' of the image modality through a variety of task objectives.

Instance Segmentation Prompt Engineering +2
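The core idea — treating the prompt as a free embedding vector optimized by gradient descent, bypassing the text encoder — can be illustrated with a toy sketch. This is an assumption-laden illustration, not ConES itself: the paper optimizes through a frozen vision-language model with task losses, whereas here a hypothetical `learn_prompt_embedding` minimizes a simple squared-error objective against the mean frozen image feature.

```python
import numpy as np

def learn_prompt_embedding(image_feats, steps=200, lr=0.1, seed=0):
    """Optimise a free prompt-embedding vector directly (no text encoder).
    Toy objective: pull the embedding towards the mean frozen image feature;
    ConES itself backpropagates task losses through a frozen VLM instead."""
    rng = np.random.default_rng(seed)
    dim = image_feats.shape[1]
    prompt = rng.normal(size=dim)          # randomly initialised, learnable
    target = image_feats.mean(axis=0)      # stand-in "concept" of the modality
    for _ in range(steps):
        grad = 2.0 * (prompt - target)     # d/dprompt ||prompt - target||^2
        prompt -= lr * grad                # plain gradient-descent step
    return prompt
```

The design point the sketch mirrors is that only the prompt embedding receives gradients; every encoder stays frozen, which is what makes the tuning parameter-efficient.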

SAM on Medical Images: A Comprehensive Study on Three Prompt Modes

no code implementations · 28 Apr 2023 · Dongjie Cheng, Ziyuan Qin, Zekun Jiang, Shaoting Zhang, Qicheng Lao, Kang Li

As the first promptable foundation model for segmentation tasks, SAM was trained on a large dataset with an unprecedented number of images and annotations.

Image Segmentation Medical Image Segmentation +2

Towards General Purpose Medical AI: Continual Learning Medical Foundation Model

no code implementations · 12 Mar 2023 · Huahui Yi, Ziyuan Qin, Qicheng Lao, Wei Xu, Zekun Jiang, Dequan Wang, Shaoting Zhang, Kang Li

Therefore, in this work, we further explore the possibility of leveraging pre-trained VLMs as medical foundation models for building general-purpose medical AI. We thoroughly investigate three machine-learning paradigms, i.e., domain/task-specialized learning, joint learning, and continual learning, for training the VLMs, and evaluate their generalization performance on cross-domain and cross-task test sets.

Continual Learning

FADO: Feedback-Aware Double COntrolling Network for Emotional Support Conversation

no code implementations · 1 Nov 2022 · Wei Peng, Ziyuan Qin, Yue Hu, Yuqiang Xie, Yunpeng Li

The core module in FADO consists of a dual-level feedback strategy selector and a double control reader.

Response Generation

Medical Image Understanding with Pretrained Vision Language Models: A Comprehensive Study

2 code implementations · 30 Sep 2022 · Ziyuan Qin, Huahui Yi, Qicheng Lao, Kang Li

The large-scale pre-trained vision language models (VLM) have shown remarkable domain transfer capability on natural images.
