Search Results for author: Chaoyang Zhu

Found 5 papers, 4 papers with code

Deep Instruction Tuning for Segment Anything Model

1 code implementation • 31 Mar 2024 • Xiaorui Huang, Gen Luo, Chaoyang Zhu, Bo Tong, Yiyi Zhou, Xiaoshuai Sun, Rongrong Ji

Segment Anything Model (SAM) has recently exhibited powerful yet versatile capabilities on (un)conditional image segmentation tasks.

Image Segmentation • Segmentation +1

A Survey on Open-Vocabulary Detection and Segmentation: Past, Present, and Future

1 code implementation • 18 Jul 2023 • Chaoyang Zhu, Long Chen

By "open-vocabulary", we mean that the models can classify objects beyond pre-defined categories.

Knowledge Distillation • object-detection +6

SeqTR: A Simple yet Universal Network for Visual Grounding

3 code implementations • 30 Mar 2022 • Chaoyang Zhu, Yiyi Zhou, Yunhang Shen, Gen Luo, Xingjia Pan, Mingbao Lin, Chao Chen, Liujuan Cao, Xiaoshuai Sun, Rongrong Ji

In this paper, we propose a simple yet universal network termed SeqTR for visual grounding tasks, e.g., phrase localization, referring expression comprehension (REC) and segmentation (RES).

Referring Expression • Referring Expression Comprehension +1

TRAR: Routing the Attention Spans in Transformer for Visual Question Answering

1 code implementation • ICCV 2021 • Yiyi Zhou, Tianhe Ren, Chaoyang Zhu, Xiaoshuai Sun, Jianzhuang Liu, Xinghao Ding, Mingliang Xu, Rongrong Ji

Due to their superior ability in modeling global dependencies, Transformer and its variants have become the primary choice for many vision-and-language tasks.

Question Answering • Referring Expression +2

Place recognition: An Overview of Vision Perspective

no code implementations • 17 Jun 2017 • Zhiqiang Zeng, Jian Zhang, Xiaodong Wang, Yuming Chen, Chaoyang Zhu

Place recognition is one of the most fundamental topics in the computer vision and robotics communities, where the task is to accurately and efficiently recognize the location of a given query image.

Image Classification • Image Retrieval +4
