Search Results for author: Bowen Dong

Found 19 papers, 12 papers with code

LPT++: Efficient Training on Mixture of Long-tailed Experts

no code implementations17 Sep 2024 Bowen Dong, Pan Zhou, WangMeng Zuo

We introduce LPT++, a comprehensive framework for long-tailed classification that combines parameter-efficient fine-tuning (PEFT) with a learnable model ensemble.

parameter-efficient fine-tuning
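
The entry above has no linked code; as a rough, hypothetical sketch of the "learnable model ensemble" idea mentioned in the abstract (not the actual LPT++ mixture-of-long-tailed-experts design), one could combine frozen expert classifiers with learnable mixing weights:

```python
import torch
import torch.nn as nn

class LearnableEnsemble(nn.Module):
    """Hypothetical sketch: mix the logits of frozen expert classifiers with
    learnable weights. LPT++'s mixture of long-tailed experts is more involved;
    this only illustrates the 'learnable model ensemble' component."""

    def __init__(self, experts):
        super().__init__()
        self.experts = nn.ModuleList(experts)
        for p in self.experts.parameters():
            p.requires_grad_(False)                          # experts stay frozen
        self.mix = nn.Parameter(torch.zeros(len(experts)))   # only these weights train

    def forward(self, x):
        logits = torch.stack([expert(x) for expert in self.experts], dim=0)  # (E, B, C)
        weights = torch.softmax(self.mix, dim=0).view(-1, 1, 1)
        return (weights * logits).sum(dim=0)                                 # (B, C)
```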

IMWA: Iterative Model Weight Averaging Benefits Class-Imbalanced Learning Tasks

no code implementations25 Apr 2024 Zitong Huang, Ze Chen, Bowen Dong, Chaoqi Liang, Erjin Zhou, WangMeng Zuo

Model Weight Averaging (MWA) is a technique that seeks to enhance a model's performance by averaging the weights of multiple trained models.

Image Classification object-detection +2
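
As a minimal illustration of the plain model weight averaging that the abstract builds on (IMWA's iterative, class-imbalance-aware scheme is not reproduced here), averaging PyTorch state dicts could look like this:

```python
import copy
import torch

def average_weights(models):
    """Average the parameters of models that share the same architecture.

    Plain model weight averaging (MWA); IMWA's iterative variant for
    class-imbalanced learning is not shown here.
    """
    avg_model = copy.deepcopy(models[0])
    avg_state = avg_model.state_dict()
    for key in avg_state:
        # Stack the corresponding tensor from every model and take the element-wise mean.
        stacked = torch.stack([m.state_dict()[key].float() for m in models], dim=0)
        avg_state[key] = stacked.mean(dim=0).to(avg_state[key].dtype)
    avg_model.load_state_dict(avg_state)
    return avg_model
```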

ConSept: Continual Semantic Segmentation via Adapter-based Vision Transformer

no code implementations26 Feb 2024 Bowen Dong, Guanglei Yang, WangMeng Zuo, Lei Zhang

Empirical investigations on the adaptation of existing frameworks to vanilla ViT reveal that incorporating visual adapters into ViTs or fine-tuning ViTs with distillation terms is advantageous for enhancing the segmentation capability of novel classes.

Continual Semantic Segmentation Segmentation +1
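
For readers unfamiliar with the "visual adapters" mentioned above, a generic bottleneck adapter inserted residually into a ViT block looks roughly like the following; this is a standard layout used for illustration, not the ConSept implementation:

```python
import torch.nn as nn

class Adapter(nn.Module):
    """Generic bottleneck adapter (down-project -> GELU -> up-project, added
    residually inside a ViT block). A standard layout, not ConSept's code."""

    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)  # start near-identity so the frozen ViT is unchanged
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))
```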

FlexKBQA: A Flexible LLM-Powered Framework for Few-Shot Knowledge Base Question Answering

1 code implementation23 Aug 2023 Zhenyu Li, Sunqi Fan, Yu Gu, Xiuxing Li, Zhichao Duan, Bowen Dong, Ning Liu, Jianyong Wang

Knowledge base question answering (KBQA) is a critical yet challenging task due to the vast number of entities within knowledge bases and the diversity of natural language questions posed by users.

Knowledge Base Question Answering

Can LLMs like GPT-4 outperform traditional AI tools in dementia diagnosis? Maybe, but not today

no code implementations2 Jun 2023 Zhuo Wang, Rongzhen Li, Bowen Dong, Jie Wang, Xiuxing Li, Ning Liu, Chenhui Mao, Wei zhang, Liling Dong, Jing Gao, Jianyong Wang

In this paper, we explore the potential of LLMs such as GPT-4 to outperform traditional AI tools in dementia diagnosis.

Towards Universal Vision-language Omni-supervised Segmentation

no code implementations12 Mar 2023 Bowen Dong, Jiaxi Gu, Jianhua Han, Hang Xu, WangMeng Zuo

We leverage omni-supervised data (i.e., panoptic segmentation data, object detection data, and image-text pairs) during training, thus enriching the open-world segmentation ability and achieving better segmentation accuracy.

Instance Segmentation object-detection +4

CLIP2Point: Transfer CLIP to Point Cloud Classification with Image-Depth Pre-training

1 code implementation ICCV 2023 Tianyu Huang, Bowen Dong, Yunhan Yang, Xiaoshui Huang, Rynson W. H. Lau, Wanli Ouyang, WangMeng Zuo

To address this issue, we propose CLIP2Point, an image-depth pre-training method based on contrastive learning that transfers CLIP to the 3D domain and adapts it to point cloud classification.

Contrastive Learning Few-Shot Learning +5
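
The contrastive image-depth pre-training mentioned in the abstract can be sketched as a CLIP-style symmetric InfoNCE loss over paired image and depth embeddings; this is a generic approximation, not the exact CLIP2Point objective:

```python
import torch
import torch.nn.functional as F

def image_depth_contrastive_loss(img_emb, depth_emb, temperature=0.07):
    """CLIP-style symmetric InfoNCE between paired image and depth embeddings.

    A generic sketch of the contrastive objective; CLIP2Point's full loss
    (e.g., additional intra-modal terms) is not reproduced here.
    """
    img_emb = F.normalize(img_emb, dim=-1)
    depth_emb = F.normalize(depth_emb, dim=-1)
    logits = img_emb @ depth_emb.t() / temperature       # (B, B) similarity matrix
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    loss_i2d = F.cross_entropy(logits, targets)          # match each image to its depth map
    loss_d2i = F.cross_entropy(logits.t(), targets)      # and vice versa
    return 0.5 * (loss_i2d + loss_d2i)
```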

LPT: Long-tailed Prompt Tuning for Image Classification

1 code implementation3 Oct 2022 Bowen Dong, Pan Zhou, Shuicheng Yan, WangMeng Zuo

For better effectiveness, we divide prompts into two groups: 1) a shared prompt for the whole long-tailed dataset to learn general features and to adapt the pretrained model to the target domain; and 2) group-specific prompts to gather group-specific features for samples with similar features and to empower the pretrained model with discrimination ability.

 Ranked #1 on Long-tail Learning on CIFAR-100-LT (ρ=100) (using extra training data)

Classification Image Classification +1
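
An illustrative (unofficial) sketch of the shared-plus-group-specific prompt idea described in the abstract, with group prompts selected by matching a query feature against learnable keys:

```python
import torch
import torch.nn as nn

class SharedAndGroupPrompts(nn.Module):
    """Unofficial sketch: a shared prompt for all samples plus group-specific
    prompts selected by key matching; not the released LPT implementation."""

    def __init__(self, num_groups, prompt_len, dim):
        super().__init__()
        self.shared = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        self.group = nn.Parameter(torch.randn(num_groups, prompt_len, dim) * 0.02)
        self.keys = nn.Parameter(torch.randn(num_groups, dim) * 0.02)

    def forward(self, query_feat):                       # query_feat: (B, dim), e.g. a [CLS] feature
        sims = torch.einsum("bd,gd->bg", query_feat, self.keys)
        idx = sims.argmax(dim=1)                         # pick the best-matching group per sample
        group_prompt = self.group[idx]                   # (B, prompt_len, dim)
        shared = self.shared.unsqueeze(0).expand(query_feat.size(0), -1, -1)
        return torch.cat([shared, group_prompt], dim=1)  # prepended to the ViT token sequence
```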

W2N: Switching From Weak Supervision to Noisy Supervision for Object Detection

1 code implementation25 Jul 2022 Zitong Huang, Yiping Bao, Bowen Dong, Erjin Zhou, WangMeng Zuo

Given pseudo ground-truths generated by the well-trained WSOD network, we propose a two-module iterative training algorithm to refine the pseudo labels and progressively supervise a better object detector.

Object object-detection +2
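
At a high level, the two-module iterative scheme described above alternates between training the detector on the current labels and refining those labels with the trained detector; the callables below are hypothetical stand-ins, not the authors' code:

```python
def iterative_w2n_training(initial_pseudo_gts, train_detector, refine_labels, num_rounds=3):
    """High-level skeleton of the iterative scheme described above.

    `train_detector` and `refine_labels` are hypothetical callables standing in
    for the two modules; this is not the released W2N training code.
    """
    labels = initial_pseudo_gts
    detector = None
    for _ in range(num_rounds):
        detector = train_detector(labels)           # supervise the detector with current labels
        labels = refine_labels(detector, labels)    # use the detector to clean up the labels
    return detector, labels
```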

Prompt Tuning for Discriminative Pre-trained Language Models

1 code implementation Findings (ACL) 2022 Yuan YAO, Bowen Dong, Ao Zhang, Zhengyan Zhang, Ruobing Xie, Zhiyuan Liu, Leyu Lin, Maosong Sun, Jianyong Wang

Recent works have shown promising results of prompt tuning in stimulating pre-trained language models (PLMs) for natural language processing (NLP) tasks.

Language Modelling Question Answering +2

Self-Promoted Supervision for Few-Shot Transformer

1 code implementation14 Mar 2022 Bowen Dong, Pan Zhou, Shuicheng Yan, WangMeng Zuo

The few-shot learning ability of vision transformers (ViTs) is rarely investigated, though it is heavily desired.

Data Augmentation Few-Shot Learning +1

Boosting Weakly Supervised Object Detection via Learning Bounding Box Adjusters

1 code implementation ICCV 2021 Bowen Dong, Zitong Huang, Yuelin Guo, Qilong Wang, Zhenxing Niu, WangMeng Zuo

In this paper, we defend the problem setting for improving localization performance by leveraging the bounding box regression knowledge from a well-annotated auxiliary dataset.

Object object-detection +3

Missing Movie Synergistic Completion across Multiple Isomeric Online Movie Knowledge Libraries

no code implementations15 May 2019 Bowen Dong, Jiawei Zhang, Chenwei Zhang, Yang Yang, Philip S. Yu

Online knowledge libraries refer to the online data warehouses that systematically organize and categorize the knowledge-based information about different kinds of concepts and entities.

FAKEDETECTOR: Effective Fake News Detection with Deep Diffusive Neural Network

2 code implementations22 May 2018 Jiawei Zhang, Bowen Dong, Philip S. Yu

This paper aims at investigating the principles, methodologies and algorithms for detecting fake news articles, creators and subjects from online social networks and evaluating the corresponding performance.

Fake News Detection
