no code implementations • 17 Sep 2024 • Bowen Dong, Pan Zhou, WangMeng Zuo
We introduce LPT++, a comprehensive framework for long-tailed classification that combines parameter-efficient fine-tuning (PEFT) with a learnable model ensemble.
no code implementations • 25 Apr 2024 • Zitong Huang, Ze Chen, Bowen Dong, Chaoqi Liang, Erjin Zhou, WangMeng Zuo
Model Weight Averaging (MWA) is a technique that seeks to enhance a model's performance by averaging the weights of multiple trained models.
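The averaging idea can be sketched as follows (a minimal illustration under the assumption that all models share an identical architecture, not the paper's exact method): take the element-wise mean of each named parameter across models.

```python
# Minimal sketch of model weight averaging: each "model" is a dict
# mapping parameter names to lists of floats (illustrative only).

def average_weights(models):
    """Element-wise mean of parameters across models with identical structure."""
    n = len(models)
    return {
        k: [sum(m[k][i] for m in models) / n for i in range(len(models[0][k]))]
        for k in models[0]
    }

m1 = {"w": [1.0, 2.0], "b": [0.0]}
m2 = {"w": [3.0, 4.0], "b": [2.0]}
avg = average_weights([m1, m2])
# avg == {"w": [2.0, 3.0], "b": [1.0]}
```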
no code implementations • 26 Feb 2024 • Bowen Dong, Guanglei Yang, WangMeng Zuo, Lei Zhang
Empirical investigations on the adaptation of existing frameworks to vanilla ViT reveal that incorporating visual adapters into ViTs or fine-tuning ViTs with distillation terms is advantageous for enhancing the segmentation capability of novel classes.
1 code implementation • 7 Feb 2024 • Lijun Li, Bowen Dong, Ruohui Wang, Xuhao Hu, WangMeng Zuo, Dahua Lin, Yu Qiao, Jing Shao
In the rapidly evolving landscape of Large Language Models (LLMs), ensuring robust safety measures is paramount.
no code implementations • 29 Sep 2023 • Tianyu Huang, Yihan Zeng, Bowen Dong, Hang Xu, Songcen Xu, Rynson W. H. Lau, WangMeng Zuo
To this end, an NTFGen module is proposed to model general text latent code in noisy fields.
1 code implementation • 23 Aug 2023 • Zhenyu Li, Sunqi Fan, Yu Gu, Xiuxing Li, Zhichao Duan, Bowen Dong, Ning Liu, Jianyong Wang
Knowledge base question answering (KBQA) is a critical yet challenging task due to the vast number of entities within knowledge bases and the diversity of natural language questions posed by users.
no code implementations • 2 Jun 2023 • Zhuo Wang, Rongzhen Li, Bowen Dong, Jie Wang, Xiuxing Li, Ning Liu, Chenhui Mao, Wei Zhang, Liling Dong, Jing Gao, Jianyong Wang
In this paper, we explore the potential of LLMs such as GPT-4 to outperform traditional AI tools in dementia diagnosis.
no code implementations • 12 Mar 2023 • Bowen Dong, Jiaxi Gu, Jianhua Han, Hang Xu, WangMeng Zuo
To improve open-world segmentation, we incorporate omni-supervised data (i.e., panoptic segmentation data, object detection data, and image-text pairs) into training, thus enriching the model's open-world capability and achieving better segmentation accuracy.
1 code implementation • CVPR 2023 • Zixian Guo, Bowen Dong, Zhilong Ji, Jinfeng Bai, Yiwen Guo, WangMeng Zuo
Nonetheless, visual data (e.g., images) is by default a prerequisite for learning prompts in existing methods.
1 code implementation • ICCV 2023 • Tianyu Huang, Bowen Dong, Yunhan Yang, Xiaoshui Huang, Rynson W. H. Lau, Wanli Ouyang, WangMeng Zuo
To address this issue, we propose CLIP2Point, an image-depth pre-training method that uses contrastive learning to transfer CLIP to the 3D domain and adapts it to point cloud classification.
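Contrastive pre-training of this flavor treats each image and its corresponding depth map as a positive pair, with all other pairings in the batch as negatives. A minimal symmetric InfoNCE-style loss sketch (an illustrative toy, not the CLIP2Point implementation; the temperature value is an assumption):

```python
import numpy as np

def symmetric_info_nce(img_emb, depth_emb, temperature=0.07):
    """Contrastive loss pairing each image with its own depth map:
    positives on the diagonal, all other batch pairings as negatives."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    dep = depth_emb / np.linalg.norm(depth_emb, axis=1, keepdims=True)
    logits = img @ dep.T / temperature  # (B, B) cosine similarities
    idx = np.arange(len(img))

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)            # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[idx, idx].mean()              # diagonal = positives

    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

# Orthonormal, perfectly aligned embeddings give a near-zero loss.
aligned = np.eye(4)
loss = symmetric_info_nce(aligned, aligned)
```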
Ranked #3 on Training-free 3D Point Cloud Classification on ScanObjectNN (using extra training data)
1 code implementation • 3 Oct 2022 • Bowen Dong, Pan Zhou, Shuicheng Yan, WangMeng Zuo
For better effectiveness, we divide prompts into two groups: 1) a shared prompt for the whole long-tailed dataset, which learns general features and adapts the pretrained model to the target domain; and 2) group-specific prompts, which gather group-specific features for samples with similar features and empower the pretrained model with discriminative ability.
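The grouping idea can be sketched schematically (an illustrative toy with made-up cosine-similarity key matching, not LPT's actual query mechanism): route each sample to the group whose key best matches its feature, then prepend the shared prompt.

```python
import numpy as np

def select_prompts(query_feat, shared_prompt, group_keys, group_prompts):
    """Pick the group whose key is most similar to the query feature,
    then concatenate the shared prompt with that group's prompt tokens."""
    q = query_feat / np.linalg.norm(query_feat)
    keys = group_keys / np.linalg.norm(group_keys, axis=1, keepdims=True)
    best = int(np.argmax(keys @ q))  # cosine-similarity matching
    return np.concatenate([shared_prompt, group_prompts[best]]), best

shared = np.zeros((2, 3))                        # 2 shared prompt tokens, dim 3
groups = np.ones((2, 1, 3))                      # 2 groups, 1 prompt token each
keys = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
feat = np.array([0.1, 0.9, 0.0])                 # closest to the second key
prompts, gid = select_prompts(feat, shared, keys, groups)
# gid == 1; prompts stacks 2 shared + 1 group-specific token -> shape (3, 3)
```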
Ranked #1 on Long-tail Learning on CIFAR-100-LT (ρ=100) (using extra training data)
1 code implementation • 25 Jul 2022 • Zitong Huang, Yiping Bao, Bowen Dong, Erjin Zhou, WangMeng Zuo
Generally, given pseudo ground-truths generated by the well-trained WSOD network, we propose a two-module iterative training algorithm that refines the pseudo labels and progressively supervises a better object detector.
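The alternation between label refinement and detector retraining can be sketched as a schematic loop (the `train_fn`/`refine_fn` callables below are toy stand-ins, not the paper's modules):

```python
def iterative_training(train_fn, refine_fn, images, pseudo_labels, rounds=3):
    """Alternately (re)train a detector on the current pseudo labels and
    refine those labels with the trained detector (schematic loop)."""
    detector = None
    for _ in range(rounds):
        detector = train_fn(images, pseudo_labels)
        pseudo_labels = refine_fn(detector, images, pseudo_labels)
    return detector, pseudo_labels

# Toy stand-ins: the "detector" is just the mean label; refinement pulls
# each noisy pseudo label halfway toward the detector's prediction.
train_fn = lambda imgs, labels: sum(labels) / len(labels)
refine_fn = lambda det, imgs, labels: [(l + det) / 2 for l in labels]
detector, labels = iterative_training(train_fn, refine_fn, images=None,
                                      pseudo_labels=[0.0, 10.0])
# labels contract toward the detector output 5.0 each round
```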
1 code implementation • 13 Jul 2022 • Tianyu Huang, Bowen Dong, Jiaying Lin, Xiaohui Liu, Rynson W. H. Lau, WangMeng Zuo
Mirror detection aims to identify mirror regions in a given input image.
1 code implementation • Findings (ACL) 2022 • Yuan YAO, Bowen Dong, Ao Zhang, Zhengyan Zhang, Ruobing Xie, Zhiyuan Liu, Leyu Lin, Maosong Sun, Jianyong Wang
Recent works have shown promising results of prompt tuning in stimulating pre-trained language models (PLMs) for natural language processing (NLP) tasks.
1 code implementation • 14 Mar 2022 • Bowen Dong, Pan Zhou, Shuicheng Yan, WangMeng Zuo
The few-shot learning ability of vision transformers (ViTs) is rarely investigated, though it is highly desirable.
1 code implementation • ICCV 2021 • Bowen Dong, Zitong Huang, Yuelin Guo, Qilong Wang, Zhenxing Niu, WangMeng Zuo
In this paper, we study the problem setting of improving localization performance by leveraging bounding box regression knowledge from a well-annotated auxiliary dataset.
1 code implementation • COLING 2020 • Bowen Dong, Yuan YAO, Ruobing Xie, Tianyu Gao, Xu Han, Zhiyuan Liu, Fen Lin, Leyu Lin, Maosong Sun
Few-shot classification requires classifiers to adapt to new classes with only a few training instances.
no code implementations • 15 May 2019 • Bowen Dong, Jiawei Zhang, Chenwei Zhang, Yang Yang, Philip S. Yu
Online knowledge libraries refer to online data warehouses that systematically organize and categorize knowledge-based information about different kinds of concepts and entities.
2 code implementations • 22 May 2018 • Jiawei Zhang, Bowen Dong, Philip S. Yu
This paper investigates the principles, methodologies and algorithms for detecting fake news articles, creators and subjects from online social networks, and evaluates the corresponding performance.