1 code implementation • 17 Apr 2025 • Yu Song, Tatsuaki Goh, Yinhao Li, Jiahua Dong, Shunsuke Miyashima, Yutaro Iwamoto, Yohei Kondo, Keiji Nakajima, Yen-Wei Chen
To the best of our knowledge, this research represents the first successful attempt to address a long-standing problem in time-lapse microscopy of the root meristem by proposing an accurate tracking method for Arabidopsis root nuclei.
1 code implementation • 26 Mar 2025 • Hao Fu, Hanbin Zhao, Jiahua Dong, Chao Zhang, Hui Qian
Recent pre-trained vision-language models (PT-VLMs) often face a Multi-Domain Class-Incremental Learning (MCIL) scenario in practice, where several classes and domains of multi-modal tasks arrive incrementally.
no code implementations • 24 Mar 2025 • Meng Cao, Pengfei Hu, Yingyao Wang, Jihao Gu, Haoran Tang, Haoze Zhao, Jiahua Dong, Wangbo Yu, Ge Zhang, Ian Reid, Xiaodan Liang
Recent advancements in Large Video Language Models (LVLMs) have highlighted their potential for multi-modal understanding, yet evaluating their factual grounding in video contexts remains a critical unsolved challenge.
1 code implementation • 24 Feb 2025 • Zhong-Zhi Li, Duzhen Zhang, Ming-Liang Zhang, Jiaxin Zhang, Zengyan Liu, Yuxuan Yao, Haotian Xu, Junhao Zheng, Pei-Jie Wang, Xiuyi Chen, Yingying Zhang, Fei Yin, Jiahua Dong, Zhijiang Guo, Le Song, Cheng-Lin Liu
Achieving human-level intelligence requires refining the transition from the fast, intuitive System 1 to the slower, more deliberate System 2 reasoning.
no code implementations • 26 Jan 2025 • Jiahang Tu, Qian Feng, Chufan Chen, Jiahua Dong, Hanbin Zhao, Chao Zhang, Hui Qian
Large-scale text-to-image (T2I) diffusion models have achieved remarkable generative performance on various concepts.
no code implementations • 15 Jan 2025 • Yichen Li, Yuying Wang, Jiahua Dong, Haozhao Wang, Yining Qi, Rui Zhang, Ruixuan Li
We revisit this problem with a large-scale benchmark and analyze the performance of state-of-the-art FCL approaches under different resource-constrained settings.
no code implementations • 6 Dec 2024 • Jiahua Dong, Tong Wu, Rui Qian, Jiaqi Wang
To this end, we propose SimC3D, a simple but effective 3D contrastive learning framework, for the first time, pretraining 3D backbones from pure RGB image data.
1 code implementation • 18 Nov 2024 • Duzhen Zhang, Yahan Yu, Chenxing Li, Jiahua Dong, Dong Yu
In a more realistic scenario, local clients receive new entity types continuously, while new local clients collecting novel data may irregularly join the global FNER training.
1 code implementation • 4 Nov 2024 • Meng Cao, Yuyang Liu, Yingfei Liu, Tiancai Wang, Jiahua Dong, Henghui Ding, Xiangyu Zhang, Ian Reid, Xiaodan Liang
In terms of methodology, we propose Continual LLaVA, a rehearsal-free method tailored for continual instruction tuning in LVLMs.
1 code implementation • 23 Oct 2024 • Jiahua Dong, Wenqi Liang, Hongliu Li, Duzhen Zhang, Meng Cao, Henghui Ding, Salman Khan, Fahad Shahbaz Khan
Moreover, they heavily suffer from catastrophic forgetting and concept neglect on old personalized concepts when continually learning a series of new concepts.
no code implementations • 8 Sep 2024 • Jiahua Dong, Yue Zhang, Qiuli Wang, Ruofeng Tong, Shihong Ying, Shaolin Gong, Xuanpu Zhang, Lanfen Lin, Yen-Wei Chen, S. Kevin Zhou
To achieve this, we devise a Gaussian mixture model-based label filtering module that distinguishes noisy labels from clean labels.
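The excerpt does not spell out the filtering criterion, but a common recipe in noisy-label learning (popularized by methods such as DivideMix) is to fit a two-component Gaussian mixture to per-sample losses and treat the low-mean component as clean. A minimal numpy sketch under that assumption; the EM initialization, function name, and threshold below are our own illustrative choices, not the paper's:

```python
import numpy as np

def split_clean_noisy(losses, n_iter=50, threshold=0.5):
    """Separate 'clean' from 'noisy' samples by fitting a two-component
    1-D Gaussian mixture to per-sample losses with EM, then keeping
    samples whose posterior under the low-mean component exceeds
    `threshold`."""
    x = np.asarray(losses, dtype=np.float64)
    # Initialize the two components at the extremes of the loss range.
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component per sample.
        logp = (-0.5 * (x[:, None] - mu) ** 2 / var
                - 0.5 * np.log(2 * np.pi * var) + np.log(pi))
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixing weights, means, and variances.
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    clean_comp = int(np.argmin(mu))  # low-loss component = likely clean
    return r[:, clean_comp] > threshold
```

In practice the resulting mask (or the posterior itself, as a soft weight) gates which labels participate in the supervised loss.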
1 code implementation • 4 Jul 2024 • Qian Feng, Hanbin Zhao, Chao Zhang, Jiahua Dong, Henghui Ding, Yu-Gang Jiang, Hui Qian
Prompt-fixed methods only learn a single set of prompts on one of the incremental tasks and cannot handle all the incremental tasks effectively.
1 code implementation • 1 Jun 2024 • Jiahua Dong, Hui Yin, Hongliu Li, Wenbo Li, Yulun Zhang, Salman Khan, Fahad Shahbaz Khan
Experiments verify the benefits of our DHM for HSI reconstruction.
no code implementations • 22 May 2024 • Rui Sun, Haoran Duan, Jiahua Dong, Varun Ojha, Tejal Shah, Rajiv Ranjan
A key feature of RefFiL is the generation of local fine-grained prompts by our domain adaptive prompt generator, which effectively learns from local domain knowledge while maintaining distinctive boundaries on a global scale.
no code implementations • 25 Apr 2024 • Chenxi Liu, Gan Sun, Wenqi Liang, Jiahua Dong, Can Qin, Yang Cong
To deal with catastrophic forgetting amongst past learned styles, we devise a dual regularization for the shared-LoRA module to optimize the direction of the model update, regularizing the diffusion model from both the weight and feature aspects.
no code implementations • 1 Mar 2024 • Wenqi Liang, Gan Sun, Qian He, Yu Ren, Jiahua Dong, Yang Cong
It can continually learn observation knowledge of novel 3D scene semantics and robot manipulation skills from skill-shared and skill-specific attributes, respectively.
1 code implementation • 3 Feb 2024 • Lixu Wang, Yang Zhao, Jiahua Dong, Ating Yin, Qinbin Li, Xiao Wang, Dusit Niyato, Qi Zhu
Federated Learning (FL) is a privacy-preserving distributed learning approach that is rapidly developing in an era where privacy protection is increasingly valued.
1 code implementation • NeurIPS 2023 • Jiahua Dong, Yu-Xiong Wang
In addition to the implicit neural radiance field (NeRF) modeling, our key insight is to exploit two sources of regularization that explicitly propagate the editing information across different views, thus ensuring multi-view consistency.
no code implementations • 24 Jan 2024 • Duzhen Zhang, Yahan Yu, Jiahua Dong, Chenxing Li, Dan Su, Chenhui Chu, Dong Yu
In the past year, MultiModal Large Language Models (MM-LLMs) have undergone substantial advancements, augmenting off-the-shelf LLMs to support MM inputs or outputs via cost-effective training strategies.
no code implementations • 21 Dec 2023 • Lixu Wang, Chenxi Liu, Junfeng Guo, Jiahua Dong, Xiao Wang, Heng Huang, Qi Zhu
In a privacy-focused era, Federated Learning (FL) has emerged as a promising machine learning technique.
1 code implementation • 23 Oct 2023 • Duzhen Zhang, Wei Cong, Jiahua Dong, Yahan Yu, Xiuyi Chen, Yonggang Zhang, Zhen Fang
This issue is intensified in CNER due to the consolidation of old entity types from previous steps into the non-entity type at each step, leading to what is known as the semantic shift problem of the non-entity type.
no code implementations • 8 Sep 2023 • Gan Sun, Wenqi Liang, Jiahua Dong, Jun Li, Zhengming Ding, Yang Cong
Text-to-image generative models can produce diverse high-quality images of concepts from a text prompt, and have demonstrated excellent ability in image generation, image translation, etc.
no code implementations • 24 Aug 2023 • Wenqi Liang, Gan Sun, Chenxi Liu, Jiahua Dong, Kangru Wang
Meanwhile, the current class-incremental 3D object detection methods neglect the relationships between the object localization information and category semantic information, and assume that all the knowledge of the old model is reliable.
1 code implementation • 17 Aug 2023 • Duzhen Zhang, Hongliu Li, Wei Cong, Rongtao Xu, Jiahua Dong, Xiuyi Chen
However, INER faces the challenge of catastrophic forgetting specific to incremental learning, further aggravated by background shift (i.e., old and future entity types are labeled as the non-entity type in the current task).
1 code implementation • ICCV 2023 • Jiahua Dong, Wenqi Liang, Yang Cong, Gan Sun
To surmount the above challenges, we develop a novel Heterogeneous Forgetting Compensation (HFC) model, which can resolve heterogeneous forgetting of easy-to-forget and hard-to-forget old categories from both representation and gradient aspects.
no code implementations • 20 Jul 2023 • Wei Cong, Yang Cong, Gan Sun, Yuyang Liu, Jiahua Dong
Continual learning algorithms, which keep the parameters of new tasks close to those of previous tasks, are popular for preventing catastrophic forgetting in sequential task learning settings.
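The core idea described here, keeping new-task parameters close to the previous task's solution, can be illustrated with a simple quadratic anchoring penalty (an EWC-style regularizer with identity importance weights; the function names and values below are our own sketch, not this paper's method):

```python
import numpy as np

def penalized_grad(grad_task, theta, theta_old, lam):
    """Gradient of L_task(theta) + (lam/2) * ||theta - theta_old||^2.
    The quadratic term pulls the new task's parameters back toward the
    previous task's solution theta_old, trading plasticity for stability."""
    return grad_task + lam * (theta - theta_old)

def sgd_step(theta, grad, lr=0.1):
    """One plain gradient-descent update."""
    return theta - lr * grad
```

With `lam = 0` the model converges to the new task's optimum and forgets freely; larger `lam` keeps the solution nearer `theta_old` at the cost of new-task accuracy.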
no code implementations • 20 Jul 2023 • Wei Cong, Yang Cong, Jiahua Dong, Gan Sun, Henghui Ding
To tackle the above challenges, in this paper, we propose a Gradient-Semantic Compensation (GSC) model, which surmounts incremental semantic segmentation from both gradient and semantic perspectives.
no code implementations • 30 Apr 2023 • Anqi Wang, Jiahua Dong, Lik-Hang Lee, Jiachuan Shen, Pan Hui
3D shape generation techniques leveraging deep learning have garnered significant interest from both the computer vision and architectural design communities, promising to enrich the content in the virtual environment.
no code implementations • 14 Apr 2023 • Jiahua Dong, Guohua Cheng, Yue Zhang, Chengtao Peng, Yu Song, Ruofeng Tong, Lanfen Lin, Yen-Wei Chen
Multi-organ segmentation, which identifies and separates different organs in medical images, is a fundamental task in medical image analysis.
1 code implementation • CVPR 2023 • Jiahua Dong, Duzhen Zhang, Yang Cong, Wei Cong, Henghui Ding, Dengxin Dai
Moreover, new clients collecting novel classes may join in the global training of FSS, which further exacerbates catastrophic forgetting.
1 code implementation • 11 Mar 2023 • Jiale Zhang, Yulun Zhang, Jinjin Gu, Jiahua Dong, Linghe Kong, Xiaokang Yang
The channel-wise Transformer block performs direct global context interactions across tokens defined by channel dimension.
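The excerpt does not give the block's internals; as a rough sketch of the idea, attention is computed over tokens defined by the channel dimension, so the attention map is C×C rather than N×N (the identity projections and shapes below are our simplifications, not the paper's design):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def channel_attention(x):
    """Self-attention across the channel dimension.

    x: (N, C) array of N spatial positions with C channels.
    Each of the C channel maps (a length-N vector) is one token, so the
    attention map is C x C -- direct global context interactions between
    channels, with cost that does not grow quadratically in N."""
    tokens = x.T                       # (C, N): one token per channel
    q, k, v = tokens, tokens, tokens   # identity projections for brevity
    attn = softmax(q @ k.T / np.sqrt(q.shape[1]), axis=-1)  # (C, C)
    return (attn @ v).T                # back to (N, C)
```

A real block would add learned query/key/value projections, normalization, and a feed-forward sublayer around this core.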
no code implementations • 20 Feb 2023 • Jiahua Dong, Yang Cong, Gan Sun, Lixu Wang, Lingjuan Lyu, Jun Li, Ender Konukoglu
Moreover, they cannot explore which 3D geometric characteristics are essential to alleviating catastrophic forgetting on old classes of 3D objects.
2 code implementations • 2 Feb 2023 • Jiahua Dong, Hongliu Li, Yang Cong, Gan Sun, Yulun Zhang, Luc van Gool
These issues cause the global model to undergo catastrophic forgetting on old categories when local clients receive new categories consecutively under limited memory for storing old categories.
no code implementations • 26 Oct 2022 • Zhen Fang, Yixuan Li, Jie Lu, Jiahua Dong, Bo Han, Feng Liu
Based on this observation, we next give several necessary and sufficient conditions to characterize the learnability of OOD detection in some practical scenarios.
2 code implementations • 15 Aug 2022 • Yunge Cui, Xieyuanli Chen, Yinlong Zhang, Jiahua Dong, Qingxiao Wu, Feng Zhu
To address this limitation, we present a novel Bag of Words for real-time loop closing in 3D LiDAR SLAM, called BoW3D.
2 code implementations • 13 Jun 2022 • Yunge Cui, Yinlong Zhang, Jiahua Dong, Haibo Sun, Xieyuanli Chen, Feng Zhu
Feature extraction and matching are the basic parts of many robotic vision tasks, such as 2D or 3D object detection, recognition, and registration.
1 code implementation • CVPR 2022 • Jiahua Dong, Lixu Wang, Zhen Fang, Gan Sun, Shichao Xu, Xiao Wang, Qi Zhu
It makes the global model suffer from significant catastrophic forgetting on old classes in real-world scenarios, where local clients often collect new classes continuously and have very limited storage memory to store old classes.
1 code implementation • CVPR 2022 • Chenxin Tao, Honghui Wang, Xizhou Zhu, Jiahua Dong, Shiji Song, Gao Huang, Jifeng Dai
These methods appear to be quite different in the designed loss functions from various motivations.
1 code implementation • NeurIPS 2021 • Jiahua Dong, Zhen Fang, Anjin Liu, Gan Sun, Tongliang Liu
To address these challenges, we develop a novel Confident-Anchor-induced multi-source-free Domain Adaptation (CAiDA) model, which is a pioneer exploration of knowledge adaptation from multiple source domains to the unlabeled target domain without any source data, but with only pre-trained source models.
no code implementations • ICCV 2021 • Ronghan Chen, Yang Cong, Jiahua Dong
Shape correspondence from 3D deformation learning has recently attracted considerable academic interest.
no code implementations • 28 Dec 2020 • Tao Zhang, Yang Cong, Gan Sun, Jiahua Dong, Yuyang Liu, Zhengming Ding
More specifically, we first extract partial visual and tactile features from the partial visual and tactile data, respectively, and encode the extracted features in modality-specific feature subspaces.
no code implementations • 16 Dec 2020 • Jiahua Dong, Yang Cong, Gan Sun, Bingtao Ma, Lichen Wang
Moreover, the performance of advanced approaches degrades dramatically for past learned classes (i.e., catastrophic forgetting), due to the irregular and redundant geometric structures of 3D point cloud data.
no code implementations • 8 Dec 2020 • Jiahua Dong, Yang Cong, Gan Sun, Yunsheng Yang, Xiaowei Xu, Zhengming Ding
Weakly-supervised learning has attracted growing research attention in medical lesion segmentation due to its significant savings in pixel-level annotation cost.
no code implementations • ECCV 2020 • Jiahua Dong, Yang Cong, Gan Sun, Yuyang Liu, Xiaowei Xu
Unsupervised domain adaptation, which requires no annotation process for unlabeled target data, has attracted considerable interest in semantic segmentation.
no code implementations • 27 Jun 2020 • Jiahua Dong, Yang Cong, Gan Sun, Tao Zhang, Xu Tang, Xiaowei Xu
Online metric learning has been widely exploited for large-scale data classification due to the low computational cost.
no code implementations • CVPR 2020 • Jiahua Dong, Yang Cong, Gan Sun, Bineng Zhong, Xiaowei Xu
Unsupervised domain adaptation has attracted growing research attention on semantic segmentation.
no code implementations • 19 Apr 2020 • Gan Sun, Yang Cong, Jiahua Dong, Qiang Wang, Ji Liu
To this end, experimental results on real-world datasets show that the federated multi-task learning model is very sensitive to poisoning attacks, whether the attackers directly poison the target nodes or indirectly poison the related nodes by exploiting the communication protocol.
1 code implementation • ICCV 2019 • Jiahua Dong, Yang Cong, Gan Sun, Dongdong Hou
To better utilize these dependencies, we present a new semantic lesions representation transfer model for weakly-supervised endoscopic lesions segmentation, which can exploit useful knowledge from relevant fully-labeled diseases segmentation task to enhance the performance of target weakly-labeled lesions segmentation task.