no code implementations • 2 Apr 2024 • Lin Li, Jianping Gou, Baosheng Yu, Lan Du, Zhang Yi and Dacheng Tao
Federated Learning (FL) seeks to train a model collaboratively without sharing private training data from individual clients.
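The collaborative-training-without-data-sharing setting can be illustrated with a minimal FedAvg-style sketch. FedAvg is a standard FL baseline used here purely for illustration; the function names, learning rate, and toy linear-regression data are assumptions, not this paper's method:

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    """One gradient step of least-squares regression on a client's private data."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fedavg_round(global_weights, client_datasets):
    """One FedAvg round: clients train locally, the server averages the weights.

    Only model weights leave each client; the raw training data never does."""
    client_weights = [local_update(global_weights.copy(), d) for d in client_datasets]
    sizes = np.array([len(d[1]) for d in client_datasets], dtype=float)
    # Weighted average of client models, proportional to local dataset size.
    return np.average(client_weights, axis=0, weights=sizes)

# Toy usage: two clients holding private linear-regression data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 100):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))
w = np.zeros(2)
for _ in range(200):
    w = fedavg_round(w, clients)
```

After enough rounds the averaged model recovers the shared underlying weights even though the server never sees either client's data.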
no code implementations • 20 Feb 2024 • Zhiyao Ren, Yibing Zhan, Baosheng Yu, Liang Ding, DaCheng Tao
The copilot framework, which aims to enhance and tailor large language models (LLMs) for specific complex tasks without requiring fine-tuning, is gaining increasing attention from the community.
1 code implementation • 11 Oct 2023 • Haibo Qiu, Baosheng Yu, Yixin Chen, DaCheng Tao
Significant progress has been made recently in point cloud segmentation utilizing an encoder-decoder framework, which initially encodes point clouds into low-resolution representations and subsequently decodes high-resolution predictions.
Ranked #5 on Semantic Segmentation on ScanNet
no code implementations • 15 Aug 2023 • Wenyuan Xue, Dapeng Chen, Baosheng Yu, Yifei Chen, Sai Zhou, Wei Peng
Visual chart recognition systems are gaining increasing attention due to the growing demand for automatically identifying table headers and values from chart images.
1 code implementation • 22 Jul 2023 • Cheng Wen, Baosheng Yu, Rao Fu, DaCheng Tao
A generative model for high-fidelity point clouds is of great importance in synthesizing 3D environments for applications such as autonomous driving and robotics.
no code implementations • 13 Jul 2023 • Haoran Wang, Qinghua Cheng, Baosheng Yu, Yibing Zhan, Dapeng Tao, Liang Ding, Haibin Ling
We evaluated our method on three popular egocentric action recognition datasets, Something-Something V2, H2O, and EPIC-KITCHENS-100, and the experimental results demonstrate the effectiveness of the proposed method for handling data scarcity problems, including long-tailed and few-shot egocentric action recognition.
1 code implementation • 22 Jun 2023 • Chuang Liu, Yibing Zhan, Baosheng Yu, Liu Liu, Bo Du, Wenbin Hu, Tongliang Liu
A pooling operation is essential for effective graph-level representation learning, where node drop pooling has become a mainstream graph pooling technique.
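The node drop pooling family (e.g., gPool/SAGPool-style top-k selection) can be sketched minimally as follows; the scoring vector is stubbed as fixed values, whereas in practice it would be produced by a learnable layer:

```python
import numpy as np

def topk_node_drop_pool(X, A, scores, ratio=0.5):
    """Minimal top-k node drop pooling.

    Keeps the ceil(ratio * N) highest-scoring nodes, gates their features
    by the score, and slices the adjacency to the induced subgraph."""
    k = max(1, int(np.ceil(ratio * X.shape[0])))
    idx = np.argsort(-scores)[:k]          # indices of kept nodes
    X_pool = X[idx] * scores[idx, None]    # gate features by node score
    A_pool = A[np.ix_(idx, idx)]           # induced subgraph adjacency
    return X_pool, A_pool, idx

# Toy graph with 4 fully connected nodes and stubbed node scores.
X = np.arange(8.0).reshape(4, 2)
A = np.ones((4, 4)) - np.eye(4)
scores = np.array([0.9, 0.1, 0.8, 0.2])
X_pool, A_pool, idx = topk_node_drop_pool(X, A, scores, ratio=0.5)
```

Gating the kept features by their scores keeps the selection step differentiable with respect to the scoring layer, which is the usual trick that makes top-k pooling trainable end to end.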
1 code implementation • 2 Jun 2023 • Haibo Qiu, Baosheng Yu, DaCheng Tao
In this paper, we propose a new transformer network equipped with a collect-and-distribute mechanism to communicate short- and long-range contexts of point clouds, which we refer to as CDFormer.
no code implementations • 27 Apr 2023 • Zhi Hou, Baosheng Yu, DaCheng Tao
Human-object interactions (HOIs) are crucial for human-centric scene understanding applications such as human-centric visual generation, AR/VR, and robotics.
no code implementations • 19 Feb 2023 • Weigang Lu, Ziyu Guan, Wei Zhao, Yaming Yang, Yuanhai Lv, Lining Xing, Baosheng Yu, DaCheng Tao
Pseudo Labeling is a technique used to improve the performance of semi-supervised Graph Neural Networks (GNNs) by generating additional pseudo-labels based on confident predictions.
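The confidence-thresholded pseudo-labeling step described above can be sketched as follows; the threshold value and toy probabilities are illustrative assumptions:

```python
import numpy as np

def pseudo_label(probs, threshold=0.9):
    """Turn confident predictions on unlabelled nodes into pseudo-labels.

    Returns (indices, labels) for predictions whose maximum class
    probability exceeds the threshold; the rest remain unlabelled."""
    conf = probs.max(axis=1)
    mask = conf > threshold
    return np.where(mask)[0], probs[mask].argmax(axis=1)

# Predicted class distributions for 4 unlabelled nodes.
probs = np.array([
    [0.95, 0.05],   # confident  -> pseudo-label 0
    [0.60, 0.40],   # uncertain  -> skipped
    [0.08, 0.92],   # confident  -> pseudo-label 1
    [0.50, 0.50],   # uncertain  -> skipped
])
idx, labels = pseudo_label(probs, threshold=0.9)
```

The selected (index, label) pairs are then added to the supervised loss for the next training round, which is where confirmation bias can creep in if the threshold is too loose.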
no code implementations • 10 Feb 2023 • Cheng Wen, Jianzhi Long, Baosheng Yu, DaCheng Tao
In this paper, we introduce a new method, PointWavelet, to explore local graphs in the spectral domain via a learnable graph wavelet transform.
no code implementations • CVPR 2023 • Cheng Wen, Baosheng Yu, DaCheng Tao
In this paper, we introduce a new skeleton-aware learning-to-sample method by learning object skeletons as the prior knowledge to preserve the object geometry and topology information during sampling.
no code implementations • 4 Dec 2022 • Qihuang Zhong, Liang Ding, Yibing Zhan, Yu Qiao, Yonggang Wen, Li Shen, Juhua Liu, Baosheng Yu, Bo Du, Yixin Chen, Xinbo Gao, Chunyan Miao, Xiaoou Tang, DaCheng Tao
This technical report briefly describes our JDExplore d-team's Vega v2 submission on the SuperGLUE leaderboard.
Ranked #1 on Common Sense Reasoning on ReCoRD
no code implementations • 24 Nov 2022 • Yu-Tong Cao, Jingya Wang, Baosheng Yu, DaCheng Tao
To further enhance the active learner via large-scale unlabelled data, we introduce multiple peer students into the active learner, which is trained with a novel learning paradigm, including In-Class Peer Study on labelled data and Out-of-Class Peer Study on unlabelled data.
2 code implementations • ICCV 2023 • Yu-Tong Cao, Ye Shi, Baosheng Yu, Jingya Wang, DaCheng Tao
In this paper, we propose a federated active learning paradigm to efficiently learn a global model with a limited annotation budget while preserving data privacy in a decentralized manner.
1 code implementation • 18 Aug 2022 • Yi-Fan Zhang, Jindong Wang, Jian Liang, Zhang Zhang, Baosheng Yu, Liang Wang, DaCheng Tao, Xing Xie
Our bound motivates two strategies to reduce the gap: the first is ensembling multiple classifiers to enrich the hypothesis space; the second is an effective gap estimation method that guides the selection of a better hypothesis for the target.
1 code implementation • 1 Aug 2022 • Yangyang Shu, Baosheng Yu, HaiMing Xu, Lingqiao Liu
In low-data regimes, a network often struggles to choose the correct regions for recognition and tends to overfit spuriously correlated patterns from the training data.
no code implementations • 20 Jul 2022 • Yaqian Liang, Shanshan Zhao, Baosheng Yu, Jing Zhang, Fazhi He
We first randomly mask some patches of the mesh and feed the corrupted mesh into Mesh Transformers.
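The random patch-masking step can be sketched generically as below; using a zero vector as the mask token is a simplification of the learnable [MASK] embedding such methods typically use, and the patch tensor here is a toy stand-in for real mesh patch embeddings:

```python
import numpy as np

def random_mask_patches(patches, mask_ratio=0.5, rng=None):
    """Randomly mask a fraction of patch embeddings for masked pretraining.

    Masked positions are replaced by a shared mask token (a zero vector in
    this sketch); the boolean mask records which positions were corrupted."""
    rng = rng or np.random.default_rng(0)
    n = patches.shape[0]
    n_mask = int(mask_ratio * n)
    masked_idx = rng.choice(n, size=n_mask, replace=False)
    corrupted = patches.copy()
    corrupted[masked_idx] = 0.0   # stand-in for a learnable mask token
    mask = np.zeros(n, dtype=bool)
    mask[masked_idx] = True
    return corrupted, mask

# 8 toy patch embeddings of dimension 4; mask a quarter of them.
patches = np.ones((8, 4))
corrupted, mask = random_mask_patches(patches, mask_ratio=0.25)
```

The transformer then receives `corrupted` and is trained to reconstruct the original content at the masked positions, so the loss is usually computed only where `mask` is true.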
no code implementations • 14 Jul 2022 • Xia Yuan, Jianping Gou, Baosheng Yu, Jiali Yu, Zhang Yi
Specifically, we design an intra-class compactness constraint on the intermediate representations at different levels to encourage intra-class representations to be closer to each other, so that the learned representation eventually becomes more discriminative. Unlike traditional DDL methods, during the classification stage our DDLIC performs a layer-wise greedy optimization in a similar way to the training stage.
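An intra-class compactness term of this general flavor can be sketched as a center-loss-style penalty; this is an illustrative simplification, not the paper's exact multi-level constraint:

```python
import numpy as np

def intra_class_compactness(features, labels):
    """Mean squared deviation of each representation from its class centroid.

    Minimizing this pulls same-class representations together, making the
    overall representation more discriminative."""
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    loss = 0.0
    for c in np.unique(labels):
        F = features[labels == c]
        loss += ((F - F.mean(axis=0)) ** 2).sum()  # within-class scatter
    return loss / len(features)

# Class 0 is spread out; class 1 is already perfectly compact.
feats = np.array([[0.0, 0.0], [2.0, 0.0],
                  [5.0, 5.0], [5.0, 5.0]])
loss = intra_class_compactness(feats, [0, 0, 1, 1])
```

In training, such a term is typically added to the classification loss with a small weight, applied to the intermediate representations at each level.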
1 code implementation • 6 Jul 2022 • Haibo Qiu, Baosheng Yu, DaCheng Tao
However, recent projection-based methods for point cloud semantic segmentation usually utilize a vanilla late fusion strategy for the predictions of different views, failing to explore the complementary information from a geometric perspective during the representation learning.
Ranked #1 on Robust 3D Semantic Segmentation on nuScenes-C
1 code implementation • 4 Apr 2022 • Zhi Hou, Baosheng Yu, Chaoyue Wang, Yibing Zhan, DaCheng Tao
Specifically, the proposed module employs a two-stream pipeline during training, i.e., one stream with and one without a BatchFormerV2 module, where the BatchFormerV2 stream can be removed for testing.
2 code implementations • 27 Mar 2022 • Zhi Hou, Baosheng Yu, DaCheng Tao
Therefore, the proposed method enables the learning on both known and unknown HOI concepts.
Affordance Recognition • Human-Object Interaction Concept Discovery +1
no code implementations • 22 Mar 2022 • Guangqian Yang, Yibing Zhan, Jinlong Li, Baosheng Yu, Liu Liu, Fengxiang He
In this paper, we analyze the adversarial attack on graphs from the perspective of feature smoothness which further contributes to an efficient new adversarial defensive algorithm for GNNs.
1 code implementation • CVPR 2022 • Liyao Tang, Yibing Zhan, Zhe Chen, Baosheng Yu, DaCheng Tao
Point cloud segmentation is fundamental in understanding 3D environments.
Ranked #17 on Semantic Segmentation on S3DIS
1 code implementation • CVPR 2022 • Lixiang Ru, Yibing Zhan, Baosheng Yu, Bo Du
Motivated by the inherent consistency between the self-attention in Transformers and the semantic affinity, we propose an Affinity from Attention (AFA) module to learn semantic affinity from the multi-head self-attention (MHSA) in Transformers.
Ranked #28 on Weakly-Supervised Semantic Segmentation on COCO 2014 val
Weakly-Supervised Semantic Segmentation
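The idea of reading semantic affinity off multi-head self-attention can be sketched minimally; head-averaging and symmetrization here are illustrative simplifications, not the paper's exact AFA module:

```python
import numpy as np

def affinity_from_attention(attn):
    """Derive a symmetric pairwise affinity matrix from multi-head self-attention.

    attn: array of shape (heads, N, N) holding per-head attention maps.
    Averaging the heads and symmetrizing yields a single affinity estimate,
    following the intuition that attention between tokens encodes their
    semantic affinity."""
    S = attn.mean(axis=0)      # fuse heads into one N x N map
    return (S + S.T) / 2.0     # affinity should be symmetric; attention is not

# Toy attention maps: 4 heads over 6 tokens.
rng = np.random.default_rng(0)
attn = rng.random((4, 6, 6))
aff = affinity_from_attention(attn)
```

In a weakly supervised pipeline, such an affinity matrix is then used to propagate and refine coarse class activation maps into sharper pseudo-labels.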
1 code implementation • CVPR 2022 • Zhi Hou, Baosheng Yu, DaCheng Tao
We perform extensive experiments on over ten datasets and the proposed method achieves significant improvements on different data scarcity applications without any bells and whistles, including the tasks of long-tailed recognition, compositional zero-shot learning, domain generalization, and contrastive learning.
Ranked #17 on Long-tail Learning on iNaturalist 2018
no code implementations • 15 Feb 2022 • Yibing Zhan, Zhi Chen, Jun Yu, Baosheng Yu, DaCheng Tao, Yong Luo
As a result, HLN significantly improves the performance of scene graph generation by integrating and reasoning from object interactions, relationship interactions, and transitive inference of hyper-relationships.
1 code implementation • 18 Jan 2022 • Chao Chen, Yibing Zhan, Baosheng Yu, Liu Liu, Yong Luo, Bo Du
To address this problem, we propose Resistance Training using Prior Bias (RTPB) for the scene graph generation.
1 code implementation • 22 Dec 2021 • Weigang Lu, Yibing Zhan, Binbin Lin, Ziyu Guan, Liu Liu, Baosheng Yu, Wei Zhao, Yaming Yang, DaCheng Tao
In this paper, we conduct theoretical and experimental analysis to explore the fundamental causes of performance degradation in deep GCNs: over-smoothing and gradient vanishing have a mutually reinforcing effect that causes the performance to deteriorate more quickly in deep GCNs.
no code implementations • NeurIPS 2021 • Sheng Wan, Yibing Zhan, Liu Liu, Baosheng Yu, Shirui Pan, Chen Gong
Essentially, our CGPN can enhance the learning performance of GNNs under extremely limited labels by contrastively propagating the limited labels to the entire graph.
1 code implementation • ICCV 2021 • Haibo Qiu, Baosheng Yu, Dihong Gong, Zhifeng Li, Wei Liu, DaCheng Tao
We then analyze the underlying causes of the performance gap, e.g., poor intra-class variations and the domain gap between synthetic and real face images.
1 code implementation • ICCV 2021 • Wenyuan Xue, Baosheng Yu, Wen Wang, DaCheng Tao, Qingyong Li
A table, arranging data in rows and columns, is a very effective data structure that has been widely used in business and scientific research.
no code implementations • CVPR 2021 • Cheng Wen, Baosheng Yu, DaCheng Tao
The proposed dual-generators framework thus is able to progressively learn effective point embeddings for accurate point cloud generation.
2 code implementations • CVPR 2021 • Zhi Hou, Baosheng Yu, Yu Qiao, Xiaojiang Peng, DaCheng Tao
The proposed method can thus be used to 1) improve the performance of HOI detection, especially for the HOIs with unseen objects; and 2) infer the affordances of novel objects.
Ranked #2 on Affordance Recognition on HICO-DET(Unknown Concepts)
1 code implementation • CVPR 2021 • Zhi Hou, Baosheng Yu, Yu Qiao, Xiaojiang Peng, DaCheng Tao
With the proposed object fabricator, we are able to generate large-scale HOI samples for rare and unseen categories to alleviate the open long-tailed issues in HOI detection.
Ranked #4 on Affordance Recognition on HICO-DET
no code implementations • 21 Jan 2021 • Liyuan Sun, Jianping Gou, Baosheng Yu, Lan Du, DaCheng Tao
However, most of the existing knowledge distillation methods consider only one type of knowledge learned from either instance features or instance relations via a specific distillation strategy in teacher-student learning.
no code implementations • ICCV 2021 • Ziye Chen, Yibing Zhan, Baosheng Yu, Mingming Gong, Bo Du
Despite their efficiency, current graph-based predictors treat all operations equally, resulting in biased topological knowledge of cell architectures.
2 code implementations • 1 Sep 2020 • Baosheng Yu, DaCheng Tao
Previous methods to overcome the sub-pixel localization problem usually rely on high-resolution heatmaps.
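One common way to obtain sub-pixel locations from a heatmap is a soft-argmax: instead of the integer peak, take the softmax-weighted average of pixel coordinates. This is a generic technique for context, not necessarily this paper's approach; the temperature and toy heatmap are assumptions:

```python
import numpy as np

def soft_argmax_2d(heatmap, beta=10.0):
    """Sub-pixel keypoint localization from a (possibly low-resolution) heatmap.

    Returns the softmax-weighted average of (x, y) pixel coordinates, which
    can fall between integer pixel positions."""
    h, w = heatmap.shape
    probs = np.exp(beta * heatmap)
    probs /= probs.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    return float((probs * xs).sum()), float((probs * ys).sum())

# Toy heatmap whose true peak sits between pixels (2, 2) and (2, 3).
hm = np.zeros((5, 5))
hm[2, 2] = 1.0
hm[2, 3] = 1.0
x, y = soft_argmax_2d(hm)
```

A plain integer argmax would snap to one of the two pixels; the weighted average lands near x = 2.5, recovering the sub-pixel location without enlarging the heatmap.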
no code implementations • 9 Jun 2020 • Jianping Gou, Baosheng Yu, Stephen John Maybank, DaCheng Tao
To this end, a variety of model compression and acceleration techniques have been developed.
1 code implementation • 13 Nov 2019 • Yu Cao, Meng Fang, Baosheng Yu, Joey Tianyi Zhou
On the other hand, it further reduces domain distribution discrepancy through conditional adversarial learning across domains.
1 code implementation • 11 Nov 2019 • Yang Liu, Fanyou Wu, Baosheng Yu, Zhiyuan Liu, Jieping Ye
How to build an effective large-scale traffic state prediction system is a challenging but highly valuable problem.
no code implementations • ICCV 2019 • Baosheng Yu, Dacheng Tao
Deep metric learning, in which the loss function plays a key role, has proven to be extremely useful in visual recognition tasks.
1 code implementation • ECCV 2018 • Baosheng Yu, Tongliang Liu, Mingming Gong, Changxing Ding, DaCheng Tao
Since the number of triplets grows cubically with the size of the training data, triplet mining is indispensable for efficiently training with triplet loss.
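The cubic growth and the standard batch-hard mining remedy can both be sketched as below; batch-hard mining is a common strategy from the triplet-loss literature, used here for illustration rather than as this paper's exact hard-aware scheme:

```python
import numpy as np

def count_triplets(labels):
    """Number of valid (anchor, positive, negative) triplets; this grows
    roughly cubically with the size of the training set."""
    labels = np.asarray(labels)
    total = 0
    for a in range(len(labels)):
        n_pos = int(np.sum(labels == labels[a])) - 1  # exclude the anchor itself
        n_neg = int(np.sum(labels != labels[a]))
        total += n_pos * n_neg
    return total

def mine_hardest_triplet(dist, labels, anchor):
    """Batch-hard mining: farthest same-class sample (hardest positive) and
    closest other-class sample (hardest negative) for one anchor."""
    labels = np.asarray(labels)
    same = labels == labels[anchor]
    same[anchor] = False
    diff = labels != labels[anchor]
    pos = int(np.argmax(np.where(same, dist[anchor], -np.inf)))
    neg = int(np.argmin(np.where(diff, dist[anchor], np.inf)))
    return pos, neg

# Toy 1-D embeddings: three samples of class 0, two of class 1.
emb = np.array([0.0, 0.1, 0.3, 1.0, 1.2])
labels = [0, 0, 0, 1, 1]
dist = np.abs(emb[:, None] - emb[None, :])
pos, neg = mine_hardest_triplet(dist, labels, anchor=0)
```

Even this 5-sample set already has 18 valid triplets; mining restricts the loss to the few informative ones per anchor instead of enumerating them all.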
no code implementations • 9 May 2018 • Baosheng Yu, DaCheng Tao
Face detection is essential to facial analysis tasks such as facial reenactment and face recognition.