no code implementations • 4 Dec 2022 • Qihuang Zhong, Liang Ding, Yibing Zhan, Yu Qiao, Yonggang Wen, Li Shen, Juhua Liu, Baosheng Yu, Bo Du, Yixin Chen, Xinbo Gao, Chunyan Miao, Xiaoou Tang, DaCheng Tao
This technical report briefly describes our JDExplore d-team's Vega v2 submission on the SuperGLUE leaderboard.
1 code implementation • 24 Nov 2022 • Yu-Tong Cao, Jingya Wang, Ye Shi, Baosheng Yu, DaCheng Tao
In this paper, we propose a federated active learning paradigm to efficiently learn a global model with a limited annotation budget while protecting data privacy in a decentralized learning setting.
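A minimal sketch of what one round of such a paradigm could look like, assuming entropy-based querying and plain FedAvg aggregation (both generic choices, not necessarily the paper's):

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def entropy_select(model, pool, budget):
    # Score each unlabelled sample by predictive entropy; pick the most uncertain.
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(pool), dim=1)
        scores = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return scores.topk(budget).indices

def fedavg(models):
    # Average client weights into a new global state dict (FedAvg).
    avg = copy.deepcopy(models[0].state_dict())
    for key in avg:
        avg[key] = torch.stack([m.state_dict()[key].float() for m in models]).mean(0)
    return avg

# Toy round: 3 clients, each with a private unlabelled pool.
global_model = nn.Linear(16, 4)
clients = []
for _ in range(3):
    local = copy.deepcopy(global_model)
    pool = torch.randn(100, 16)                      # private unlabelled data
    picked = entropy_select(local, pool, budget=10)  # query labels for these only
    x, y = pool[picked], torch.randint(0, 4, (10,))  # oracle labels (toy)
    opt = torch.optim.SGD(local.parameters(), lr=0.1)
    for _ in range(5):                               # brief local training
        opt.zero_grad()
        F.cross_entropy(local(x), y).backward()
        opt.step()
    clients.append(local)
global_model.load_state_dict(fedavg(clients))        # raw data never leaves clients
```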
no code implementations • 24 Nov 2022 • Yu-Tong Cao, Jingya Wang, Baosheng Yu, DaCheng Tao
To further enhance the active learner with large-scale unlabelled data, we introduce multiple peer students into the active learner, which is trained with a novel learning paradigm comprising In-Class Peer Study on labelled data and Out-of-Class Peer Study on unlabelled data.
no code implementations • 18 Aug 2022 • Yi-Fan Zhang, Jindong Wang, Jian Liang, Zhang Zhang, Baosheng Yu, Liang Wang, DaCheng Tao, Xing Xie
Our bound motivates two strategies for reducing the gap: the first is to ensemble multiple classifiers to enrich the hypothesis space; the second is to use effective gap estimation methods to guide the selection of a better hypothesis for the target domain.
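Purely as an illustration of the two strategies, here is a sketch that ensembles several classifiers and then scores each hypothesis by a simple disagreement-based proxy of the gap; the KL-to-ensemble scoring rule is an assumption, not the paper's estimator:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

classifiers = [nn.Linear(32, 5) for _ in range(4)]
target_x = torch.randn(256, 32)  # unlabeled target-domain features

with torch.no_grad():
    probs = torch.stack([F.softmax(h(target_x), dim=1) for h in classifiers])
    ensemble = probs.mean(0)  # ensembling enriches the hypothesis space
    # Proxy gap: KL divergence of each hypothesis from the ensemble prediction.
    gaps = torch.stack([
        F.kl_div(ensemble.clamp_min(1e-12).log(), p, reduction="batchmean")
        for p in probs
    ])
best = classifiers[int(gaps.argmin())]  # hypothesis closest to the ensemble
```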
1 code implementation • 1 Aug 2022 • Yangyang Shu, Baosheng Yu, HaiMing Xu, Lingqiao Liu
In low-data regimes, a network often struggles to choose the correct regions for recognition and tends to overfit spuriously correlated patterns in the training data.
no code implementations • 20 Jul 2022 • Yaqian Liang, Shanshan Zhao, Baosheng Yu, Jing Zhang, Fazhi He
We first randomly mask some patches of the mesh and feed the corrupted mesh into Mesh Transformers.
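A minimal sketch of this masked-modelling step, assuming the mesh has already been grouped into patch embeddings of shape (batch, patches, dim); the mask ratio, model depth, and reconstruction target are illustrative, not the paper's settings:

```python
import torch
import torch.nn as nn

class MaskedMeshModel(nn.Module):
    def __init__(self, dim=128, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, dim)  # reconstruct the original patch feature

    def forward(self, patches):
        b, n, d = patches.shape
        # Randomly mask some patches and feed the corrupted mesh to the encoder.
        mask = torch.rand(b, n, device=patches.device) < self.mask_ratio
        corrupted = torch.where(mask.unsqueeze(-1),
                                self.mask_token.expand(b, n, d), patches)
        recon = self.head(self.encoder(corrupted))
        # Reconstruction loss only on the masked patches.
        return ((recon - patches) ** 2)[mask].mean()

loss = MaskedMeshModel()(torch.randn(2, 64, 128))
loss.backward()
```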
no code implementations • 14 Jul 2022 • Xia Yuan, Jianping Gou, Baosheng Yu, Jiali Yu, Zhang Yi
Specifically, we design an intra-class compactness constraint on the intermediate representations at different levels to encourage intra-class representations to be closer to each other, so that the learned representation becomes more discriminative. Unlike traditional DDL methods, during the classification stage, our DDLIC performs layer-wise greedy optimization in a similar way to the training stage.
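A sketch of one generic form such an intra-class compactness penalty could take, pulling each sample toward its class mean at every level; the exact constraint in the paper may differ:

```python
import torch

def compactness_loss(feats_per_level, labels):
    # Sum of squared distances to the class centroid, averaged over levels.
    loss = 0.0
    for feats in feats_per_level:            # feats: (batch, dim) at one level
        for c in labels.unique():
            members = feats[labels == c]
            center = members.mean(dim=0, keepdim=True)
            loss = loss + ((members - center) ** 2).sum(dim=1).mean()
    return loss / len(feats_per_level)

labels = torch.randint(0, 3, (32,))
levels = [torch.randn(32, 64, requires_grad=True) for _ in range(3)]
compactness_loss(levels, labels).backward()
```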
1 code implementation • 6 Jul 2022 • Haibo Qiu, Baosheng Yu, DaCheng Tao
However, recent projection-based methods for point cloud semantic segmentation usually adopt a vanilla late fusion strategy for the predictions of different views, failing to explore complementary information from a geometric perspective during representation learning; a minimal sketch of this late-fusion baseline follows below.
Ranked #6 on 3D Semantic Segmentation on SemanticKITTI
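The "vanilla late fusion" baseline the paper improves on can be sketched as follows: each 2D view predicts per-point class logits independently, and the fused prediction is just their average, with no geometric interaction between views (shapes and view names are illustrative):

```python
import torch

num_points, num_classes = 1000, 20
range_view_logits = torch.randn(num_points, num_classes)  # back-projected to points
bev_logits = torch.randn(num_points, num_classes)         # back-projected to points

fused = (range_view_logits + bev_logits) / 2  # plain averaging of view predictions
pred = fused.argmax(dim=1)
```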
1 code implementation • 4 Apr 2022 • Zhi Hou, Baosheng Yu, Chaoyue Wang, Yibing Zhan, DaCheng Tao
Specifically, the proposed module employs a two-stream pipeline during training, i.e., one stream with and one without the BatchFormerV2 module, so that the BatchFormerV2 stream can be removed at test time.
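A minimal sketch of this two-stream idea as described, assuming a shared classifier and a transformer applied across the batch dimension; all dimensions and depths are illustrative:

```python
import torch
import torch.nn as nn

class BatchFormerV2Block(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, nhead=4)  # expects (seq, batch, dim)
        self.former = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, x):                               # x: (batch, dim)
        # Treat the batch as the sequence axis so samples attend to each other.
        return self.former(x.unsqueeze(1)).squeeze(1)

backbone = nn.Linear(64, 256)
batchformer = BatchFormerV2Block()
classifier = nn.Linear(256, 10)                 # shared by both streams

x = torch.randn(32, 64)
feats = backbone(x)
logits_plain = classifier(feats)                # stream kept at test time
logits_bf = classifier(batchformer(feats))      # stream removed for testing
```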
2 code implementations • 27 Mar 2022 • Zhi Hou, Baosheng Yu, DaCheng Tao
Therefore, the proposed method enables the learning on both known and unknown HOI concepts.
Affordance Recognition
Human-Object Interaction Concept Discovery
no code implementations • 22 Mar 2022 • Guangqian Yang, Yibing Zhan, Jinlong Li, Baosheng Yu, Liu Liu, Fengxiang He
In this paper, we analyze adversarial attacks on graphs from the perspective of feature smoothness, which further leads to an efficient new adversarial defense algorithm for GNNs.
1 code implementation • CVPR 2022 • Liyao Tang, Yibing Zhan, Zhe Chen, Baosheng Yu, DaCheng Tao
Point cloud segmentation is fundamental in understanding 3D environments.
Ranked #4 on Semantic Segmentation on S3DIS Area5
1 code implementation • CVPR 2022 • Lixiang Ru, Yibing Zhan, Baosheng Yu, Bo Du
Motivated by the inherent consistency between the self-attention in Transformers and semantic affinity, we propose an Affinity from Attention (AFA) module to learn semantic affinity from the multi-head self-attention (MHSA) in Transformers (sketched below).
Ranked #13 on Weakly-Supervised Semantic Segmentation on COCO 2014 val
Weakly-Supervised Semantic Segmentation
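In the spirit of the AFA module above, a hedged sketch of deriving a semantic affinity matrix from MHSA: fuse the heads and symmetrize, since affinity should be undirected. The refinement and supervision details of AFA are omitted:

```python
import torch

batch, heads, tokens = 2, 8, 196
attn = torch.softmax(torch.randn(batch, heads, tokens, tokens), dim=-1)

affinity = attn.mean(dim=1)                           # fuse heads: (batch, tokens, tokens)
affinity = (affinity + affinity.transpose(1, 2)) / 2  # symmetrize pairwise affinity
```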
1 code implementation • CVPR 2022 • Zhi Hou, Baosheng Yu, DaCheng Tao
We perform extensive experiments on more than ten datasets, and the proposed method achieves significant improvements on different data-scarcity applications without bells and whistles, including long-tailed recognition, compositional zero-shot learning, domain generalization, and contrastive learning.
Ranked #10 on Domain Generalization on PACS
no code implementations • 15 Feb 2022 • Yibing Zhan, Zhi Chen, Jun Yu, Baosheng Yu, DaCheng Tao, Yong Luo
As a result, HLN significantly improves the performance of scene graph generation by integrating and reasoning from object interactions, relationship interactions, and transitive inference of hyper-relationships.
1 code implementation • 18 Jan 2022 • Chao Chen, Yibing Zhan, Baosheng Yu, Liu Liu, Yong Luo, Bo Du
To address this problem, we propose Resistance Training using Prior Bias (RTPB) for scene graph generation.
no code implementations • 22 Dec 2021 • Weigang Lu, Yibing Zhan, Binbin Lin, Ziyu Guan, Liu Liu, Baosheng Yu, Wei Zhao, Yaming Yang, DaCheng Tao
In this paper, we conduct theoretical and experimental analyses to explore the fundamental causes of performance degradation in deep GCNs: over-smoothing and gradient vanishing have a mutually reinforcing effect that causes performance to deteriorate more quickly as GCNs deepen.
no code implementations • NeurIPS 2021 • Sheng Wan, Yibing Zhan, Liu Liu, Baosheng Yu, Shirui Pan, Chen Gong
Essentially, our CGPN can enhance the learning performance of GNNs under extremely limited labels by contrastively propagating the limited labels to the entire graph.
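To make the propagation half concrete, here is a sketch of spreading extremely limited labels over a whole graph with a standard label-propagation iteration; the contrastive component of CGPN is omitted, and the graph, damping factor, and iteration count are toy assumptions:

```python
import torch

# A 6-node chain graph with only the two end nodes labelled.
n = 6
adj = torch.zeros(n, n)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:
    adj[i, j] = adj[j, i] = 1.0
deg = adj.sum(1)
norm_adj = adj / deg.sqrt().outer(deg.sqrt())  # symmetric normalization

labels = torch.zeros(n, 2)
labels[0, 0] = 1.0
labels[5, 1] = 1.0
y = labels.clone()
for _ in range(20):                      # propagate, resetting toward the seeds
    y = 0.9 * norm_adj @ y + 0.1 * labels
pred = y.argmax(1)                       # every node now receives a label
```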
no code implementations • 29 Sep 2021 • Chuang Liu, Yibing Zhan, Baosheng Yu, Liu Liu, Bo Du, Wenbin Hu, Tongliang Liu
Graph pooling is essential in learning effective graph-level representations.
1 code implementation • ICCV 2021 • Haibo Qiu, Baosheng Yu, Dihong Gong, Zhifeng Li, Wei Liu, DaCheng Tao
We then analyze the underlying causes of the performance gap, e.g., the limited intra-class variations and the domain gap between synthetic and real face images.
1 code implementation • ICCV 2021 • Wenyuan Xue, Baosheng Yu, Wen Wang, DaCheng Tao, Qingyong Li
A table, which arranges data in rows and columns, is a highly effective data structure that has been widely used in business and scientific research.
no code implementations • CVPR 2021 • Cheng Wen, Baosheng Yu, DaCheng Tao
The proposed dual-generator framework is thus able to progressively learn effective point embeddings for accurate point cloud generation.
2 code implementations • CVPR 2021 • Zhi Hou, Baosheng Yu, Yu Qiao, Xiaojiang Peng, DaCheng Tao
The proposed method can thus be used to 1) improve the performance of HOI detection, especially for the HOIs with unseen objects; and 2) infer the affordances of novel objects.
Ranked #2 on Affordance Recognition on HICO-DET (Unknown Concepts)
1 code implementation • CVPR 2021 • Zhi Hou, Baosheng Yu, Yu Qiao, Xiaojiang Peng, DaCheng Tao
With the proposed object fabricator, we are able to generate large-scale HOI samples for rare and unseen categories to alleviate the open long-tailed issues in HOI detection.
Ranked #4 on Affordance Recognition on HICO-DET
no code implementations • 21 Jan 2021 • Liyuan Sun, Jianping Gou, Baosheng Yu, Lan Du, DaCheng Tao
However, most of the existing knowledge distillation methods consider only one type of knowledge learned from either instance features or instance relations via a specific distillation strategy in teacher-student learning.
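To make the contrast between knowledge types concrete, a sketch of two standard distillation losses: response-based knowledge matches softened class distributions, while relation-based knowledge matches pairwise distances between instances. The temperature and loss weight are illustrative:

```python
import torch
import torch.nn.functional as F

def kd_losses(student_logits, teacher_logits, student_feat, teacher_feat, T=4.0):
    # Response knowledge: KL between softened student and teacher distributions.
    response = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * T * T
    # Relation knowledge: match pairwise distance structure between instances.
    relation = F.mse_loss(torch.cdist(student_feat, student_feat),
                          torch.cdist(teacher_feat, teacher_feat))
    return response, relation

s_logits, t_logits = torch.randn(16, 10), torch.randn(16, 10)
s_feat, t_feat = torch.randn(16, 64), torch.randn(16, 64)
response, relation = kd_losses(s_logits, t_logits, s_feat, t_feat)
loss = response + 0.5 * relation   # combining both knowledge types
```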
no code implementations • ICCV 2021 • Ziye Chen, Yibing Zhan, Baosheng Yu, Mingming Gong, Bo Du
Despite their efficiency, current graph-based predictors treat all operations equally, resulting in biased topological knowledge of cell architectures.
2 code implementations • 1 Sep 2020 • Baosheng Yu, DaCheng Tao
Previous methods to overcome the sub-pixel localization problem usually rely on high-resolution heatmaps.
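To illustrate the sub-pixel localization problem on a low-resolution heatmap: a plain argmax quantizes the keypoint to integer coordinates, and a classic baseline fix shifts the estimate a quarter pixel toward the larger neighbour. This is the common heuristic, not the paper's encoding-decoding method:

```python
import torch

def decode(heatmap):
    h, w = heatmap.shape
    idx = heatmap.flatten().argmax()
    y, x = (idx // w).item(), (idx % w).item()   # integer-grid peak
    fx, fy = float(x), float(y)
    # Quarter-pixel shift toward the larger neighbour on each axis.
    if 0 < x < w - 1 and heatmap[y, x + 1] != heatmap[y, x - 1]:
        fx += 0.25 if heatmap[y, x + 1] > heatmap[y, x - 1] else -0.25
    if 0 < y < h - 1 and heatmap[y + 1, x] != heatmap[y - 1, x]:
        fy += 0.25 if heatmap[y + 1, x] > heatmap[y - 1, x] else -0.25
    return fx, fy

heatmap = torch.zeros(16, 16)
heatmap[7, 5], heatmap[7, 6] = 0.9, 0.6   # true peak lies between pixels
print(decode(heatmap))                    # (5.25, 7.0): shifted toward x=6
```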
no code implementations • 9 Jun 2020 • Jianping Gou, Baosheng Yu, Stephen John Maybank, DaCheng Tao
To this end, a variety of model compression and acceleration techniques have been developed.
1 code implementation • 13 Nov 2019 • Yu Cao, Meng Fang, Baosheng Yu, Joey Tianyi Zhou
On the other hand, it further reduces domain distribution discrepancy through conditional adversarial learning across domains.
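A sketch of conditional adversarial learning across domains: a domain discriminator sees features conditioned on the classifier's predictions, and a gradient-reversal layer pushes the feature extractor toward domain-invariant representations. The multilinear conditioning here is CDAN-style and assumed, not necessarily the paper's exact formulation:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, grad):
        return -grad  # flip gradients flowing back to the feature extractor

feat = torch.randn(8, 32, requires_grad=True)    # features from either domain
probs = torch.softmax(torch.randn(8, 4), dim=1)  # classifier predictions
# Condition features on predictions via an outer product: (8, 4*32).
cond = torch.bmm(probs.unsqueeze(2), feat.unsqueeze(1)).flatten(1)

discriminator = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
domain = torch.cat([torch.zeros(4, 1), torch.ones(4, 1)])  # 0=source, 1=target
loss = nn.BCEWithLogitsLoss()(discriminator(GradReverse.apply(cond)), domain)
loss.backward()  # discriminator learns domains; features receive reversed gradients
```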
1 code implementation • 11 Nov 2019 • Yang Liu, Fanyou Wu, Baosheng Yu, Zhiyuan Liu, Jieping Ye
How to build an effective large-scale traffic state prediction system is a challenging but highly valuable problem.
no code implementations • ICCV 2019 • Baosheng Yu, Dacheng Tao
Deep metric learning, in which the loss function plays a key role, has proven to be extremely useful in visual recognition tasks.
1 code implementation • ECCV 2018 • Baosheng Yu, Tongliang Liu, Mingming Gong, Changxing Ding, DaCheng Tao
Considering that the number of triplets grows cubically with the size of training data, triplet mining is thus indispensable for efficiently training with triplet loss.
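For context, a sketch of standard online semi-hard triplet mining within a batch, the usual way to avoid enumerating the cubically many triplets; the margin and shapes are illustrative, and the paper's own mining strategy may differ:

```python
import torch

def semi_hard_triplets(emb, labels, margin=0.2):
    dist = torch.cdist(emb, emb)                 # (batch, batch) pairwise distances
    triplets = []
    for a in range(len(labels)):
        pos = (labels == labels[a]).nonzero().flatten()
        neg = (labels != labels[a]).nonzero().flatten()
        for p in pos:
            if p == a:
                continue
            # Semi-hard: negatives farther than the positive but within the margin.
            mask = (dist[a, neg] > dist[a, p]) & (dist[a, neg] < dist[a, p] + margin)
            if mask.any():
                n = neg[mask][dist[a, neg[mask]].argmin()]  # hardest such negative
                triplets.append((a, p.item(), n.item()))
    return triplets

emb = torch.nn.functional.normalize(torch.randn(16, 8), dim=1)
labels = torch.randint(0, 4, (16,))
print(len(semi_hard_triplets(emb, labels)))
```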
no code implementations • 9 May 2018 • Baosheng Yu, DaCheng Tao
Face detection is essential to facial analysis tasks such as facial reenactment and face recognition.