Search Results for author: Bin Tang

Found 4 papers, 3 papers with code

HRTransNet: HRFormer-Driven Two-Modality Salient Object Detection

1 code implementation • 8 Jan 2023 • Bin Tang, Zhengyi Liu, Yacheng Tan, Qian He

To solve the second problem, a dual-direction short connection fusion module is used to optimize the output features of HRFormer, thereby enhancing the detailed representation of objects at the output level.
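The dual-direction short connection idea can be pictured as a top-down pass followed by a bottom-up pass over HRFormer's multi-resolution outputs. The sketch below is only a minimal illustration of that reading; the channel widths, 1x1 fusion convolutions, and bilinear resizing are assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions, not HRTransNet's code) of dual-direction
# short-connection fusion over multi-resolution feature maps.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualDirectionFusion(nn.Module):
    def __init__(self, channels=(64, 128, 256, 512)):
        super().__init__()
        # 1x1 convs that merge each map with its neighbour from each direction
        self.top_down = nn.ModuleList(
            nn.Conv2d(c_low + c_high, c_low, kernel_size=1)
            for c_low, c_high in zip(channels[:-1], channels[1:]))
        self.bottom_up = nn.ModuleList(
            nn.Conv2d(c_high + c_low, c_high, kernel_size=1)
            for c_low, c_high in zip(channels[:-1], channels[1:]))

    def forward(self, feats):
        # feats: list of maps from high resolution (index 0) to low resolution
        td = list(feats)
        # top-down pass: inject coarse semantics into finer maps
        for i in reversed(range(len(feats) - 1)):
            up = F.interpolate(td[i + 1], size=td[i].shape[-2:],
                               mode="bilinear", align_corners=False)
            td[i] = self.top_down[i](torch.cat([td[i], up], dim=1))
        # bottom-up pass: push refined detail back into coarser maps
        bu = list(td)
        for i in range(len(feats) - 1):
            down = F.interpolate(bu[i], size=bu[i + 1].shape[-2:],
                                 mode="bilinear", align_corners=False)
            bu[i + 1] = self.bottom_up[i](torch.cat([bu[i + 1], down], dim=1))
        return bu

# example with HRFormer-like resolutions for a 256x256 input
feats = [torch.randn(1, c, s, s) for c, s in zip((64, 128, 256, 512), (64, 32, 16, 8))]
fused = DualDirectionFusion()(feats)
```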

Object Detection +1

Text Editing as Imitation Game

1 code implementation • 21 Oct 2022 • Ning Shi, Bin Tang, Bo Yuan, Longtao Huang, Yewen Pu, Jie Fu, Zhouhan Lin

Text editing, such as grammatical error correction, arises naturally from imperfect textual data.

Action Generation • Grammatical Error Correction +1

Contrastive Pseudo-supervised Classification for Intra-Pulse Modulation of Radar Emitter Signals Using Data Augmentation

no code implementations • 13 Oct 2022 • HanCong Feng, XinHai Yan, Kaili Jiang, Xinyu Zhao, Bin Tang

The automatic classification of radar waveforms is a fundamental technique in electronic countermeasures (ECM). Recent supervised deep learning-based methods have achieved great success in such classification tasks. However, these methods require enough labeled samples to work properly, and in many circumstances such labels are not available. To tackle this problem, in this paper we propose a three-stage deep radar waveform clustering (DRSC) technique that automatically groups the received signal samples without labels. First, a pretext model is trained in a self-supervised way with the help of several data augmentation techniques to extract class-dependent features. Next, pseudo-supervised contrastive training is applied to further promote the separation between the extracted class-dependent features. Finally, the unsupervised problem is converted into a semi-supervised classification problem via pseudo-label generation.
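As a rough illustration of the pretext stage described above, the sketch below embeds two augmented views of each pulse and pulls them together with an NT-Xent-style contrastive loss. The toy encoder, the specific augmentations, and the temperature are placeholders chosen for the example, not the paper's actual pipeline.

```python
# Minimal sketch (assumptions, not the paper's code) of self-supervised
# contrastive pretext training on augmented radar pulse views.
import torch
import torch.nn as nn
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    """Contrastive loss over a batch of paired embeddings z1[i] <-> z2[i]."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D)
    sim = z @ z.t() / temperature                         # cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))                 # ignore self-pairs
    # the positive for sample i is its other augmented view
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

class PulseEncoder(nn.Module):
    """Toy 1-D CNN encoder for intra-pulse signal segments (placeholder)."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, dim))

    def forward(self, x):
        return self.net(x)

# one pretext-training step on a batch of pulses with two random augmentations
encoder = PulseEncoder()
pulses = torch.randn(16, 1, 1024)                   # 16 pulses, 1024 samples each
view1 = pulses + 0.05 * torch.randn_like(pulses)    # e.g. additive noise
view2 = torch.roll(pulses, shifts=37, dims=-1)      # e.g. circular time shift
loss = nt_xent_loss(encoder(view1), encoder(view2))
loss.backward()
```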

Classification • Data Augmentation +1

TriTransNet: RGB-D Salient Object Detection with a Triplet Transformer Embedding Network

1 code implementation • 9 Aug 2021 • Zhengyi Liu, YuAn Wang, Zhengzheng Tu, Yun Xiao, Bin Tang

In view of the greater contribution of high-level features to performance, we propose a triplet transformer embedding module to enhance them by learning long-range dependencies across layers.
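One way to picture the triplet transformer embedding idea is a single weight-shared transformer encoder applied to each of the high-level feature maps so that long-range dependencies are modelled with the same parameters at every level. The sketch below assumes PyTorch with placeholder channel widths, resolutions, and encoder depth rather than the authors' implementation.

```python
# Minimal sketch (assumptions, not TriTransNet's code) of a weight-shared
# transformer encoder enhancing three high-level feature maps.
import torch
import torch.nn as nn

class TripletTransformerEmbedding(nn.Module):
    def __init__(self, channels=256, depth=2, heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=channels, nhead=heads,
                                           batch_first=True)
        # one encoder whose weights are reused for all three feature levels
        self.shared_encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, feats):
        out = []
        for f in feats:                                  # each f: (B, C, H, W)
            b, c, h, w = f.shape
            tokens = f.flatten(2).transpose(1, 2)        # (B, H*W, C)
            tokens = self.shared_encoder(tokens)         # long-range dependencies
            out.append(tokens.transpose(1, 2).reshape(b, c, h, w))
        return out

# three high-level feature maps at different resolutions
feats = [torch.randn(1, 256, s, s) for s in (16, 8, 4)]
enhanced = TripletTransformerEmbedding()(feats)
```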

RGB-D Salient Object Detection +1
