1 code implementation • 18 Mar 2025 • Xinliang Zhang, Lei Zhu, Shuang Zeng, Hangzhou He, Ourui Fu, Zhengjian Yao, Zhaoheng Xie, Yanye Lu
Scribble-based weakly supervised semantic segmentation leverages only a few annotated pixels as labels to train a segmentation model, presenting significant potential for reducing the human labor involved in the annotation process.
no code implementations • 10 Mar 2025 • Mojtaba Vaezi, Xinliang Zhang
Non-orthogonal multiple access (NOMA) has gained significant attention as a potential next-generation multiple access technique.
1 code implementation • 9 Jan 2025 • Hangzhou He, Lei Zhu, Xinliang Zhang, Shuang Zeng, Qian Chen, Yanye Lu
Concept Bottleneck Models (CBMs) offer inherent interpretability by initially translating images into human-comprehensible concepts, followed by a linear combination of these concepts for classification.
1 code implementation • 4 Dec 2024 • Yunkai Dang, Min Zhang, Zhengyu Chen, Xinliang Zhang, Zheng Wang, Meijun Sun, Donglin Wang
In this paper, we argue that measuring at such a level may not be effective enough to generalize from base to novel classes when only a few images are available.
no code implementations • 19 Jun 2024 • Qian Chen, Lei Zhu, Hangzhou He, Xinliang Zhang, Shuang Zeng, Qiushi Ren, Yanye Lu
However, incorrect pseudo-labels may corrupt the learned features and introduce a new problem: the better the model is trained on the old task, the worse it performs on new tasks.
1 code implementation • 27 Feb 2024 • Xinliang Zhang, Lei Zhu, Hangzhou He, Lujia Jin, Yanye Lu
In this study, we propose a class-driven scribble promotion network, which utilizes both scribble annotations and pseudo-labels informed by image-level classes and global semantics for supervision.
no code implementations • 23 Oct 2023 • Xinliang Zhang, Mojtaba Vaezi
The proposed structure significantly enhances the performance of the ZIC for both perfect and imperfect CSI.
no code implementations • 21 Sep 2023 • Shuang Zeng, Lei Zhu, Xinliang Zhang, Qian Chen, Hangzhou He, Lujia Jin, Zifeng Tian, Qiushi Ren, Zhaoheng Xie, Yanye Lu
Moreover, we develop a multi-level contrastive learning strategy that integrates correspondences across feature-level, image-level, and pixel-level representations to ensure the encoder and decoder capture comprehensive details from representations of varying scales and granularities during the pre-training phase.
1 code implementation • ICCV 2023 • Chengliang Zhong, Yuhang Zheng, Yupeng Zheng, Hao Zhao, Li Yi, Xiaodong Mu, Ling Wang, Pengfei Li, Guyue Zhou, Chao Yang, Xinliang Zhang, Jian Zhao
To address this issue, the Transporter method was introduced for 2D data, which reconstructs the target frame from the source frame to incorporate both spatial and temporal information.
no code implementations • 9 Aug 2023 • Lei Zhu, Hangzhou He, Xinliang Zhang, Qian Chen, Shuang Zeng, Qiushi Ren, Yanye Lu
Existing methods adopt an online-trained classification branch to provide pseudo annotations for supervising the segmentation branch.
no code implementations • 17 Feb 2022 • Yuhan Yao, Yuhe Zhao, Yanxian Wei, Feng Zhou, Daigao Chen, Yuguang Zhang, Xi Xiao, Ming Li, Jianji Dong, Shaohua Yu, Xinliang Zhang
We demonstrate a fully integrated multipurpose microwave frequency identification system on a silicon-on-insulator platform.
no code implementations • 3 Nov 2021 • Xinliang Zhang, Mojtaba Vaezi, Timothy J. O'Shea
The SVD-embedded DAE largely outperforms theoretical linear precoding in terms of BER.
no code implementations • 6 Jul 2020 • Xinliang Zhang, Mojtaba Vaezi
Numerical results demonstrate that, compared to conventional solutions, the proposed DNN-based precoder reduces on-the-fly computational complexity by more than an order of magnitude while reaching near-optimal performance (99.45% of the averaged optimal solutions).
no code implementations • 17 Sep 2019 • Xinliang Zhang, Mojtaba Vaezi
A novel precoding method based on supervised deep neural networks is introduced for the multiple-input multiple-output Gaussian wiretap channel.