1 code implementation • 21 Nov 2024 • Xian-Xian Liu, Mingkun Xu, Yuanyuan Wei, Huafeng Qin, Qun Song, Simon Fong, Feng Tien, Wei Luo, Juntao Gao, Zhihua Zhang, Shirley Siu
Timely and precise classification and segmentation of gastric bleeding in endoscopic imagery are pivotal for rapid diagnosis of and intervention in gastric complications, which is critical in life-saving medical procedures.
no code implementations • 14 Oct 2024 • Changqing Gong, Huafeng Qin, Mounîm A. El-Yacoubi
Alzheimer's Disease (AD) is a prevalent neurodegenerative condition where early detection is vital.
no code implementations • 22 Sep 2024 • Huafeng Qin, Hongyu Zhu, Xin Jin, Xin Yu, Mounim A. El-Yacoubi, Shuqiang Yang
First, we define a supernet and propose a global and local alternate neural architecture search (NAS) method that alternately searches for the optimal architecture via differentiable NAS.
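A minimal sketch of the generic mechanism behind such a search, assuming a DARTS-style differentiable supernet; the paper's specific global/local alternation is not reproduced here, and the candidate operations, sizes, and optimizers are illustrative assumptions.

```python
# Sketch of a differentiable-NAS "mixed operation" and an alternate update
# of weights vs. architecture parameters (illustrative, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One supernet edge: a softmax-weighted sum of candidate operations."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),
        ])
        # Architecture parameters (alpha), relaxed to be differentiable.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

def alternate_step(model, w_opt, a_opt, train_batch, val_batch, loss_fn):
    """Alternately update network weights (train data) and alphas (val data)."""
    x_t, y_t = train_batch
    w_opt.zero_grad(); loss_fn(model(x_t), y_t).backward(); w_opt.step()
    x_v, y_v = val_batch
    a_opt.zero_grad(); loss_fn(model(x_v), y_v).backward(); a_opt.step()
```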
no code implementations • 18 Sep 2024 • Hongyu Zhu, Xin Jin, Hongchao Liao, Yan Xiang, Mounim A. El-Yacoubi, Huafeng Qin
Eye movement biometrics is a secure and innovative identification method.
1 code implementation • 8 Sep 2024 • Xin Jin, Hongyu Zhu, Siyuan Li, Zedong Wang, Zicheng Liu, Chang Yu, Huafeng Qin, Stan Z. Li
As Deep Neural Networks have achieved thrilling breakthroughs in the past decade, data augmentations have garnered increasing attention as regularization techniques when massive labeled data are unavailable.
no code implementations • 20 Aug 2024 • Huafeng Qin, Yuming Fu, Huiyan Zhang, Mounim A. El-Yacoubi, Xinbo Gao, Qun Song, Jun Wang
At the testing stage, given an adversarial sample, MsMemoryGAN retrieves the most relevant normal patterns from memory for reconstruction.
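A hedged sketch of the kind of memory-based retrieval described above: a latent code queries a bank of learned normal patterns and is replaced by a soft combination of its closest matches. The memory size, feature dimension, and hard-shrinkage threshold below are illustrative assumptions, not the paper's values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryModule(nn.Module):
    def __init__(self, num_items=512, dim=256, shrink_thres=1.0 / 512):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(num_items, dim))  # learned normal patterns
        self.shrink_thres = shrink_thres

    def forward(self, z):                      # z: (batch, dim) latent codes
        # Cosine-similarity addressing of the memory items.
        attn = F.softmax(F.normalize(z, dim=1) @
                         F.normalize(self.memory, dim=1).t(), dim=1)
        # Hard shrinkage: keep only strong matches to normal patterns,
        # so adversarial perturbations cannot be reproduced from memory.
        attn = F.relu(attn - self.shrink_thres)
        attn = attn / (attn.sum(dim=1, keepdim=True) + 1e-12)
        return attn @ self.memory              # reconstructed latent code
```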
no code implementations • 11 Aug 2024 • Huafeng Qin, Yuming Fu, Jing Chen, Mounim A. El-Yacoubi, Xinbo Gao, Feng Xi
In this paper, we first propose a hybrid network structure named Global-local Vision Mamba (GLVM) to explicitly learn the local correlations in images and the global dependencies among tokens for vein feature representation.
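A rough sketch of a global-local hybrid block in this spirit: a depthwise convolution branch captures local correlations while a token-mixing branch models global dependencies. The paper's global branch is Mamba-based; standard multi-head attention is substituted below only to keep the sketch self-contained, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class GlobalLocalBlock(nn.Module):
    def __init__(self, dim=192, heads=4):
        super().__init__()
        self.local = nn.Sequential(                       # local branch (depthwise conv)
            nn.Conv1d(dim, dim, kernel_size=3, padding=1, groups=dim),
            nn.GELU(),
        )
        self.norm = nn.LayerNorm(dim)
        self.global_mixer = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens):                            # (batch, seq, dim)
        local = self.local(tokens.transpose(1, 2)).transpose(1, 2)
        x = self.norm(tokens)
        global_out, _ = self.global_mixer(x, x, x)        # global token mixing
        return tokens + local + global_out                # fuse both branches

x = torch.randn(2, 64, 192)          # e.g. 64 vein-image patch tokens
print(GlobalLocalBlock()(x).shape)   # torch.Size([2, 64, 192])
```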
2 code implementations • 10 Jul 2024 • Huafeng Qin, Xin Jin, Hongyu Zhu, Hongchao Liao, Mounîm A. El-Yacoubi, Xinbo Gao
Mixup data augmentation approaches have been applied for various tasks of deep learning to improve the generalization ability of deep neural networks.
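For reference, the vanilla mixup operation such approaches build on forms convex combinations of input pairs and their one-hot labels; the Beta(alpha, alpha) sampling below is the usual illustrative choice rather than any specific paper's setting.

```python
import torch

def mixup(x, y_onehot, alpha=1.0):
    # Sample a mixing coefficient and a random pairing of the batch.
    lam = torch.distributions.Beta(alpha, alpha).sample()
    idx = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[idx]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[idx]
    return x_mix, y_mix
```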
no code implementations • 21 May 2024 • Xin Jin, Hongyu Zhu, Mounîm A. El Yacoubi, Haiyang Li, Hongchao Liao, Huafeng Qin, Yun Jiang
To enable CNNs to capture comprehensive feature representations from palm-vein images, we explored the effect of convolutional kernel size on the performance of palm-vein identification networks and designed LaKNet, a network leveraging large-kernel convolutions and a gating mechanism.
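A hedged sketch of a block combining a large-kernel depthwise convolution with a gating branch; the kernel size and gating layout below are illustrative assumptions rather than LaKNet's actual design.

```python
import torch
import torch.nn as nn

class LargeKernelGatedBlock(nn.Module):
    def __init__(self, channels=64, kernel_size=31):
        super().__init__()
        # Depthwise large-kernel convolution gives a wide receptive field cheaply.
        self.large_kernel = nn.Conv2d(channels, channels, kernel_size,
                                      padding=kernel_size // 2, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, 1)
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, x):
        feat = self.pointwise(self.large_kernel(x))  # large-kernel features
        return x + feat * self.gate(x)               # gated residual fusion
```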
no code implementations • 16 Jan 2024 • Huafeng Qin, Yiquan Wu, Mounim A. El-Yacoubi, Jun Wang, Guangxiang Yang
To overcome this problem, in this paper we propose an adversarial masking contrastive learning (AMCL) approach that generates challenging samples to train a more robust contrastive learning model for the downstream palm-vein recognition task, by alternately optimizing the encoder of the contrastive learning model and a set of latent variables.
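A minimal sketch of such an alternate optimization, assuming the latent variables parameterize a soft mask and an InfoNCE objective: the mask is updated to increase the contrastive loss (producing harder views), then the encoder is updated to decrease it. The masking form and step sizes are assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temp=0.2):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temp
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def amcl_step(encoder, images, mask_logits, enc_opt, mask_lr=0.1):
    # 1) Adversarial step: make the mask harder for the current encoder.
    mask_logits = mask_logits.detach().requires_grad_(True)
    masked = images * torch.sigmoid(mask_logits)
    loss_adv = info_nce(encoder(images), encoder(masked))
    loss_adv.backward()
    mask_logits = mask_logits + mask_lr * mask_logits.grad.sign()  # gradient ascent
    # 2) Encoder step: learn to stay invariant to the harder masked view.
    enc_opt.zero_grad()
    masked = images * torch.sigmoid(mask_logits.detach())
    info_nce(encoder(images), encoder(masked)).backward()
    enc_opt.step()
    return mask_logits.detach()
```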
no code implementations • 10 Jan 2024 • Huafeng Qin, Hongyu Zhu, Xin Jin, Qun Song, Mounim A. El-Yacoubi, Xinbo Gao
To this end, we propose a mixed block consisting of three modules: a transformer, an attention long short-term memory (attention LSTM), and a Fourier transformer.
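A hedged sketch of a three-branch mixed block over a token sequence, with a transformer branch, an attention-weighted LSTM branch, and an FNet-style Fourier branch; the summation-based fusion and all layer sizes are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class MixedBlock(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.lstm_attn = nn.Linear(dim, 1)          # attention over LSTM states
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                           # (batch, seq, dim)
        h = self.norm(x)
        t_out, _ = self.attn(h, h, h)               # transformer branch
        l_states, _ = self.lstm(h)                  # attention-LSTM branch
        w = torch.softmax(self.lstm_attn(l_states), dim=1)
        l_out = w * l_states                        # re-weight each timestep
        f_out = torch.fft.fft(h, dim=1).real        # Fourier token-mixing branch
        return x + t_out + l_out + f_out            # simple additive fusion
```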
2 code implementations • 19 Dec 2023 • Huafeng Qin, Xin Jin, Yun Jiang, Mounim A. El-Yacoubi, Xinbo Gao
In this paper, we propose AdAutomixup, an adversarial automatic mixup augmentation approach that generates challenging samples to train a robust classifier for image classification, by alternately optimizing the classifier and the mixup sample generator.
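A minimal sketch of that alternating scheme: a mix generator is pushed to produce harder mixed samples (higher classifier loss), and the classifier is then trained on them. The mask-based generator, single-pair mixing, and soft-label loss below are simplifying assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def mix_pair(x, mask_logits):
    m = torch.sigmoid(mask_logits)                 # per-pixel mixing mask
    idx = torch.randperm(x.size(0))
    return m * x + (1 - m) * x[idx], idx, m.mean()

def adversarial_automix_step(classifier, generator, cls_opt, gen_opt, x, y):
    # 1) Generator step: maximize classifier loss on the mixed samples.
    x_mix, idx, lam = mix_pair(x, generator(x))
    loss = lam * F.cross_entropy(classifier(x_mix), y) + \
           (1 - lam) * F.cross_entropy(classifier(x_mix), y[idx])
    gen_opt.zero_grad(); (-loss).backward(); gen_opt.step()
    # 2) Classifier step: minimize loss on freshly mixed samples.
    x_mix, idx, lam = mix_pair(x, generator(x).detach())
    loss = lam * F.cross_entropy(classifier(x_mix), y) + \
           (1 - lam) * F.cross_entropy(classifier(x_mix), y[idx])
    cls_opt.zero_grad(); loss.backward(); cls_opt.step()
```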
no code implementations • IEEE Transactions on Information Forensics and Security 2021 • Huafeng Qin, Mounim A. El-Yacoubi, Yantao Li, Chongwen Liu
Despite recent advances of deep neural networks in hand vein identification, the existing solutions assume the availability of a large and rich set of training image samples.