1 code implementation • ACL 2021 • Shuang Wu, Xiaoning Song, ZhenHua Feng
This paper presents a novel Multi-metadata Embedding based Cross-Transformer (MECT) to improve the performance of Chinese NER by fusing the structural information of Chinese characters.
1 code implementation • 23 Jan 2022 • Ming Dai, Enhui Zheng, ZhenHua Feng, Jiedong Zhuang, Wankou Yang
Finally, we enhance the Recall@K metric and introduce a new measurement, SDM@K, to evaluate a trained model from both the retrieval and localization perspectives simultaneously.
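The exact definition of SDM@K is given in the paper; as background, the standard Recall@K retrieval metric it builds on can be sketched as follows (the function name and toy data are illustrative):

```python
import numpy as np

def recall_at_k(sim, gt, k):
    """Fraction of queries whose ground-truth gallery item appears
    among the k most similar gallery entries.

    sim: (num_queries, num_gallery) similarity matrix
    gt:  (num_queries,) index of the correct gallery item per query
    """
    # indices of the k highest-similarity gallery items per query
    topk = np.argsort(-sim, axis=1)[:, :k]
    hits = (topk == gt[:, None]).any(axis=1)
    return hits.mean()

sim = np.array([[0.9, 0.1, 0.3],
                [0.2, 0.4, 0.8]])
gt = np.array([0, 1])
print(recall_at_k(sim, gt, 1))  # query 0 hits, query 1 misses -> 0.5
print(recall_at_k(sim, gt, 2))  # both hit -> 1.0
```

SDM@K additionally weighs in spatial localization quality, which pure Recall@K ignores.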
1 code implementation • 12 May 2022 • Shuang Wu, Xiaoning Song, ZhenHua Feng, Xiao-Jun Wu
To deal with this issue, we propose a novel lexical enhancement method, InterFormer, that effectively reduces computational and memory costs by constructing non-flat lattices.
Ranked #9 on Chinese Named Entity Recognition on Resume NER
1 code implementation • 31 Mar 2024 • Jiantao Wu, Shentong Mo, Sara Atito, ZhenHua Feng, Josef Kittler, Muhammad Awais
Recently, masked image modeling (MIM), an important self-supervised learning (SSL) method, has drawn attention for its effectiveness in learning data representation from unlabeled data.
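The core of MIM is to hide a random subset of image patches and train the model to reconstruct them. A minimal sketch of the masking step (a generic MIM-style strategy, not this paper's exact scheme; the 75% ratio follows common MAE practice):

```python
import numpy as np

def random_patch_mask(num_patches, mask_ratio, rng):
    """Sample a boolean mask over image patches; True = masked.
    The model is trained to reconstruct the masked patches from
    the visible ones."""
    num_masked = int(num_patches * mask_ratio)
    idx = rng.permutation(num_patches)[:num_masked]
    mask = np.zeros(num_patches, dtype=bool)
    mask[idx] = True
    return mask

rng = np.random.default_rng(0)
mask = random_patch_mask(196, 0.75, rng)  # e.g. a 14x14 patch grid
print(mask.sum())  # 147 of 196 patches masked
```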
no code implementations • 17 Jan 2021 • Shuangping Jin, ZhenHua Feng, Wankou Yang, Josef Kittler
Different from the standard BN layer that uses all the training data to calculate a single set of parameters, SepBN considers that the samples of a training dataset may belong to different sub-domains.
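The idea of maintaining several parameter sets for different sub-domains can be sketched as below. This is a hypothetical formulation for illustration only (function name, gating scheme, and shapes are assumptions, not the paper's exact SepBN design): shared batch statistics are used for normalization, and a per-sample mixture of K affine parameter sets replaces the single set of standard BN.

```python
import numpy as np

def sep_bn_sketch(x, gammas, betas, weights, eps=1e-5):
    """Normalize with shared batch statistics, then apply a
    per-sample mixture of K affine parameter sets (one per
    assumed sub-domain).

    x: (N, C) features; gammas, betas: (K, C); weights: (N, K)
    mixture weights, e.g. produced by a learned softmax gate.
    """
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    xn = (x - mean) / np.sqrt(var + eps)  # shared normalization
    gamma = weights @ gammas              # (N, C) per-sample scale
    beta = weights @ betas                # (N, C) per-sample shift
    return xn * gamma + beta

rng = np.random.default_rng(1)
x = rng.normal(size=(6, 4))
gammas, betas = np.ones((3, 4)), np.zeros((3, 4))
weights = np.full((6, 3), 1 / 3)          # uniform gate for illustration
out = sep_bn_sketch(x, gammas, betas, weights)
print(out.shape)  # (6, 4)
```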
no code implementations • 5 Mar 2021 • Syed Safwan Khalid, Muhammad Awais, Chi-Ho Chan, ZhenHua Feng, Ammarah Farooq, Ali Akbari, Josef Kittler
One key ingredient of DCNN-based FR is the appropriate design of a loss function that ensures discrimination between various identities.
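A widely used family of such discriminative losses adds a margin to the target class in angular space. The sketch below shows one common choice, an ArcFace-style additive angular margin, purely as an illustrative example of the design space (it is not necessarily the loss proposed in this paper; names and hyper-parameters are assumptions):

```python
import numpy as np

def margin_softmax_loss(emb, centres, labels, s=30.0, m=0.5):
    """Additive-angular-margin softmax loss: push each embedding
    to be closer (by margin m in angle) to its own identity centre
    than to any other identity's centre.

    emb: (N, D) embeddings; centres: (C, D) identity centres;
    labels: (N,) identity index per sample.
    """
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    w = centres / np.linalg.norm(centres, axis=1, keepdims=True)
    cos = emb @ w.T                        # cosine similarity to each identity
    theta = np.arccos(np.clip(cos, -1 + 1e-7, 1 - 1e-7))
    rows = np.arange(len(labels))
    logits = s * cos
    logits[rows, labels] = s * np.cos(theta[rows, labels] + m)  # add margin
    # softmax cross-entropy on the scaled logits
    logits -= logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[rows, labels].mean()

rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
centres = rng.normal(size=(5, 16))
labels = rng.integers(0, 5, size=8)
loss = margin_softmax_loss(emb, centres, labels)
print(loss)
```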
no code implementations • 29 Sep 2021 • Changbin Shao, Wenbin Li, ZhenHua Feng, Jing Huo, Yang Gao
To boost the robustness of a model against adversarial examples, adversarial training has been regarded as a benchmark method.
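Adversarial training replaces (or augments) clean training inputs with adversarially perturbed ones. A minimal sketch on logistic regression using the classic FGSM attack — a standard baseline used here only to illustrate the general recipe, not this paper's specific method:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_example(x, y, w, b, eps):
    """FGSM: perturb x by eps in the sign of the loss gradient,
    the standard first-order way to craft an adversarial example."""
    grad_x = (sigmoid(x @ w + b) - y) * w  # d(BCE)/dx for logistic regression
    return x + eps * np.sign(grad_x)

def adversarial_train_step(x, y, w, b, eps, lr):
    """One adversarial training step: update the weights on the
    FGSM-perturbed input instead of the clean one."""
    x_adv = fgsm_example(x, y, w, b, eps)
    err = sigmoid(x_adv @ w + b) - y       # BCE gradient factor
    return w - lr * err * x_adv, b - lr * err

w, b = np.array([0.5, -0.3]), 0.1
x, y = np.array([1.0, 2.0]), 1.0
x_adv = fgsm_example(x, y, w, b, eps=0.1)
clean_loss = -np.log(sigmoid(x @ w + b))   # BCE with y = 1
adv_loss = -np.log(sigmoid(x_adv @ w + b))
print(adv_loss > clean_loss)  # True: the perturbation raises the loss
```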
no code implementations • 30 Nov 2021 • Sara Atito, Muhammad Awais, Ammarah Farooq, ZhenHua Feng, Josef Kittler
In this respect, the proposed SSL framework MC-SSL0.0 is a step towards Multi-Concept Self-Supervised Learning (MC-SSL), going beyond modelling a single dominant label in an image to effectively utilise the information from all the concepts present in it.
no code implementations • 13 Aug 2022 • Ming Dai, Enhui Zheng, ZhenHua Feng, Jiahao Chen, Wankou Yang
To validate the practicality of our framework, we construct a paired dataset, namely UL14, that consists of UAV and satellite views.
no code implementations • 16 Feb 2023 • Wenjie Zhang, Xiaoning Song, ZhenHua Feng, Tianyang Xu, XiaoJun Wu
Specifically, associating natural language words that fill the masked token with semantic relation labels (e.g., "org:founded_by") is difficult.
no code implementations • 22 Aug 2023 • Jiantao Wu, Shentong Mo, Muhammad Awais, Sara Atito, ZhenHua Feng, Josef Kittler
Self-supervised pretraining (SSP) has emerged as a popular technique in machine learning, enabling the extraction of meaningful feature representations without labelled data.
no code implementations • 11 Sep 2023 • Cong Wu, Xiao-Jun Wu, Josef Kittler, Tianyang Xu, Sara Atito, Muhammad Awais, ZhenHua Feng
Contrastive learning has achieved great success in skeleton-based action recognition.
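Contrastive methods in this setting typically pull two augmented views of the same skeleton sequence together while pushing other sequences apart. A generic InfoNCE objective sketches this (an illustrative standard formulation, not this paper's exact loss; names are assumptions):

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss between two augmented views: each sample's
    second view is its positive; all other samples in the batch
    act as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature     # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(logp).mean()           # positives are on the diagonal

z = np.eye(4)                # four orthogonal "skeleton embeddings"
loss = info_nce(z, z)        # positives perfectly aligned -> near-zero loss
print(loss)
```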
no code implementations • 2 Dec 2023 • Jiantao Wu, Shentong Mo, Sara Atito, Josef Kittler, ZhenHua Feng, Muhammad Awais
Recently, self-supervised metric learning has attracted attention for its potential to learn a generic distance function.
no code implementations • 13 Jan 2024 • Peng Yue, Yaochu Jin, Xuewu Dai, ZhenHua Feng, Dongliang Cui
Train timetable rescheduling (TTR) aims to promptly restore the original operation of trains after unexpected disturbances or disruptions.