1 code implementation • 6 Jul 2020 • Saurav Jha, Martin Schiemer, Juan Ye
Given that continual learning techniques for deep neural networks have focused largely on computer vision, there is a need to identify which of them generalize well to other tasks such as human activity recognition (HAR).
1 code implementation • 19 Apr 2021 • Saurav Jha, Martin Schiemer, Franco Zambonelli, Juan Ye
This paper aims to assess to what extent such continual learning techniques can be applied to the HAR domain.
1 code implementation • 7 Jul 2022 • Chengfeng Zhou, Songchang Chen, Chenming Xu, Jun Wang, Feng Liu, Chun Zhang, Juan Ye, Hefeng Huang, Dahong Qian
In this study, we present a novel normalization technique called window normalization (WIN), a simple yet effective alternative to existing normalization methods, to improve model generalization on heterogeneous medical images.
no code implementations • 7 May 2021 • Yiming Bao, Jun Wang, Tong Li, Linyan Wang, Jianwei Xu, Juan Ye, Dahong Qian
Specifically, the encoder of a DL model that is pre-trained on the source domain is used to initialize the encoder of a reconstruction model.
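The initialization step described above can be sketched as follows. This is an illustrative toy, not the paper's actual architecture: the model structure, layer names, and weight shapes are assumptions; the point is only the weight transfer from a source-domain pre-trained encoder into a reconstruction model whose remaining parts stay freshly initialised.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_model():
    # Toy two-part model: an "encoder" and a "head" (names hypothetical),
    # each holding plain NumPy weight matrices.
    return {
        "encoder": {"w1": rng.standard_normal((16, 8))},
        "head":    {"w2": rng.standard_normal((8, 4))},
    }

# Model assumed to be pre-trained on the source domain.
source_model = make_model()

# Freshly initialised reconstruction model for the target domain.
recon_model = make_model()

# Transfer: copy the source encoder weights into the reconstruction
# model's encoder, leaving the rest of it randomly initialised.
for name, w in source_model["encoder"].items():
    recon_model["encoder"][name] = w.copy()
```

After the copy, the reconstruction model's encoder matches the source encoder exactly, while its head remains at its own random initialization.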
1 code implementation • 17 Jun 2022 • Ruilong Dan, Yunxiang Li, Yijie Wang, Gangyong Jia, Ruiquan Ge, Juan Ye, Qun Jin, Yaqi Wang
Precise and rapid categorization of images in the B-scan ultrasound modality is vital for diagnosing ocular diseases.
no code implementations • 30 Jun 2023 • Yaxiong Lei, Shijing He, Mohamed Khamis, Juan Ye
In recent years, we have witnessed a growing number of interactive systems on handheld mobile devices that utilise gaze as a sole or complementary interaction modality.
no code implementations • 24 Aug 2023 • Yuheng Wang, Juan Ye, David L. Borchers
They can process large volumes of data quickly, but they do not detect all vocalisations, and they generate some false positives (vocalisations that are not from the target species).
no code implementations • 5 Oct 2023 • Martin Schiemer, Clemens JS Schaefer, Jayden Parker Vap, Mark James Horeni, Yu Emma Wang, Juan Ye, Siddharth Joshi
In this paper, we propose a technique that leverages inexpensive Hadamard transforms to enable low-precision training with only integer matrix multiplications.
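A minimal sketch of the underlying idea, not the paper's method: because a Hadamard matrix H (Sylvester construction) satisfies H Hᵀ = nI, a pair of compensating Hadamard rotations can be applied to activations and weights before quantization, after which the matrix multiplication itself uses only integers. The quantization scheme and all names below are illustrative assumptions.

```python
import numpy as np

def hadamard(n):
    # Sylvester construction; n must be a power of two.
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(0)
n = 8
H = hadamard(n)                      # symmetric, H @ H = n * I
x = rng.standard_normal((4, n))      # activations
W = rng.standard_normal((n, n))      # weights

# Compensating rotations: (x @ H / n) @ (H @ W) == x @ W exactly,
# since H @ H = n * I. The rotation spreads out large values,
# which makes low-precision quantization less damaging.
x_rot = x @ H / n
W_rot = H @ W

def quantize(a, bits=8):
    # Simple symmetric per-tensor quantization (illustrative only).
    scale = np.abs(a).max() / (2 ** (bits - 1) - 1)
    return np.round(a / scale).astype(np.int32), scale

qx, sx = quantize(x_rot)
qw, sw = quantize(W_rot)

# The matmul itself uses only integer arithmetic.
y_int = qx.astype(np.int64) @ qw.astype(np.int64)
y = y_int * (sx * sw)                # dequantize the result

err = np.abs(y - x @ W).max()        # small quantization error remains
```

Without quantization the rotations cancel exactly; with 8-bit quantization the integer-only product recovers `x @ W` up to a small error.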
no code implementations • 13 Nov 2023 • Ruiquan Ge, Xiangyang Hu, Rungen Huang, Gangyong Jia, Yaqi Wang, Renshu Gu, Changmiao Wang, Elazab Ahmed, Linyan Wang, Juan Ye, Ye Li
In TTMFN, we present a two-stream multimodal co-attention transformer module to take full advantage of the complex relationships between different modalities and the potential connections within the modalities.