1 code implementation • 6 Jul 2020 • Saurav Jha, Martin Schiemer, Juan Ye
Given that continual learning techniques for deep neural networks have largely focused on the domain of computer vision, there is a need to identify which of them generalize well to other tasks such as human activity recognition (HAR).
1 code implementation • 19 Apr 2021 • Saurav Jha, Martin Schiemer, Franco Zambonelli, Juan Ye
This paper aims to assess to what extent such continual learning techniques can be applied to the HAR domain.
1 code implementation • 5 Apr 2018 • Saurav Jha, Nikhil Agarwal, Suneeta Agarwal
We show that the Inception+SVM model establishes a state-of-the-art F1 score on the task of gender recognition of cartoon faces.
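The paper's exact architecture and training setup are not reproduced here; as a rough illustration, such a hybrid typically uses a pretrained Inception network as a fixed feature extractor with an SVM classifier trained on the pooled features. A minimal sketch under that assumption follows (the placeholder data, image size, and kernel choice below are illustrative, not the paper's settings):

```python
# Minimal sketch of an Inception + SVM pipeline (assumed setup; the paper's
# exact architecture, preprocessing, and data differ).
import numpy as np
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.applications.inception_v3 import preprocess_input
from sklearn.svm import SVC
from sklearn.metrics import f1_score

# Pretrained InceptionV3 as a frozen feature extractor (global average pooling).
backbone = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    """images: float array of shape (N, 299, 299, 3) with values in [0, 255]."""
    return backbone.predict(preprocess_input(images), verbose=0)

# Placeholder cartoon-face crops and binary gender labels (0 / 1).
X_train = np.random.rand(8, 299, 299, 3) * 255
y_train = np.array([0, 1, 0, 1, 0, 1, 0, 1])
X_test, y_test = X_train, y_train  # placeholder split

clf = SVC(kernel="rbf", C=1.0)     # SVM on top of the CNN features
clf.fit(extract_features(X_train), y_train)
preds = clf.predict(extract_features(X_test))
print("F1:", f1_score(y_test, preds))
```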
1 code implementation • 24 Mar 2022 • Francesco Pelosin, Saurav Jha, Andrea Torsello, Bogdan Raducanu, Joost Van de Weijer
In this paper, we investigate the continual learning of Vision Transformers (ViT) for the challenging exemplar-free scenario, with special focus on how to efficiently distill the knowledge of its crucial self-attention mechanism (SAM).
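The authors' distillation objective is not reproduced here; as an illustrative sketch, one common way to distill a ViT's self-attention is to penalize the divergence between the previous-task (teacher) model's attention maps and the current (student) model's maps. The tensor shapes and KL-based loss below are assumptions, not the paper's formulation:

```python
# Illustrative attention-map distillation loss for a ViT (a sketch, not the
# paper's exact objective). attn_student / attn_teacher are assumed to be
# lists of per-layer attention tensors of shape (B, heads, tokens, tokens),
# e.g. as returned by a ViT configured to output attention weights.
import torch
import torch.nn.functional as F

def attention_distillation_loss(attn_student, attn_teacher):
    loss = 0.0
    for a_s, a_t in zip(attn_student, attn_teacher):
        # KL divergence between attention distributions, teacher detached.
        loss = loss + F.kl_div(
            torch.log(a_s.clamp_min(1e-8)),
            a_t.detach(),
            reduction="batchmean",
        )
    return loss / len(attn_student)
```

Detaching the teacher's maps keeps gradients flowing only through the current model, the usual pattern in exemplar-free distillation where no old-task data is replayed.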
1 code implementation • 1 Sep 2022 • Saurav Jha, Dong Gong, Xuesong Wang, Richard E. Turner, Lina Yao
We shed light on their potential to bring several recent advances in other deep learning domains under one umbrella.
1 code implementation • 21 Nov 2018 • Saurav Jha, Akhilesh Sudhakar, Anil Kumar Singh
The ambiguities that arise when morphemes recombine into several possible inflections of a word make the prediction of syntactic traits in Morphologically Rich Languages (MRLs) a notoriously complicated task.
1 code implementation • 28 Mar 2024 • Saurav Jha, Dong Gong, Lina Yao
The deterministic nature of existing finetuning methods makes them overlook the many possible interactions across modalities and renders them unsuitable for high-risk CL tasks that require reliable uncertainty estimation.
1 code implementation • 21 Nov 2018 • Saurav Jha, Akhilesh Sudhakar, Anil Kumar Singh
Out-of-vocabulary (OOV) words can pose serious challenges for machine translation (MT) tasks, and in particular for low-resource language (LRL) pairs, i.e., language pairs for which few or no parallel corpora exist.
no code implementations • 21 Sep 2021 • Saurav Jha
With experiments on the WMT20 chat translation task dataset, we demonstrate that NMT confusion networks can help to reduce the perplexity of both n-gram and recurrent neural network LMs compared to those trained only on N-best translations.
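Perplexity here is the standard exponentiated average negative log-likelihood of held-out text. As a reminder of how the metric is computed, a toy add-one-smoothed bigram LM (hypothetical data, unrelated to the paper's models or corpora) might look like:

```python
# Toy add-one-smoothed bigram LM with perplexity, to illustrate the metric
# referenced above (hypothetical data; not the paper's models or corpora).
import math
from collections import Counter

train = [["how", "are", "you"], ["how", "do", "you", "do"]]
test = [["how", "are", "you"]]

vocab = {w for sent in train for w in sent} | {"<s>", "</s>"}
bigrams, unigrams = Counter(), Counter()
for sent in train:
    toks = ["<s>"] + sent + ["</s>"]
    unigrams.update(toks[:-1])
    bigrams.update(zip(toks[:-1], toks[1:]))

def prob(prev, word):
    # Add-one (Laplace) smoothing.
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + len(vocab))

log_prob, n_tokens = 0.0, 0
for sent in test:
    toks = ["<s>"] + sent + ["</s>"]
    for prev, word in zip(toks[:-1], toks[1:]):
        log_prob += math.log(prob(prev, word))
        n_tokens += 1

print("perplexity:", math.exp(-log_prob / n_tokens))
```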
no code implementations • ICCV 2023 • Yun Li, Zhe Liu, Saurav Jha, Sally Cripps, Lina Yao
Open-World Compositional Zero-Shot Learning (OW-CZSL) aims to recognize new compositions of seen attributes and objects.