no code implementations • DCLRL (LREC) 2022 • Imane Guellil, Ahsan Adeel, Faical Azouaou, Mohamed Boubred, Yousra Houichi, Akram Abdelhaq Moumna
In this paper, an approach for hate speech detection against women in the Arabic community on social media (e.g., YouTube) is proposed.
1 code implementation • 11 Feb 2024 • Leandro A. Passos, Douglas Rodrigues, Danilo Jodas, Kelton A. P. Costa, Ahsan Adeel, João Paulo Papa
This paper presents BioNeRF, a biologically plausible architecture that models scenes in a 3D representation and synthesizes new views through radiance fields.
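As a reference point for how radiance-field models synthesize views, the sketch below implements the standard NeRF-style volume-rendering quadrature that composites per-sample densities and colors along a ray. This is the generic rendering step only; BioNeRF's biologically inspired components are not reproduced, and all names here are ours.

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Composite densities and colors along one ray (generic NeRF-style
    quadrature; illustrative sketch, not BioNeRF's architecture).

    sigmas: (N,) volume densities at N samples along the ray
    colors: (N, 3) RGB at each sample
    deltas: (N,) distances between consecutive samples
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)       # per-sample opacity
    trans = np.cumprod(1.0 - alphas + 1e-10)      # transmittance after each sample
    trans = np.concatenate([[1.0], trans[:-1]])   # shift: T_i = prod_{j<i}(1 - alpha_j)
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)  # expected ray color
```

An opaque first sample should fully occlude everything behind it, which is a quick sanity check on the transmittance shift.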
no code implementations • 16 May 2023 • Ahsan Adeel, Junaid Muzaffar, Khubaib Ahmed, Mohsin Raza
Going beyond 'dendritic democracy', we introduce a 'democracy of local processors', termed Cooperator.
no code implementations • 24 Oct 2022 • Abhijeet Bishnu, Ankit Gupta, Mandar Gogate, Kia Dashtipour, Ahsan Adeel, Amir Hussain, Mathini Sellathurai, Tharmalingam Ratnarajah
In this paper, we design a first-of-its-kind transceiver (PHY layer) prototype for cloud-based audio-visual (AV) speech enhancement (SE), complying with the high data rate and low latency requirements of future multimodal hearing assistive technology.
no code implementations • 24 Oct 2022 • Ahsan Adeel, Adewale Adetomi, Khubaib Ahmed, Amir Hussain, Tughrul Arslan, W. A. Phillips
Context-sensitive two-point layer 5 pyramidal cells (L5PCs) were discovered as long ago as 1999.
1 code implementation • 26 Sep 2022 • Danilo Samuel Jodas, Leandro Aparecido Passos, Ahsan Adeel, João Paulo Papa
Minimal parameter setup in machine learning models is desirable, as it avoids time-consuming optimization processes.
no code implementations • 7 Sep 2022 • Mohsin Raza, Leandro A. Passos, Ahmed Khubaib, Ahsan Adeel
This paper proposes MBURST, a novel multimodal solution for audio-visual speech enhancement that draws on the most recent neurological discoveries regarding pyramidal cells of the prefrontal cortex and other brain regions.
no code implementations • 15 Jul 2022 • Ahsan Adeel, Mario Franco, Mohsin Raza, Khubaib Ahmed
Deep learning (DL) has big-data processing capabilities that are as good as, or even better than, those of humans in many real-world domains, but at the cost of high energy requirements that may be unsustainable in some applications, and of errors that, though infrequent, can be large.
no code implementations • 6 Jun 2022 • Leandro A. Passos, João Paulo Papa, Amir Hussain, Ahsan Adeel
Despite the recent success of machine learning algorithms, most models face drawbacks when considering more complex tasks requiring interaction between different sources, such as multimodal input data and logical time sequences.
no code implementations • 11 Feb 2022 • Tassadaq Hussain, Muhammad Diyan, Mandar Gogate, Kia Dashtipour, Ahsan Adeel, Yu Tsao, Amir Hussain
Current deep learning (DL) based approaches to speech intelligibility enhancement in noisy environments are often trained to minimise the feature distance between noise-free speech and enhanced speech signals.
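To make the training objective described here concrete, the sketch below computes a mean-squared distance between log-magnitude spectrogram features of enhanced and noise-free speech, the kind of feature-distance loss such DL enhancers typically minimise. The specific features and loss used by the authors may differ; this is an illustrative assumption.

```python
import numpy as np

def feature_distance_loss(enhanced_mag, clean_mag):
    """Mean-squared distance between log-magnitude spectrogram features
    of enhanced and noise-free speech (illustrative sketch of the common
    training objective; not the authors' exact loss).

    enhanced_mag, clean_mag: non-negative arrays of matching shape,
    e.g. (frames, freq_bins) magnitude spectrograms.
    """
    eps = 1e-8  # avoid log(0)
    diff = np.log(enhanced_mag + eps) - np.log(clean_mag + eps)
    return np.mean(diff ** 2)
```

When the enhanced features exactly match the clean ones the loss is zero, and it grows with any spectral mismatch, which is precisely why such losses do not directly optimise perceptual intelligibility.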
no code implementations • 9 Feb 2022 • Leandro Aparecido Passos, João Paulo Papa, Javier Del Ser, Amir Hussain, Ahsan Adeel
Our proposed AV CCA-GNN model addresses multimodal representation learning.
no code implementations • 8 Feb 2022 • Tassadaq Hussain, Muhammad Diyan, Mandar Gogate, Kia Dashtipour, Ahsan Adeel, Yu Tsao, Amir Hussain
Current deep learning (DL) based approaches to speech intelligibility enhancement in noisy environments are generally trained to minimise the distance between clean and enhanced speech features.
no code implementations • 3 Apr 2021 • Imane Guellil, Ahsan Adeel, Faical Azouaou, Mohamed Boubred, Yousra Houichi, Akram Abdelhaq Moumna
In this paper, an approach for hate speech detection against women in the Arabic community on social media (e.g., YouTube) is proposed.
no code implementations • 30 Sep 2019 • Mandar Gogate, Ahsan Adeel, Kia Dashtipour, Peter Derleth, Amir Hussain
This paper presents a first-of-its-kind audio-visual (AV) speech enhancement challenge in real noisy settings.
no code implementations • 23 Sep 2019 • Mandar Gogate, Kia Dashtipour, Ahsan Adeel, Amir Hussain
In addition, our work challenges the popular belief that the scarcity of multi-language, large-vocabulary AV corpora covering a wide variety of noises is a major bottleneck to building robust language-, speaker- and noise-independent SE systems.
no code implementations • 5 Nov 2018 • Ahsan Adeel
It is believed that the conscious neuron inherently contains enough knowledge, acquired through past learning and reasoning, about the situation in which the problem is to be solved, and that it defines the precise role of incoming multisensory signals in originating precise neural firing (exhibiting switch-like behaviour).
no code implementations • WS 2018 • Imane Guellil, Ahsan Adeel, Faical Azouaou, Fodil Benali, Ala-eddine Hachani, Amir Hussain
Afterwards, we automatically classify the sentiment of the transliterated corpus using an automatically annotated corpus.
no code implementations • 28 Aug 2018 • Ahsan Adeel, Mandar Gogate, Amir Hussain
In this paper, we introduce a novel contextual AV switching component that contextually exploits AV cues with respect to different operating conditions to estimate clean audio, without requiring any SNR estimation.
no code implementations • 25 Aug 2018 • Fengling Jiang, Bin Kong, Ahsan Adeel, Yun Xiao, Amir Hussain
Simultaneously, the foreground prior, treated as virtual absorbing nodes, is used to calculate the absorption time and obtain the background probability.
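The absorption-time quantity mentioned here is standard absorbing-Markov-chain machinery: with Q the transient-to-transient block of the transition matrix, the fundamental matrix is N = (I - Q)^{-1} and the expected steps to absorption are t = N·1. The sketch below computes it under the reading that foreground nodes play the absorbing role; variable names are ours.

```python
import numpy as np

def absorption_times(P, absorbing):
    """Expected number of steps to absorption for each transient state
    of an absorbing Markov chain (textbook computation; the mapping of
    superpixels to absorbing nodes follows our reading of the paper).

    P: (n, n) row-stochastic transition matrix
    absorbing: (n,) boolean mask marking absorbing states
    """
    absorbing = np.asarray(absorbing)
    transient = ~absorbing
    Q = P[np.ix_(transient, transient)]          # transient-to-transient block
    N = np.linalg.inv(np.eye(Q.shape[0]) - Q)    # fundamental matrix
    return N @ np.ones(Q.shape[0])               # t = N * 1
```

For a single transient state that self-loops with probability p before being absorbed, the expected time is 1/(1-p), which gives a quick correctness check.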
no code implementations • 15 Aug 2018 • Kia Dashtipour, Mandar Gogate, Ahsan Adeel, Cosimo Ieracitano, Hadi Larijani, Amir Hussain
The rise of social media is enabling people to freely express their opinions about products and services.
no code implementations • 15 Aug 2018 • Imane Guellil, Ahsan Adeel, Faical Azouaou, Amir Hussain
In this paper, we present a novel approach to automatically construct an annotated sentiment corpus for Algerian dialect (a Maghrebi Arabic dialect).
no code implementations • 31 Jul 2018 • Ahsan Adeel, Mandar Gogate, Amir Hussain, William M. Whitmer
The proposed audio-visual (AV) speech enhancement framework operates at two levels.
no code implementations • 31 Jul 2018 • Mandar Gogate, Ahsan Adeel, Ricard Marxer, Jon Barker, Amir Hussain
The process of selective attention in the brain is known to contextually exploit the available audio and visual cues to better focus on the target speaker while filtering out other noise sources.