Search Results for author: Chiheb Trabelsi

Found 10 papers, 7 papers with code

TransCAM: Transformer Attention-based CAM Refinement for Weakly Supervised Semantic Segmentation

1 code implementation • 14 Mar 2022 • Ruiwen Li, Zheda Mai, Chiheb Trabelsi, Zhibo Zhang, Jongseong Jang, Scott Sanner

In this paper, we propose TransCAM, a Conformer-based solution to WSSS that explicitly leverages the attention weights from the transformer branch of the Conformer to refine the CAM generated from the CNN branch.
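The core refinement step can be sketched in a few lines. The sketch below is illustrative rather than the authors' code: it assumes per-class CAMs of shape (C, H, W) from the CNN branch and a patch-to-patch attention map of shape (H*W, H*W) from the transformer branch, and all names are hypothetical.

```python
import numpy as np

def refine_cam_with_attention(cam, attention):
    """Propagate CNN-branch CAM scores along transformer attention.

    cam:       (C, H, W) per-class activation maps from the CNN branch.
    attention: (H*W, H*W) patch-to-patch attention, e.g. averaged over
               the transformer branch's heads and layers.
    """
    C, H, W = cam.shape
    flat = cam.reshape(C, H * W)   # one row of patch scores per class
    refined = flat @ attention.T   # each patch pools the scores it attends to
    return refined.reshape(C, H, W)

# Toy usage with random tensors standing in for real features.
cam = np.random.rand(21, 14, 14)
att = np.random.rand(14 * 14, 14 * 14)
att /= att.sum(axis=-1, keepdims=True)  # row-normalize like softmax attention
refined = refine_cam_with_attention(cam, att)
```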

Weakly-Supervised Semantic Segmentation

ExCon: Explanation-driven Supervised Contrastive Learning for Image Classification

1 code implementation • 28 Nov 2021 • Zhibo Zhang, Jongseong Jang, Chiheb Trabelsi, Ruiwen Li, Scott Sanner, Yeonjeong Jeong, Dongsub Shim

Contrastive learning has led to substantial improvements in the quality of learned embedding representations for tasks such as image classification.

Adversarial Robustness • Classification • +2

Retrieving Signals in the Frequency Domain with Deep Complex Extractors

1 code implementation • 25 Sep 2019 • Chiheb Trabelsi, Olexa Bilaniuk, Ousmane Dia, Ying Zhang, Mirco Ravanelli, Jonathan Binas, Negar Rostamzadeh, Christopher J. Pal

Using the Wall Street Journal dataset, we compare our phase-aware loss to several others that operate in both the time and frequency domains, and demonstrate the effectiveness of our proposed signal extraction method and loss.
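As a generic illustration of why a phase-aware frequency-domain loss behaves differently from a magnitude-only one (a minimal sketch, not the paper's exact loss formulation):

```python
import numpy as np

def magnitude_loss(est, ref):
    """Magnitude-only spectral loss: completely blind to phase."""
    return np.mean(np.abs(np.abs(est) - np.abs(ref)))

def phase_aware_loss(est, ref):
    """L1 distance between complex spectrogram values; penalizing the
    complex difference couples magnitude and phase errors."""
    return np.mean(np.abs(est - ref))

# Toy example: identical magnitudes, mismatched phases.
ref = np.exp(1j * np.linspace(0.0, np.pi, 8))
est = np.ones(8, dtype=complex)
print(magnitude_loss(est, ref))    # 0.0 -- the phase error is invisible
print(phase_aware_loss(est, ref))  # > 0 -- the phase error is penalized
```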

Audio Source Separation

Quaternion Convolutional Neural Networks for End-to-End Automatic Speech Recognition

1 code implementation • 20 Jun 2018 • Titouan Parcollet, Ying Zhang, Mohamed Morchid, Chiheb Trabelsi, Georges Linarès, Renato De Mori, Yoshua Bengio

Quaternion numbers and quaternion neural networks have shown their efficiency at processing multidimensional inputs as entities, encoding internal dependencies, and solving many tasks with fewer learnable parameters than real-valued models.
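The parameter saving comes from the Hamilton product: a quaternion weight has four real components that are reused across all four output components, where an unconstrained real layer of the same width would need sixteen. A self-contained sketch of the product itself (illustrative, not the paper's code):

```python
import numpy as np

def hamilton_product(q, p):
    """Hamilton product of quaternions q = (r, x, y, z) and p = (r, x, y, z)."""
    r1, x1, y1, z1 = q
    r2, x2, y2, z2 = p
    return np.array([
        r1 * r2 - x1 * x2 - y1 * y2 - z1 * z2,  # real part
        r1 * x2 + x1 * r2 + y1 * z2 - z1 * y2,  # i component
        r1 * y2 - x1 * z2 + y1 * r2 + z1 * x2,  # j component
        r1 * z2 + x1 * y2 - y1 * x2 + z1 * r2,  # k component
    ])

print(hamilton_product([1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0]))
# [-60.  12.  30.  24.]
```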

Automatic Speech Recognition (ASR) • +1

Quaternion Recurrent Neural Networks

3 code implementations • ICLR 2019 • Titouan Parcollet, Mirco Ravanelli, Mohamed Morchid, Georges Linarès, Chiheb Trabelsi, Renato de Mori, Yoshua Bengio

Recurrent neural networks (RNNs) are powerful architectures for modeling sequential data, owing to their capability to learn short- and long-term dependencies between the basic elements of a sequence.

Automatic Speech Recognition (ASR) • +1

On orthogonality and learning RNNs with long term dependencies

no code implementations • ICML 2017 • Eugene Vorontsov, Chiheb Trabelsi, Samuel Kadoury, Chris Pal

We find that hard constraints on orthogonality can negatively affect the speed of convergence and model performance.
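A common way to relax a hard constraint, in the spirit of the trade-off the paper investigates, is a soft Frobenius-norm penalty on the recurrent matrix. The sketch below uses assumed symbols and is not the paper's exact formulation:

```python
import numpy as np

def soft_orthogonality_penalty(W, strength=1e-3):
    """strength * ||W^T W - I||_F^2: nudges W toward orthogonality
    instead of hard-constraining it to the orthogonal manifold."""
    k = W.shape[1]
    gram = W.T @ W
    return strength * np.sum((gram - np.eye(k)) ** 2)

# Added to the task loss, this lets the optimizer trade orthogonality
# against fit; a hard constraint would forbid that trade-off entirely.
W = np.random.randn(64, 64) / np.sqrt(64)
print(soft_orthogonality_penalty(W))
```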

Deep Complex Networks

9 code implementations • ICLR 2018 • Chiheb Trabelsi, Olexa Bilaniuk, Ying Zhang, Dmitriy Serdyuk, Sandeep Subramanian, João Felipe Santos, Soroush Mehri, Negar Rostamzadeh, Yoshua Bengio, Christopher J. Pal

Despite their attractive properties and potential for opening up entirely new neural architectures, complex-valued deep neural networks have been marginalized due to the absence of the building blocks required to design such models.
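The most basic such building block is the complex product expressed through real operations; the same decomposition yields complex convolution when the matrix multiplies below are replaced by real convolutions. A minimal sketch with illustrative names, not the paper's code:

```python
import numpy as np

def complex_linear(x_re, x_im, W_re, W_im):
    """(W_re + i W_im)(x_re + i x_im), built entirely from real ops:
    real part = W_re x_re - W_im x_im, imag part = W_re x_im + W_im x_re."""
    out_re = x_re @ W_re.T - x_im @ W_im.T
    out_im = x_im @ W_re.T + x_re @ W_im.T
    return out_re, out_im

# Sanity check against numpy's native complex arithmetic.
x = np.random.randn(5, 8) + 1j * np.random.randn(5, 8)
W = np.random.randn(4, 8) + 1j * np.random.randn(4, 8)
re, im = complex_linear(x.real, x.imag, W.real, W.imag)
assert np.allclose(re + 1j * im, x @ W.T)
```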

Image Classification • Music Transcription • +1

On orthogonality and learning recurrent networks with long term dependencies

1 code implementation • 31 Jan 2017 • Eugene Vorontsov, Chiheb Trabelsi, Samuel Kadoury, Chris Pal

We find that hard constraints on orthogonality can negatively affect the speed of convergence and model performance.
