Search Results for author: David Macêdo

Found 21 papers, 15 papers with code

Towards Robust Deep Learning using Entropic Losses

2 code implementations • 6 Aug 2022 • David Macêdo

This thesis tackles the challenging out-of-distribution detection task by proposing novel loss functions and detection scores.

Out-of-Distribution Detection

Distinction Maximization Loss: Efficiently Improving Out-of-Distribution Detection and Uncertainty Estimation by Replacing the Loss and Calibrating

1 code implementation • 12 May 2022 • David Macêdo, Cleber Zanchettin, Teresa Ludermir

In this paper, we propose training deterministic neural networks using our DisMax loss, which works as a drop-in replacement for the usual SoftMax loss (i.e., the combination of the linear output layer, the SoftMax activation, and the cross-entropy loss).

Out-of-Distribution Detection
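
For orientation, here is a minimal PyTorch sketch of what "the SoftMax loss" denotes in these papers (the final linear layer combined with the SoftMax activation and cross-entropy) and why a loss such as DisMax can be a drop-in replacement. This is an illustration only, not the paper's official code; the class name is invented for the example.

```python
# Minimal sketch of the standard "SoftMax loss" head that DisMax-style
# losses replace. Illustrative only; see the paper's repository for the
# actual DisMax implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftMaxLossHead(nn.Module):
    """Standard classification head: linear logits + SoftMax + cross-entropy."""
    def __init__(self, num_features, num_classes):
        super().__init__()
        self.linear = nn.Linear(num_features, num_classes)  # linear output layer

    def forward(self, features, targets):
        logits = self.linear(features)
        # F.cross_entropy applies log-softmax + negative log-likelihood,
        # i.e., the SoftMax activation followed by the cross-entropy loss.
        return F.cross_entropy(logits, targets)

# A drop-in replacement keeps the same interface (features in, scalar loss
# out), so the backbone, optimizer, and hyperparameters stay untouched;
# only this head is swapped.
```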

Multi-Cue Adaptive Emotion Recognition Network

no code implementations • 3 Nov 2021 • Willams Costa, David Macêdo, Cleber Zanchettin, Lucas S. Figueiredo, Veronica Teichrieb

Expressing and identifying emotions through facial and physical expressions is a significant part of social interaction.

Emotion Recognition

Enhanced Isotropy Maximization Loss: Seamless and High-Performance Out-of-Distribution Detection Simply Replacing the SoftMax Loss

1 code implementation • 30 May 2021 • David Macêdo, Teresa Ludermir

The IsoMax loss works as a drop-in replacement for the SoftMax loss (i.e., the combination of the output linear layer, the SoftMax activation, and the cross-entropy loss) because swapping the SoftMax loss with the IsoMax loss requires no changes to the model's architecture or to the training procedures and hyperparameters.

Out-of-Distribution Detection
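
A minimal sketch in the spirit of the IsoMax idea follows: class logits are (negative) distances to learnable prototypes rather than affine scores, while training still uses plain cross-entropy, so nothing else in the pipeline changes. This is a simplified illustration under my reading of the papers; the actual IsoMax/enhanced IsoMax losses include further details (e.g., an entropic scale) described there.

```python
# Simplified distance-based (isotropic) head, in the spirit of IsoMax.
# Not the official implementation; details differ from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistanceBasedHead(nn.Module):
    def __init__(self, num_features, num_classes):
        super().__init__()
        # one learnable prototype per class
        self.prototypes = nn.Parameter(torch.zeros(num_classes, num_features))

    def forward(self, features, targets=None):
        # Euclidean distance from each feature vector to each prototype
        distances = torch.cdist(features, self.prototypes)  # (batch, classes)
        logits = -distances  # isotropic, distance-based logits
        if targets is None:
            return logits
        # still plain cross-entropy minimization, so training is unchanged
        return F.cross_entropy(logits, targets)
```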

Improving Entropic Out-of-Distribution Detection using Isometric Distances and the Minimum Distance Score

no code implementations • NeurIPS 2021 • David Macêdo, Teresa Ludermir

The entropic out-of-distribution detection solution comprises the IsoMax loss for training and the entropic score for out-of-distribution detection.

Out-of-Distribution Detection
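
The two detection scores named in this entry can be sketched as follows, under my reading of the papers: the entropic score is based on the entropy of the predicted probabilities (written here with a sign flip so that higher means more in-distribution), and the minimum distance score uses the distance to the nearest class prototype. Exact definitions are in the papers; this is illustrative only.

```python
# Sketch of the entropic score and a minimum distance score for
# out-of-distribution detection. Illustrative; not the official code.
import torch
import torch.nn.functional as F

def entropic_score(logits):
    """Higher score = lower predictive entropy = more in-distribution."""
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=1)
    return -entropy

def minimum_distance_score(features, prototypes):
    """Higher score = closer to some class prototype."""
    distances = torch.cdist(features, prototypes)
    return -distances.min(dim=1).values

# Examples are flagged as out-of-distribution when the chosen score
# falls below a threshold selected on validation data.
```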

Training Aware Sigmoidal Optimizer

no code implementations • 17 Feb 2021 • David Macêdo, Pedro Dreyer, Teresa Ludermir, Cleber Zanchettin

We compared the proposed approach with commonly used adaptive optimizers such as Adam, RMSProp, and Adagrad.

KutralNet: A Portable Deep Learning Model for Fire Recognition

1 code implementation • 16 Aug 2020 • Angel Ayala, Bruno Fernandes, Francisco Cruz, David Macêdo, Adriano L. I. Oliveira, Cleber Zanchettin

The experiments show that our model maintains high accuracy while substantially reducing the number of parameters and FLOPs.

Entropic Out-of-Distribution Detection: Seamless Detection of Unknown Examples

2 code implementations • 7 Jun 2020 • David Macêdo, Tsang Ing Ren, Cleber Zanchettin, Adriano L. I. Oliveira, Teresa Ludermir

In this paper, we argue that the unsatisfactory out-of-distribution (OOD) detection performance of neural networks is mainly due to the anisotropy of the SoftMax loss and its propensity to produce low-entropy probability distributions, in disagreement with the principle of maximum entropy.

General Classification Metric Learning +2
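
As a tiny numeric illustration of the "low entropy" point above: a network trained with the SoftMax loss typically outputs near one-hot probabilities, whose entropy is far below that of the maximum-entropy (uniform) distribution over the classes. The numbers below are purely illustrative.

```python
# Entropy of a confident (near one-hot) SoftMax output vs. the uniform,
# maximum-entropy distribution over 10 classes. Illustration only.
import torch
import torch.nn.functional as F

def entropy(probs):
    return -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=-1)

num_classes = 10
confident = F.softmax(torch.tensor([10.0] + [0.0] * (num_classes - 1)), dim=-1)
uniform = torch.full((num_classes,), 1.0 / num_classes)

print(entropy(confident))  # close to 0 nats
print(entropy(uniform))    # log(10) ≈ 2.30 nats, the maximum possible
```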

Distantly-Supervised Neural Relation Extraction with Side Information using BERT

1 code implementation • 29 Apr 2020 • Johny Moreira, Chaina Oliveira, David Macêdo, Cleber Zanchettin, Luciano Barbosa

Considering that this method outperformed state-of-the-art baselines, in this paper we propose an approach related to RESIDE that also uses additional side information but simplifies the sentence encoding with BERT embeddings.

Relation Relation Extraction +1
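
A minimal sketch of encoding a sentence with BERT embeddings via the Hugging Face transformers library is shown below. The model name and the pooling choice ([CLS] token) are assumptions made for illustration; the paper's actual encoding pipeline may differ.

```python
# Obtaining a fixed-size BERT sentence embedding. Illustrative only.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentence = "Barack Obama was born in Hawaii."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# [CLS] token representation used as the sentence embedding
sentence_embedding = outputs.last_hidden_state[:, 0]  # shape: (1, 768)
```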

AM-MobileNet1D: A Portable Model for Speaker Recognition

3 code implementations • 31 Mar 2020 • João Antônio Chagas Nunes, David Macêdo, Cleber Zanchettin

To address this demand, we propose a portable model called Additive Margin MobileNet1D (AM-MobileNet1D) for Speaker Identification on mobile devices.

Speaker Identification Speaker Recognition

Squeezed Deep 6DoF Object Detection Using Knowledge Distillation

2 code implementations • 30 Mar 2020 • Heitor Felix, Walber M. Rodrigues, David Macêdo, Francisco Simões, Adriano L. I. Oliveira, Veronica Teichrieb, Cleber Zanchettin

We used the LINEMOD dataset to evaluate the proposed method, and the experimental results show that it reduces the memory requirement by almost 99% compared to the original architecture, at the cost of halving the accuracy on one of the metrics.

Knowledge Distillation Object +2

Isotropy Maximization Loss and Entropic Score: Accurate, Fast, Efficient, Scalable, and Turnkey Neural Networks Out-of-Distribution Detection Based on The Principle of Maximum Entropy

1 code implementation • 15 Aug 2019 • David Macêdo, Tsang Ing Ren, Cleber Zanchettin, Adriano L. I. Oliveira, Teresa Ludermir

Consequently, we propose IsoMax, a loss that is isotropic (distance-based) and produces high-entropy (low-confidence) posterior probability distributions while still relying on cross-entropy minimization.

Data Augmentation Metric Learning +2

Spatial-Temporal Graph Convolutional Networks for Sign Language Recognition

no code implementations • 31 Jan 2019 • Cleison Correia de Amorim, David Macêdo, Cleber Zanchettin

The recognition of sign language is a challenging task that plays an important role in society by facilitating communication for deaf people.

Sign Language Recognition

Squeezed Very Deep Convolutional Neural Networks for Text Classification

1 code implementation • 28 Jan 2019 • Andréa B. Duque, Luã Lázaro J. Santos, David Macêdo, Cleber Zanchettin

Most of the research in convolutional neural networks has focused on increasing network depth to improve accuracy, resulting in a massive number of parameters that prevents the trained networks from being deployed on platforms with memory and processing constraints.

General Classification Sentiment Analysis +2

Heartbeat Anomaly Detection using Adversarial Oversampling

1 code implementation • 28 Jan 2019 • Jefferson L. P. Lima, David Macêdo, Cleber Zanchettin

As in most health-related problems, class imbalance is prevalent in this task and degrades the performance of automated solutions.

Anomaly Detection General Classification

Additive Margin SincNet for Speaker Recognition

1 code implementation • 28 Jan 2019 • João Antônio Chagas Nunes, David Macêdo, Cleber Zanchettin

The SoftMax loss is widely used in deep learning methods, but it is not the best choice for all kinds of problems.

Speaker Recognition
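
The additive margin idea referenced in this title and in AM-MobileNet1D above can be sketched as the generic AM-Softmax technique: logits are cosine similarities between L2-normalized features and class weights, with a margin subtracted from the target class before scaling. The values of s and m below are illustrative, not the paper's settings, and this is not the paper's code.

```python
# Generic additive margin SoftMax (AM-Softmax) head. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AMSoftmaxHead(nn.Module):
    def __init__(self, num_features, num_classes, s=30.0, m=0.35):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, num_features))
        self.s, self.m = s, m

    def forward(self, features, targets):
        # cosine similarity between normalized features and class weights
        cosine = F.linear(F.normalize(features), F.normalize(self.weight))
        # subtract the margin m only from the target-class similarity
        margin = F.one_hot(targets, cosine.size(1)) * self.m
        logits = self.s * (cosine - margin)
        return F.cross_entropy(logits, targets)
```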

Simple Fast Convolutional Feature Learning

no code implementations • ICLR 2018 • David Macêdo, Cleber Zanchettin, Teresa Ludermir

The quality of the features used in visual recognition is of fundamental importance for the overall system.

Enhancing Batch Normalized Convolutional Networks using Displaced Rectifier Linear Units: A Systematic Comparative Study

no code implementations • ICLR 2018 • David Macêdo, Cleber Zanchettin, Adriano L. I. Oliveira, Teresa Ludermir

Moreover, statistically significant performance assessments (p < 0.05) showed that DReLU improved the test accuracy over ReLU in all scenarios.
