Search Results for author: Rémi Nahon

Found 5 papers, 1 papers with code

Debiasing surgeon: fantastic weights and how to find them

no code implementations21 Mar 2024 Rémi Nahon, Ivan Luiz De Moura Matos, Van-Tam Nguyen, Enzo Tartaglione

The emergence of algorithmic biases that can lead to unfair models is nowadays an ever-growing concern.

Enhanced EEG-Based Mental State Classification: A novel approach to eliminate data leakage and improve training optimization for Machine Learning

no code implementations14 Dec 2023 Maxime Girard, Rémi Nahon, Enzo Tartaglione, Van-Tam Nguyen

In this paper, we explore prior research and introduce a new methodology for classifying mental state levels based on EEG signals utilizing machine learning (ML).

EEG

Mining bias-target Alignment from Voronoi Cells

1 code implementation ICCV 2023 Rémi Nahon, Van-Tam Nguyen, Enzo Tartaglione

Despite significant research efforts, deep neural networks are still vulnerable to biases: this raises concerns about their fairness and limits their generalization.

Fairness

Optimized preprocessing and Tiny ML for Attention State Classification

no code implementations20 Mar 2023 Yinghao Wang, Rémi Nahon, Enzo Tartaglione, Pavlo Mozharovskyi, Van-Tam Nguyen

In this paper, we present a new approach to mental state classification from EEG signals by combining signal processing techniques and machine learning (ML) algorithms.

Classification Computational Efficiency +1

Improving tracking with a tracklet associator

no code implementations22 Apr 2022 Rémi Nahon, Guillaume-Alexandre Bilodeau, Gilles Pesant

In the second phase, we associate the previously constructed tracklets using a Belief Propagation Constraint Programming algorithm, where we propose various constraints that assign scores to each of the tracklets based on multiple characteristics, such as their dynamics or the distance between them in time and space.

Multiple Object Tracking
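The association idea described in the abstract above — scoring candidate tracklet pairs by their dynamics and their spatio-temporal distance — could be sketched as follows. This is a hedged toy illustration, not the paper's Belief Propagation Constraint Programming algorithm; the `Tracklet` class, `link_score` function, and the `max_gap`/`max_dist` thresholds are all hypothetical choices made for the example.

```python
from dataclasses import dataclass
import math

@dataclass
class Tracklet:
    # Detections as (frame, x, y) triples, ordered by frame number.
    detections: list

    @property
    def start(self):
        return self.detections[0]

    @property
    def end(self):
        return self.detections[-1]

def link_score(a: Tracklet, b: Tracklet, max_gap=10, max_dist=50.0):
    """Score how plausible it is that tracklet b continues tracklet a,
    combining the temporal gap and the spatial distance between a's
    last detection and b's first detection (both thresholds are
    illustrative, not taken from the paper)."""
    t_end, x1, y1 = a.end
    t_start, x2, y2 = b.start
    gap = t_start - t_end
    if gap <= 0 or gap > max_gap:
        return 0.0  # b must start after a ends, within the allowed gap
    dist = math.hypot(x2 - x1, y2 - y1)
    if dist > max_dist:
        return 0.0  # too far away in space to be the same object
    # Higher score for smaller temporal gaps and smaller displacements.
    return (1 - gap / max_gap) * (1 - dist / max_dist)

# Toy example: two fragments of one trajectory, plus a distant tracklet.
t1 = Tracklet([(0, 0.0, 0.0), (1, 1.0, 0.0)])
t2 = Tracklet([(3, 3.0, 0.0), (4, 4.0, 0.0)])
t3 = Tracklet([(3, 100.0, 100.0)])
print(link_score(t1, t2) > link_score(t1, t3))  # the nearby continuation scores higher
```

In a full tracker, such pairwise scores would feed a global association step (the paper uses constraint programming with belief propagation); here a greedy matching over the scores would already merge `t1` with `t2` rather than `t3`.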
