Search Results for author: Thomas George

Found 10 papers, 5 papers with code

Mislabeled examples detection viewed as probing machine learning models: concepts, survey and extensive benchmark

no code implementations • 21 Oct 2024 • Thomas George, Pierre Nodet, Alexis Bondu, Vincent Lemaire

Mislabeled examples are ubiquitous in real-world machine learning datasets, motivating the development of techniques for their automatic detection.
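The detectors surveyed in this line of work probe a trained model's behaviour on each training example. As a hedged illustration of one common probe (not the paper's benchmark), the sketch below flags examples whose per-example loss under an already-trained model is unusually high; `model` and `loader` are placeholder names.

```python
# Minimal sketch of a loss-based probe for suspicious (possibly mislabeled) examples.
# Assumes a trained classifier `model` and a DataLoader `loader` of (x, y) pairs.
import torch
import torch.nn.functional as F

def flag_suspicious_examples(model, loader, quantile=0.95, device="cpu"):
    """Return indices of examples whose per-example loss exceeds the given quantile."""
    model.eval()
    losses = []
    with torch.no_grad():
        for x, y in loader:
            logits = model(x.to(device))
            # per-example cross-entropy, no reduction across the batch
            losses.append(F.cross_entropy(logits, y.to(device), reduction="none").cpu())
    losses = torch.cat(losses)
    threshold = torch.quantile(losses, quantile)
    return (losses > threshold).nonzero(as_tuple=True)[0], losses
```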

Lazy vs hasty: linearization in deep networks impacts learning schedule based on example difficulty

1 code implementation • 19 Sep 2022 • Thomas George, Guillaume Lajoie, Aristide Baratin

Among attempts at giving a theoretical account of the success of deep neural networks, a recent line of work has identified a so-called lazy training regime in which the network can be well approximated by its linearization around initialization.
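A minimal sketch of that linearization, assuming PyTorch ≥ 2.0 for torch.func and a toy placeholder model: the network is replaced by its first-order Taylor expansion around the initial parameters, an approximation that is accurate exactly when training stays "lazy".

```python
# Lazy (linearized) approximation around initialization:
# f_lin(x; theta) = f(x; theta0) + J_theta f(x; theta0) @ (theta - theta0)
import torch
from torch.func import functional_call, jvp

model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.Tanh(), torch.nn.Linear(8, 2))
theta0 = {k: v.detach().clone() for k, v in model.named_parameters()}
x = torch.randn(3, 4)

def f(params):
    return functional_call(model, params, (x,))

# Pretend training displaced the parameters by a small amount.
delta = {k: 0.01 * torch.randn_like(v) for k, v in theta0.items()}
theta = {k: theta0[k] + delta[k] for k in theta0}

out0, jvp_out = jvp(f, (theta0,), (delta,))   # f(theta0) and J(theta0) @ delta
f_lin = out0 + jvp_out                        # linearized prediction at theta
f_true = f(theta)                             # exact prediction at theta
print((f_lin - f_true).abs().max())           # small when the network is in the lazy regime
```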

Continual Learning in Deep Networks: an Analysis of the Last Layer

no code implementations • 3 Jun 2021 • Timothée Lesort, Thomas George, Irina Rish

Our analysis and results shed light on the dynamics of the output layer in continual learning scenarios and suggest a way of selecting the best type of output layer for a given scenario.
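As a hedged illustration of the design space only (the specific variants compared in the paper are not restated here), the sketch below contrasts two output-layer types commonly considered in continual learning: a plain linear head and a cosine-similarity head whose logits depend on feature directions rather than norms.

```python
# Two illustrative output-layer choices for a continual-learning classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearHead(nn.Module):
    """Standard affine output layer."""
    def __init__(self, dim, n_classes):
        super().__init__()
        self.fc = nn.Linear(dim, n_classes)
    def forward(self, h):
        return self.fc(h)

class CosineHead(nn.Module):
    """Cosine-similarity output layer: logits are scaled cosine similarities."""
    def __init__(self, dim, n_classes, scale=10.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, dim))
        self.scale = scale
    def forward(self, h):
        return self.scale * F.linear(F.normalize(h, dim=-1),
                                     F.normalize(self.weight, dim=-1))
```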

Continual Learning

NNGeometry: Easy and Fast Fisher Information Matrices and Neural Tangent Kernels in PyTorch

no code implementations • 1 Jan 2021 • Thomas George

Fisher Information Matrices (FIM) and Neural Tangent Kernels (NTK) are useful tools in a number of diverse applications related to neural networks.
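The sketch below is not NNGeometry's API; it is a plain-PyTorch illustration of the simplest such object, a diagonal empirical Fisher, to show the kind of quantity the library computes efficiently (it also supports richer representations such as Kronecker-factored ones). `model` and `loader` are placeholders.

```python
# Diagonal empirical Fisher: average of squared per-example gradients of the loss.
import torch
import torch.nn.functional as F

def diagonal_empirical_fisher(model, loader, device="cpu"):
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_examples = 0
    model.eval()
    for x, y in loader:
        for xi, yi in zip(x, y):  # per-example gradients (slow but explicit)
            model.zero_grad()
            loss = F.cross_entropy(model(xi[None].to(device)), yi[None].to(device))
            loss.backward()
            for n, p in model.named_parameters():
                if p.grad is not None:
                    fisher[n] += p.grad.detach() ** 2
            n_examples += 1
    return {n: f / n_examples for n, f in fisher.items()}
```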

Revisiting Loss Modelling for Unstructured Pruning

1 code implementation • 22 Jun 2020 • César Laurent, Camille Ballas, Thomas George, Nicolas Ballas, Pascal Vincent

By removing parameters from deep neural networks, unstructured pruning methods aim at cutting down memory footprint and computational cost, while maintaining prediction accuracy.
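As a hedged illustration of unstructured pruning in general (not the loss-modelling criterion studied in this paper), the sketch below performs global magnitude pruning in PyTorch; the model is a placeholder.

```python
# Global magnitude pruning: zero out the `sparsity` fraction of weights
# with the smallest absolute value across all weight matrices.
import torch

def magnitude_prune(model, sparsity=0.9):
    weights = [p for p in model.parameters() if p.dim() > 1]  # skip biases
    all_magnitudes = torch.cat([w.detach().abs().flatten() for w in weights])
    k = max(1, int(sparsity * all_magnitudes.numel()))
    threshold = all_magnitudes.kthvalue(k).values
    masks = []
    with torch.no_grad():
        for w in weights:
            mask = (w.abs() > threshold).to(w.dtype)
            w.mul_(mask)          # zero the pruned weights in place
            masks.append(mask)    # keep masks to re-apply after optimizer steps
    return masks
```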

Fast Approximate Natural Gradient Descent in a Kronecker-factored Eigenbasis

6 code implementations • NeurIPS 2018 (arXiv 11 Jun 2018) • Thomas George, César Laurent, Xavier Bouthillier, Nicolas Ballas, Pascal Vincent

Optimization algorithms that leverage gradient covariance information, such as variants of natural gradient descent (Amari, 1998), offer the prospect of yielding more effective descent directions.
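On a model small enough to form the Fisher explicitly, the natural-gradient direction can be computed exactly; the hedged sketch below does so only to illustrate the update F⁻¹∇L that Kronecker-factored approximations such as this paper's make tractable at scale. All names are placeholders.

```python
# Tiny-scale natural-gradient step: delta = (F + damping*I)^{-1} grad,
# with F the exact empirical Fisher. At realistic scale F cannot be
# formed or inverted, motivating Kronecker-factored approximations.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(5, 3)                       # tiny model: Fisher is only 18x18
x, y = torch.randn(64, 5), torch.randint(0, 3, (64,))

params = list(model.parameters())
n_params = sum(p.numel() for p in params)

# Empirical Fisher: average of per-example gradient outer products.
fisher = torch.zeros(n_params, n_params)
for xi, yi in zip(x, y):
    model.zero_grad()
    F.cross_entropy(model(xi[None]), yi[None]).backward()
    g = torch.cat([p.grad.flatten() for p in params])
    fisher += torch.outer(g, g) / len(x)

# Full-batch gradient, then the damped natural-gradient direction.
model.zero_grad()
F.cross_entropy(model(x), y).backward()
grad = torch.cat([p.grad.flatten() for p in params])
damping = 1e-3
delta = torch.linalg.solve(fisher + damping * torch.eye(n_params), grad)
print(delta.norm())  # descent direction rescaled by local curvature
```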
