no code implementations • 21 Oct 2024 • Thomas George, Pierre Nodet, Alexis Bondu, Vincent Lemaire
Mislabeled examples are ubiquitous in real-world machine learning datasets, motivating the development of techniques for their automatic detection.
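A minimal sketch of one classical detection heuristic, cross-validated disagreement filtering (an illustrative baseline, not necessarily the technique developed in this paper; the toy data, the nearest-centroid classifier, and all thresholds are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

# toy data: two well-separated Gaussian blobs, then flip a few labels
# to simulate label noise (illustrative setup, not the paper's benchmark)
n = 100
X = np.concatenate([rng.normal(-2, 1, size=(n, 2)), rng.normal(2, 1, size=(n, 2))])
y = np.concatenate([np.zeros(n, int), np.ones(n, int)])
flipped = rng.choice(2 * n, size=10, replace=False)
y_noisy = y.copy()
y_noisy[flipped] ^= 1

def cv_flags(X, y, k=5):
    # hold each fold out, fit a nearest-centroid classifier on the rest,
    # and flag held-out points whose prediction disagrees with their label
    idx = rng.permutation(len(y))
    flags = np.zeros(len(y), bool)
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        centroids = np.stack(
            [X[train][y[train] == c].mean(axis=0) for c in (0, 1)]
        )
        dists = np.linalg.norm(X[fold][:, None] - centroids[None], axis=2)
        flags[fold] = dists.argmin(axis=1) != y[fold]
    return flags

flags = cv_flags(X, y_noisy)
print(flags[flipped].mean())  # most flagged points are the flipped ones
```

On well-separated data like this, the flagged set is dominated by the artificially flipped labels, with few false alarms on clean points.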
1 code implementation • 19 Sep 2022 • Thomas George, Guillaume Lajoie, Aristide Baratin
Among attempts at giving a theoretical account of the success of deep neural networks, a recent line of work has identified a so-called lazy training regime in which the network can be well approximated by its linearization around initialization.
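The lazy-training claim can be checked numerically: for a small parameter displacement around initialization, the network output stays close to its first-order Taylor expansion. A minimal NumPy sketch (the one-hidden-layer architecture, scales, and displacement size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def net(params, x):
    # one-hidden-layer network f(x) = v . tanh(W x)
    W, v = params
    return v @ np.tanh(W @ x)

def flatten(params):
    return np.concatenate([p.ravel() for p in params])

def unflatten(theta, shapes):
    out, i = [], 0
    for s in shapes:
        k = int(np.prod(s))
        out.append(theta[i:i + k].reshape(s))
        i += k
    return out

d, h = 3, 64
x = rng.normal(size=d)
W0 = rng.normal(size=(h, d)) / np.sqrt(d)
v0 = rng.normal(size=h) / np.sqrt(h)
shapes = [W0.shape, v0.shape]
theta0 = flatten([W0, v0])

# numerical gradient of the output w.r.t. all parameters at theta0
eps = 1e-6
f0 = net([W0, v0], x)
grad = np.array([
    (net(unflatten(theta0 + eps * e, shapes), x) - f0) / eps
    for e in np.eye(theta0.size)
])

# small parameter displacement, as produced by a few gradient steps
delta = 1e-3 * rng.normal(size=theta0.size)
f_true = net(unflatten(theta0 + delta, shapes), x)
f_lin = f0 + grad @ delta  # linearization around initialization

print(abs(f_true - f_lin))  # small: second-order terms are negligible here
```

The gap between `f_true` and `f_lin` is dominated by second-order terms in the displacement, which is what makes the linearized (NTK-style) description accurate in this regime.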
no code implementations • 7 Jan 2022 • Irene Li, Thomas George, Alexander Fabbri, Tammy Liao, Benjamin Chen, Rina Kawamura, Richard Zhou, Vanessa Yan, Swapnil Hingmire, Dragomir Radev
In this paper, we propose the educational resource discovery (ERD) pipeline that automates web resource discovery for novel domains.
1 code implementation • 16 Dec 2021 • Swapnil Hingmire, Irene Li, Rina Kawamura, Benjamin Chen, Alexander Fabbri, Xiangru Tang, Yixin Liu, Thomas George, Tammy Liao, Wai Pan Wong, Vanessa Yan, Richard Zhou, Girish K. Palshikar, Dragomir Radev
We propose CLICKER, a classification scheme for CL/NLP, based on the analysis of online lectures from 77 university courses on this subject.
no code implementations • 3 Jun 2021 • Timothée Lesort, Thomas George, Irina Rish
Our analysis and results shed light on the dynamics of the output layer in continual learning scenarios and suggest a way of selecting the best type of output layer for a given scenario.
no code implementations • 1 Jan 2021 • Thomas George
Fisher Information Matrices (FIM) and Neural Tangent Kernels (NTK) are useful tools in a range of applications related to neural networks.
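Both objects can be built from the same per-example parameter gradients: stacking them into a Jacobian J, the empirical NTK is J Jᵀ (a Gram matrix over examples) while the empirical Fisher for a regression model with unit output variance is Jᵀ J / n (a second moment over parameters), so they share the same non-zero spectrum up to the 1/n factor. A minimal sketch with a toy scalar model (the model itself is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(1)

def model(w, x):
    # toy nonlinear scalar model f(x) = tanh(w . x)
    return np.tanh(w @ x)

def param_grad(w, x):
    # closed-form gradient of f w.r.t. the parameters w
    return (1 - np.tanh(w @ x) ** 2) * x

n, d = 5, 3
X = rng.normal(size=(n, d))
w = rng.normal(size=d)

# Jacobian of the n outputs w.r.t. the d parameters
J = np.stack([param_grad(w, x) for x in X])  # (n, d)

# empirical NTK: pairwise inner products of per-example gradients
ntk = J @ J.T                                # (n, n)

# empirical Fisher: second moment of per-example gradients
fim = J.T @ J / n                            # (d, d)

# both are PSD and share their non-zero spectrum (up to the 1/n factor)
print(np.linalg.eigvalsh(ntk), np.linalg.eigvalsh(fim))
```

This duality is why the two objects often appear together: computing one implicitly gives access to the spectrum of the other.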
1 code implementation • NeurIPS Workshop DL-IG 2020 • Aristide Baratin, Thomas George, César Laurent, R. Devon Hjelm, Guillaume Lajoie, Pascal Vincent, Simon Lacoste-Julien
We approach the problem of implicit regularization in deep learning from a geometrical viewpoint.
1 code implementation • 22 Jun 2020 • César Laurent, Camille Ballas, Thomas George, Nicolas Ballas, Pascal Vincent
By removing individual parameters from deep neural networks, unstructured pruning methods aim to cut down memory footprint and computational cost while maintaining prediction accuracy.
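A minimal sketch of magnitude pruning, the standard unstructured baseline (an illustrative example, not necessarily the criterion studied in this paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def magnitude_prune(w, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest magnitude."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy(), np.ones_like(w, dtype=bool)
    # k-th smallest absolute value is the pruning threshold
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    mask = np.abs(w) > threshold  # keep only weights strictly above it
    return w * mask, mask

# toy weight matrix standing in for one layer of a network
w = rng.normal(size=(16, 16))
w_pruned, mask = magnitude_prune(w, 0.9)
print(mask.mean())  # fraction of weights kept
```

In practice the binary mask is stored alongside the weights and reapplied after each update step if the network is fine-tuned after pruning.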
no code implementations • NeurIPS 2018 • Thomas George, César Laurent, Xavier Bouthillier, Nicolas Ballas, Pascal Vincent
Optimization algorithms that leverage gradient covariance information, such as variants of natural gradient descent (Amari, 1998), offer the prospect of yielding more effective descent directions.
6 code implementations • 11 Jun 2018 • Thomas George, César Laurent, Xavier Bouthillier, Nicolas Ballas, Pascal Vincent
Optimization algorithms that leverage gradient covariance information, such as variants of natural gradient descent (Amari, 1998), offer the prospect of yielding more effective descent directions.
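The basic flavor of such methods is to precondition the gradient by a damped inverse of the empirical Fisher, i.e. the second moment of per-example gradients. The sketch below is an illustrative toy on linear regression, not the factorized approximation developed in this paper; the learning rate and damping constant are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# toy linear regression problem (illustrative, not the paper's setup)
n, d = 32, 4
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=n)
w = np.zeros(d)

# per-example gradients of the squared loss at w
residual = X @ w - y
per_example_grads = residual[:, None] * X          # (n, d)
grad = per_example_grads.mean(axis=0)

# empirical Fisher: second moment of per-example gradients
fisher = per_example_grads.T @ per_example_grads / n
damping = 1e-2  # Tikhonov damping for numerical stability

# natural-gradient step: precondition by the (damped) inverse Fisher
step = np.linalg.solve(fisher + damping * np.eye(d), grad)
w_new = w - 0.5 * step

loss = lambda w: 0.5 * np.mean((X @ w - y) ** 2)
print(loss(w), loss(w_new))  # the preconditioned step reduces the loss
```

Forming and inverting the full Fisher is only feasible at toy scale; the point of Kronecker-factored and related approximations is to make this preconditioning tractable for large networks.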