no code implementations • 2 Oct 2023 • Haozhe Sun, Isabelle Guyon, Felix Mohr, Hedi Tabia
It has become mainstream in computer vision and other machine learning domains to reuse, as preprocessors, backbone networks pre-trained on large datasets.
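The workflow described above can be sketched as follows. This is a minimal toy illustration, not the paper's method: the "backbone" is a frozen random projection standing in for a real pre-trained network (e.g. an ImageNet ResNet), the labels are generated so that a linear head on the extracted features suffices, and only that small head is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pre-trained backbone": a frozen random projection with ReLU.
# In practice this would be a network pre-trained on a large dataset;
# the key point is that its weights are never updated downstream.
W_backbone = rng.standard_normal((64, 16))

def backbone(x):
    return np.maximum(x @ W_backbone, 0.0)  # frozen feature extractor

# Toy downstream task: labels are (by construction) linearly separable
# in the backbone's feature space -- an assumption for this sketch.
X = rng.standard_normal((200, 64))
w_true = rng.standard_normal(16)
y = (backbone(X) @ w_true > 0).astype(float)

# Only the small linear head is trained, via logistic regression.
feats = backbone(X)
w, b, lr = np.zeros(16), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid predictions
    w -= lr * feats.T @ (p - y) / len(y)
    b -= lr * np.mean(p - y)

acc = np.mean(((1.0 / (1.0 + np.exp(-(feats @ w + b)))) > 0.5) == (y == 1))
print(f"head training accuracy: {acc:.2f}")
```

Freezing the backbone keeps the downstream training cheap: only 17 parameters are learned here, while the 1024-parameter "backbone" is reused as-is.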
no code implementations • 2 Oct 2023 • Haozhe Sun, Isabelle Guyon
We review the notion of modularity in deep learning around three axes: data, task, and model, which characterize the life cycle of deep learning.
3 code implementations • NeurIPS 2022 • Ihsan Ullah, Dustin Carrión-Ojeda, Sergio Escalera, Isabelle Guyon, Mike Huisman, Felix Mohr, Jan N van Rijn, Haozhe Sun, Joaquin Vanschoren, Phan Anh Vu
We introduce Meta-Album, an image classification meta-dataset designed to facilitate few-shot learning, transfer learning, and meta-learning, among other tasks.
no code implementations • 15 Jun 2022 • Adrian El Baz, Ihsan Ullah, Edesio Alcobaça, André C. P. L. F. Carvalho, Hong Chen, Fabio Ferreira, Henry Gouk, Chaoyu Guan, Isabelle Guyon, Timothy Hospedales, Shell Hu, Mike Huisman, Frank Hutter, Zhengying Liu, Felix Mohr, Ekrem Öztürk, Jan N. van Rijn, Haozhe Sun, Xin Wang, Wenwu Zhu
Although deep neural networks are capable of achieving performance superior to humans on various tasks, they are notorious for requiring large amounts of data and computing resources, restricting their success to domains where such resources are available.
1 code implementation • 4 Feb 2022 • Joseph Pedersen, Rafael Muñoz-Gómez, Jiangnan Huang, Haozhe Sun, Wei-Wei Tu, Isabelle Guyon
In both cases, classification accuracy or error rate is used as the metric: utility is evaluated with the classification accuracy of the Defender model, while privacy is evaluated with the membership prediction error of a so-called "Leave-Two-Unlabeled" (LTU) Attacker, which has access to all of the Defender and Reserved data except for the membership label of one sample from each.
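The two metrics described above can be sketched as simple scoring functions. The function names and inputs below are assumptions for illustration, not the challenge's actual API: utility rewards a Defender that still classifies well, and privacy rewards a Defender whose membership information the attacker fails to recover.

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions matching the ground truth.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def utility(defender_true, defender_pred):
    # Utility metric (assumed form): classification accuracy of the
    # Defender model on its task. Higher is better.
    return accuracy(defender_true, defender_pred)

def privacy(membership_true, attacker_pred):
    # Privacy metric (assumed form): membership prediction *error* of
    # the LTU Attacker. Higher means the attacker guesses worse.
    return 1.0 - accuracy(membership_true, attacker_pred)

u = utility([0, 1, 1, 0], [0, 1, 0, 0])      # 3 of 4 correct
p = privacy([1, 0, 1, 0], [1, 1, 1, 1])      # attacker right half the time
print(u, p)
```

A perfectly private Defender would drive the attacker's accuracy down to chance level (0.5 for balanced membership labels), giving a privacy score of 0.5, the best achievable when the attacker can always guess randomly.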
2 code implementations • 17 Jan 2022 • Haozhe Sun, Wei-Wei Tu, Isabelle Guyon
We introduce OmniPrint, a synthetic data generator of isolated printed characters, geared toward machine learning research.
1 code implementation • 4 Oct 2021 • Zhaoyang Zhu, Haozhe Sun, Chi Zhang
Adam is widely used to train neural networks.
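For reference, the standard Adam update (exponential moving averages of the gradient and its square, with bias correction) can be written in a few lines; this is a generic textbook sketch minimizing a toy quadratic, not the code from the paper above.

```python
import math

def adam_step(x, g, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    # One Adam update for a scalar parameter x with gradient g at step t.
    m = beta1 * m + (1 - beta1) * g        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * g * g    # second-moment estimate
    m_hat = m / (1 - beta1 ** t)           # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    x = x - lr * m_hat / (math.sqrt(v_hat) + eps)
    return x, m, v

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x, m, v = 0.0, 0.0, 0.0
for t in range(1, 1001):
    g = 2.0 * (x - 3.0)
    x, m, v = adam_step(x, g, m, v, t)
print(f"x after 1000 steps: {x:.3f}")
```

Because the update is scaled by the square root of the second-moment estimate, the effective step size adapts per parameter, which is a large part of why Adam works well out of the box for neural network training.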