no code implementations • 4 Apr 2016 • Valentina Zantedeschi, Rémi Emonet, Marc Sebban
Many theoretical results in the machine learning domain stand only for functions that are Lipschitz continuous.
no code implementations • CVPR 2016 • Valentina Zantedeschi, Rémi Emonet, Marc Sebban
Over the past ten years, metric learning has enabled improvements to the many machine learning approaches that rely on distances or similarities.
no code implementations • NeurIPS 2016 • Valentina Zantedeschi, Rémi Emonet, Marc Sebban
During the past few years, the machine learning community has devoted considerable attention to developing new methods for learning from weakly labeled data.
no code implementations • 1 Mar 2017 • Valentina Zantedeschi, Rémi Emonet, Marc Sebban
Thanks to their ability to capture non-linearities in the data and to scale to large training sets, local Support Vector Machines (SVMs) have received special attention over the past decade.
no code implementations • 21 Jul 2017 • Valentina Zantedeschi, Maria-Irina Nicolae, Ambrish Rawat
Following the recent adoption of deep neural networks (DNN) across a wide range of applications, adversarial attacks against these models have proven to be an indisputable threat.
5 code implementations • 3 Jul 2018 • Maria-Irina Nicolae, Mathieu Sinn, Minh Ngoc Tran, Beat Buesser, Ambrish Rawat, Martin Wistuba, Valentina Zantedeschi, Nathalie Baracaldo, Bryant Chen, Heiko Ludwig, Ian M. Molloy, Ben Edwards
Defending Machine Learning models involves certifying and verifying model robustness and model hardening with approaches such as pre-processing inputs, augmenting training data with adversarial samples, and leveraging runtime detection methods to flag any inputs that might have been modified by an adversary.
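The adversarial samples mentioned here can be generated by gradient-based attacks. As a minimal, hypothetical sketch (not the toolbox's API; the model and numbers are made up for illustration), the classic one-step fast gradient sign method (FGSM) on a hand-built logistic model looks like:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """One-step fast gradient sign attack on a logistic model.
    For the cross-entropy loss, dL/dx = (sigmoid(w.x + b) - y) * w,
    and FGSM moves the input eps along the sign of that gradient."""
    grad = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad)

w, b = np.array([2.0, -1.0]), 0.0
x = np.array([0.5, 0.2])                 # clean input, classified positive
x_adv = fgsm_attack(x, 1.0, w, b, eps=0.6)
# the perturbed input crosses the decision boundary: w @ x_adv + b < 0
```

Training on such perturbed inputs (adversarial training) is one of the hardening strategies the abstract lists.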
1 code implementation • 24 Jan 2019 • Valentina Zantedeschi, Aurélien Bellet, Marc Tommasi
We consider the fully decentralized machine learning scenario where many users with personal datasets collaborate to learn models through local peer-to-peer exchanges, without a central coordinator.
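A basic primitive behind such peer-to-peer schemes is gossip averaging; the sketch below is a generic illustration (not the paper's algorithm) in which two linked peers repeatedly exchange their values and both keep the average, so the sum is preserved and all connected users converge to the global mean without a central coordinator:

```python
# Each user holds one local parameter; edges are peer-to-peer links.
values = [0.0, 4.0, 8.0]          # local parameters of three users
edges = [(0, 1), (1, 2)]          # a connected path graph

for _ in range(60):
    for i, j in edges:
        # pairwise gossip: both peers keep the average of their values
        avg = (values[i] + values[j]) / 2.0
        values[i] = values[j] = avg
# every user's value is now (numerically) the global mean, 4.0
```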
no code implementations • 14 Jun 2019 • Léo Gautheron, Pascal Germain, Amaury Habrard, Emilie Morvant, Marc Sebban, Valentina Zantedeschi
Unlike state-of-the-art Multiple Kernel Learning techniques that make use of a pre-computed dictionary of kernel functions to select from, at each iteration we fit a kernel by approximating it as a weighted sum of Random Fourier Features (RFF) and by optimizing their barycenter.
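The Random Fourier Features mentioned above approximate a shift-invariant kernel by an explicit finite-dimensional feature map. As a generic sketch of that building block (Rahimi–Rechts-style RFF for the RBF kernel, not the paper's boosting-style barycenter optimization):

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 5, 2000                     # input dimension, number of random features
sigma = 1.0

# Frequencies sampled from the Fourier transform of the RBF kernel
# k(x, y) = exp(-||x - y||^2 / (2 sigma^2)), plus random phases.
W = rng.normal(0.0, 1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2 * np.pi, size=D)

def phi(x):
    """Explicit feature map: phi(x) . phi(y) approximates k(x, y)."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))
approx = phi(x) @ phi(y)
# approx -> exact as D grows (Monte Carlo error ~ 1/sqrt(D))
```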
1 code implementation • 5 Nov 2019 • Valentina Zantedeschi, Fabrizio Falasca, Alyson Douglas, Richard Strange, Matt J. Kusner, Duncan Watson-Parris
One of the greatest sources of uncertainty in future climate projections comes from limitations in modelling clouds and in understanding how different cloud types interact with the climate system.
no code implementations • 28 Sep 2020 • Valentina Zantedeschi, Matt Kusner, Vlad Niculae
In this work we derive a novel sparse relaxation for binary tree learning.
1 code implementation • 9 Oct 2020 • Valentina Zantedeschi, Matt J. Kusner, Vlad Niculae
We address the problem of learning binary decision trees that partition data for some downstream task.
no code implementations • 17 Dec 2020 • Christian Schroeder de Witt, Catherine Tong, Valentina Zantedeschi, Daniele De Martini, Freddie Kalaitzis, Matthew Chantry, Duncan Watson-Parris, Piotr Bilinski
Extreme precipitation events, such as violent rainfall and hail storms, routinely ravage economies and livelihoods around the developing world.
1 code implementation • NeurIPS 2021 • Valentina Zantedeschi, Paul Viallard, Emilie Morvant, Rémi Emonet, Amaury Habrard, Pascal Germain, Benjamin Guedj
We investigate a stochastic counterpart of majority votes over finite ensembles of classifiers, and study its generalization properties.
1 code implementation • 4 Nov 2021 • Vít Růžička, Anna Vaughan, Daniele De Martini, James Fulton, Valentina Salvatelli, Chris Bridges, Gonzalo Mateo-Garcia, Valentina Zantedeschi
In this paper, we introduce RaVAEn, a lightweight, unsupervised approach for change detection in satellite data based on Variational Auto-Encoders (VAEs) with the specific purpose of on-board deployment.
1 code implementation • 9 Jun 2022 • Felix Biggs, Valentina Zantedeschi, Benjamin Guedj
We study the generalisation properties of majority voting on finite ensembles of classifiers, proving margin-based generalisation bounds via the PAC-Bayes theory.
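A tiny, self-contained illustration (all values made up) of the quantities such analyses manipulate: the rho-weighted majority-vote margin on one example, and the classical first-order link to the Gibbs (randomized) classifier:

```python
import numpy as np

votes = np.array([+1, +1, -1, +1])    # predictions of 4 voters on one input
rho = np.array([0.4, 0.3, 0.2, 0.1])  # posterior weights over voters, sum to 1
y = +1                                # true label

margin = y * np.sum(rho * votes)      # in [-1, 1]; > 0 means the vote is correct
mv_correct = margin > 0
gibbs_error = np.sum(rho * (votes != y))  # error prob. of one rho-sampled voter
# first-order bound (pointwise): 1[margin <= 0] <= 2 * gibbs_error,
# since a wrong majority vote implies at least half the rho-mass errs
```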
no code implementations • 27 Oct 2022 • Andrew J. Wren, Pasquale Minervini, Luca Franceschi, Valentina Zantedeschi
Recently continuous relaxations have been proposed in order to learn Directed Acyclic Graphs (DAGs) from data by backpropagation, instead of using combinatorial optimization.
1 code implementation • 27 Jan 2023 • Valentina Zantedeschi, Luca Franceschi, Jean Kaddour, Matt J. Kusner, Vlad Niculae
We propose a continuous optimization framework for discovering a latent directed acyclic graph (DAG) from observational data.
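One well-known ingredient of continuous DAG learning, sketched here generically (this is the NOTEARS-style characterization, not necessarily this paper's parameterization): the smooth function h(W) = tr(exp(W∘W)) − d vanishes exactly when the weighted graph W has no directed cycle, so it can serve as a differentiable acyclicity penalty:

```python
import math
import numpy as np

def acyclicity(W, terms=20):
    """h(W) = tr(exp(W*W)) - d via a truncated power series.
    Zero iff the weighted adjacency matrix W is acyclic."""
    A = W * W                        # elementwise square, nonnegative weights
    d = A.shape[0]
    h, P = 0.0, np.eye(d)
    for k in range(terms):           # tr(exp(A)) = sum_k tr(A^k) / k!
        h += np.trace(P) / math.factorial(k)
        P = P @ A
    return h - d

dag = np.array([[0.0, 1.5], [0.0, 0.0]])   # 0 -> 1, acyclic
cyc = np.array([[0.0, 1.5], [0.7, 0.0]])   # 0 -> 1 -> 0, a directed cycle
# acyclicity(dag) == 0, acyclicity(cyc) > 0
```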
1 code implementation • 19 Apr 2023 • Étienne Marcotte, Valentina Zantedeschi, Alexandre Drouin, Nicolas Chapados
Multivariate probabilistic time series forecasts are commonly evaluated via proper scoring rules, i.e., functions that are minimized in expectation by the ground-truth distribution.
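As a hedged numeric illustration of properness (the truth and candidates below are made up): the negative log-likelihood is a strictly proper scoring rule, so among Gaussian candidates its expectation under the true distribution N(0, 1) is minimized by the true mean:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=200_000)   # samples from the ground truth N(0, 1)

def log_score(x, mu):
    """Average negative log-likelihood of N(mu, 1) - a strictly proper rule."""
    return 0.5 * np.log(2 * np.pi) + 0.5 * np.mean((x - mu) ** 2)

scores = {mu: log_score(x, mu) for mu in [-1.0, 0.0, 1.0]}
# the true mean mu = 0 attains the smallest (expected) score
```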
1 code implementation • 5 Jul 2023 • Stephanie Long, Alexandre Piché, Valentina Zantedeschi, Tibor Schuster, Alexandre Drouin
Understanding the causal relationships that underlie a system is a fundamental prerequisite to accurate decision-making.
1 code implementation • 2 Oct 2023 • Arjun Ashok, Étienne Marcotte, Valentina Zantedeschi, Nicolas Chapados, Alexandre Drouin
We introduce a new model for multivariate probabilistic time series prediction, designed to flexibly address a range of tasks including forecasting, interpolation, and their combinations.
1 code implementation • 12 Oct 2023 • Kashif Rasul, Arjun Ashok, Andrew Robert Williams, Hena Ghonia, Rishika Bhagwatkar, Arian Khorasani, Mohammad Javad Darvishi Bayazi, George Adamopoulos, Roland Riachi, Nadhir Hassen, Marin Biloš, Sahil Garg, Anderson Schneider, Nicolas Chapados, Alexandre Drouin, Valentina Zantedeschi, Yuriy Nevmyvaka, Irina Rish
In recent years, foundation models have caused a paradigm shift in machine learning due to their unprecedented capabilities for zero-shot and few-shot generalization.
no code implementations • 21 Dec 2023 • Issam Laradji, Perouz Taslakian, Sai Rajeswar, Valentina Zantedeschi, Alexandre Lacoste, Nicolas Chapados, David Vazquez, Christopher Pal, Alexandre Drouin
The extraction of a small number of relevant insights from vast amounts of data is a crucial component of data-driven decision-making.
1 code implementation • 19 Feb 2024 • Paul Viallard, Rémi Emonet, Amaury Habrard, Emilie Morvant, Valentina Zantedeschi
In statistical learning theory, a generalization bound usually involves a complexity measure imposed by the considered theoretical framework.