no code implementations • 27 Jul 2022 • Hamid Jalalzai, Elie Kadoche, Rémi Leluc, Vincent Plassier
In this paper, we develop a means to measure the leakage of training data, leveraging a quantity that serves as a proxy for the total variation of a trained model near its training samples.
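As a rough illustration of the idea of probing a model's behaviour near its training points (not the paper's actual estimator), one can compare how much the model's output varies in a small neighbourhood of training samples versus held-out samples; all names and parameters below are hypothetical.

```python
import numpy as np

def local_variation(model, x, eps=1e-2, n_dirs=32, seed=None):
    """Crude proxy for how much the model's output varies near x:
    average |f(x + eps*u) - f(x)| over random unit directions u.
    `model` is any callable mapping a 1-D array to a scalar score."""
    rng = np.random.default_rng(seed)
    base = model(x)
    dirs = rng.normal(size=(n_dirs, x.size))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return np.mean([abs(model(x + eps * u) - base) for u in dirs])

def leakage_gap(model, train_X, heldout_X, **kw):
    """Difference in average local variation around training vs. held-out
    samples; a large gap is a membership-inference style signal that the
    model behaves differently around the data it was trained on."""
    tr = np.mean([local_variation(model, x, **kw) for x in train_X])
    ho = np.mean([local_variation(model, x, **kw) for x in heldout_X])
    return tr - ho
```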
1 code implementation • CVPR 2022 • Ganesh Del Grosso, Hamid Jalalzai, Georg Pichler, Catuscia Palamidessi, Pablo Piantanida
The use of personal data for training machine learning systems comes with a privacy threat and measuring the level of privacy of a model is one of the major challenges in machine learning today.
no code implementations • 7 Apr 2021 • Stéphan Clémençon, Hamid Jalalzai, Stéphane Lhaut, Anne Sabourin, Johan Segers
The angular measure on the unit sphere characterizes the first-order dependence structure of the components of a random vector in extreme regions and is defined in terms of standardized margins.
no code implementations • 13 Aug 2020 • Hamid Jalalzai, Rémi Leluc
In the framework of multivariate Extreme Value Theory, the dependence structure of extremes is commonly characterized by the angular measure.
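For readers unfamiliar with the angular measure, the standard nonparametric recipe is: standardize the margins (e.g. to unit Pareto via ranks), keep the samples with the largest norm, and project them onto the unit simplex. A minimal sketch of that textbook estimator, not necessarily the exact procedure used in these papers:

```python
import numpy as np

def empirical_angular_measure(X, k):
    """Empirical angular measure of an (n, d) sample X:
    (1) rank-transform each margin to an approximately unit-Pareto scale,
    (2) retain the k observations with the largest L1 norm,
    (3) project them onto the unit simplex.
    The empirical distribution of the returned angles estimates the
    angular measure."""
    n, _ = X.shape
    ranks = np.argsort(np.argsort(X, axis=0), axis=0) + 1  # 1..n per column
    V = n / (n + 1 - ranks)                                # unit Pareto margins
    R = V.sum(axis=1)                                      # L1 radius
    extreme = np.argsort(R)[-k:]                           # k largest radii
    return V[extreme] / R[extreme, None]                   # angles on the simplex
```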
no code implementations • NeurIPS 2020 • Hamid Jalalzai, Pierre Colombo, Chloé Clavel, Eric Gaussier, Giovanna Varni, Emmanuel Vignon, Anne Sabourin
The dominant approaches to text representation in natural language rely on learning embeddings on massive corpora which have convenient properties such as compositionality and distance preservation.
Ranked #3 on Sentiment Analysis on Yelp Binary classification
no code implementations • 25 Sep 2019 • Hamid Jalalzai, Pierre Colombo, Chloé Clavel, Eric Gaussier, Giovanna Varni, Emmanuel Vignon, Anne Sabourin
The dominant approaches to sentence representation in natural language rely on learning embeddings on massive corpora.
no code implementations • NeurIPS 2018 • Hamid Jalalzai, Stephan Clémençon, Anne Sabourin
In pattern recognition, a random label $Y$ is to be predicted based upon observing a random vector $X$ valued in $\mathbb{R}^d$ with $d>1$, by means of a classification rule with minimum probability of error.
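One way to picture classification in extreme regions is to fit a classifier only on the angular part of the most extreme observations and use it for new points whose norm exceeds the training threshold. The sketch below is a hedged illustration under that assumption (generic classifier, L2 norm), not the algorithm of the paper; the helper names are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_extreme_classifier(X, y, k):
    """Keep the k samples with the largest norm, project them onto the
    unit sphere (their 'angle'), and fit a standard classifier on the
    angles.  Returns a predictor for new extreme points and the radius
    threshold that defines the extreme region."""
    R = np.linalg.norm(X, axis=1)
    idx = np.argsort(R)[-k:]                 # k most extreme samples
    threshold = R[idx].min()
    angles = X[idx] / R[idx, None]
    clf = RandomForestClassifier().fit(angles, y[idx])

    def predict(X_new):
        R_new = np.linalg.norm(X_new, axis=1)
        return clf.predict(X_new / R_new[:, None])

    return predict, threshold
```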