no code implementations • 1 Feb 2024 • Michele Caprio, Maryam Sultana, Eleni Elia, Fabio Cuzzolin
Statistical learning theory is the foundation of machine learning, providing theoretical bounds for the risk of models learnt from a (single) training set, assumed to be drawn from an unknown probability distribution.
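As a point of reference for what such bounds typically look like (a textbook statement, not a result from this paper): for a finite hypothesis class $\mathcal{H}$, a loss bounded in $[0,1]$, and an i.i.d. sample of size $n$, Hoeffding's inequality plus a union bound gives, with probability at least $1-\delta$,

$$R(h) \;\le\; \widehat{R}_n(h) + \sqrt{\frac{\log|\mathcal{H}| + \log(1/\delta)}{2n}} \qquad \text{for every } h \in \mathcal{H},$$

where $R$ is the true risk under the unknown distribution and $\widehat{R}_n$ the empirical risk on the training set.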
no code implementations • 2 Dec 2023 • Yusuf Sale, Viktor Bengs, Michele Caprio, Eyke Hüllermeier
In the past couple of years, various approaches to representing and quantifying different types of predictive uncertainty in machine learning, notably in the setting of classification, have been proposed on the basis of second-order probability distributions, i.e., predictions in the form of distributions over probability distributions.
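A minimal sketch of the second-order idea (illustrative only; the specific approaches compared in the paper are not reproduced here): a Dirichlet distribution acts as a distribution over first-order categorical class probabilities.

```python
# Minimal sketch: a Dirichlet as a second-order distribution over
# first-order class-probability vectors (illustrative, not the paper's methods).
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([8.0, 1.5, 0.5])            # concentration parameters, one per class
first_order = rng.dirichlet(alpha, size=5)   # each row: one plausible class distribution
point_prediction = alpha / alpha.sum()       # mean of the second-order distribution

print(first_order)       # the spread of these rows reflects second-order uncertainty
print(point_prediction)  # collapsing to a single first-order prediction
```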
1 code implementation • 4 Oct 2023 • Pengyuan Lu, Michele Caprio, Eric Eaton, Insup Lee
Upon a new task, IBCL (1) updates a knowledge base in the form of a convex hull of model parameter distributions and (2) obtains particular models to address task trade-off preferences in a zero-shot manner.
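The released code is not reproduced here; the sketch below only illustrates, with made-up Gaussian posteriors and a hypothetical `sample_preference_model` helper, what a preference-weighted convex combination of per-task parameter distributions could look like.

```python
# Hypothetical sketch of a convex combination of per-task parameter
# posteriors (not the IBCL code base; all names and numbers are made up).
import numpy as np

task_means = {"task_A": np.array([0.9, -0.2]), "task_B": np.array([0.1, 0.7])}
task_vars  = {"task_A": np.array([0.05, 0.05]), "task_B": np.array([0.04, 0.06])}

def sample_preference_model(weights, n=1000, seed=0):
    """Draw parameters from the preference-weighted mixture of task posteriors."""
    rng = np.random.default_rng(seed)
    tasks = list(task_means)
    w = np.array([weights[t] for t in tasks])
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0), "preferences must be convex weights"
    idx = rng.choice(len(tasks), size=n, p=w)                   # pick a task per draw
    mu = np.stack([task_means[tasks[i]] for i in idx])
    sd = np.sqrt(np.stack([task_vars[tasks[i]] for i in idx]))
    return mu + sd * rng.standard_normal(mu.shape)              # Gaussian draws, no retraining

params = sample_preference_model({"task_A": 0.7, "task_B": 0.3})
print(params.mean(axis=0))  # parameters tilted towards task_A, as requested
```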
no code implementations • 28 Aug 2023 • Souradeep Dutta, Michele Caprio, Vivian Lin, Matthew Cleaveland, Kuk Jin Jang, Ivan Ruchkin, Oleg Sokolsky, Insup Lee
A particularly challenging problem in AI safety is providing guarantees on the behavior of high-dimensional autonomous systems.
no code implementations • 13 Jul 2023 • Michele Caprio, Yusuf Sale, Eyke Hüllermeier, Insup Lee
In their seminal 1990 paper, Wasserman and Kadane establish an upper bound for the Bayesian posterior probability of a measurable set $A$ when the prior lies in a class of probability measures $\mathcal{P}$ and the likelihood is precise.
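The object being bounded is the upper posterior probability obtained by applying Bayes' rule to every prior in the class; a standard way of writing it (the setting, not the paper's new bound) is

$$\overline{P}(A \mid x) \;=\; \sup_{\pi \in \mathcal{P}} \frac{\int_A L_x(\theta)\,\pi(\mathrm{d}\theta)}{\int_\Theta L_x(\theta)\,\pi(\mathrm{d}\theta)},$$

where $L_x$ denotes the precise likelihood of the observed data $x$.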
no code implementations • 16 Jun 2023 • Yusuf Sale, Michele Caprio, Eyke Hüllermeier
Adequate uncertainty representation and quantification have become imperative in various scientific disciplines, especially in machine learning and artificial intelligence.
1 code implementation • 24 May 2023 • Pengyuan Lu, Michele Caprio, Eric Eaton, Insup Lee
Upon a new task, IBCL (1) updates a knowledge base in the form of a convex hull of model parameter distributions and (2) obtains particular models to address task trade-off preferences in a zero-shot manner.
no code implementations • 21 Feb 2023 • Ramneet Kaur, Xiayan Ji, Souradeep Dutta, Michele Caprio, Yahan Yang, Elena Bernardis, Oleg Sokolsky, Insup Lee
This can render current OOD detectors impermeable to inputs lying outside the training distribution but with the same semantic information (e.g., training class labels).
1 code implementation • 20 Feb 2023 • Vivian Lin, Kuk Jin Jang, Souradeep Dutta, Michele Caprio, Oleg Sokolsky, Insup Lee
To aid in our estimates of Wasserstein distance, we employ dimensionality reduction through orthonormal projection.
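A rough sketch of the general idea (not the paper's implementation): draw a random orthonormal basis, project both samples onto it, and compare them with one-dimensional Wasserstein distances per direction.

```python
# Illustrative sketch: orthonormal projection followed by 1-D Wasserstein
# comparisons (not the paper's code; data and dimensions are made up).
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
d, k = 256, 8                                  # ambient and projected dimensions
X = rng.normal(0.0, 1.0, size=(500, d))        # reference sample
Y = rng.normal(0.3, 1.0, size=(500, d))        # shifted sample

Q, _ = np.linalg.qr(rng.normal(size=(d, k)))   # d x k matrix with orthonormal columns
X_low, Y_low = X @ Q, Y @ Q                    # dimensionality reduction

dists = [wasserstein_distance(X_low[:, j], Y_low[:, j]) for j in range(k)]
print(np.mean(dists))                          # aggregate distance across directions
```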
no code implementations • 19 Feb 2023 • Michele Caprio, Souradeep Dutta, Kuk Jin Jang, Vivian Lin, Radoslav Ivanov, Oleg Sokolsky, Insup Lee
We show that CBDL is better at quantifying and disentangling different types of uncertainties than single BNNs, ensembles of BNNs, and Bayesian Model Averaging.
no code implementations • 22 Jun 2022 • Michele Caprio, Sayan Mukherjee
We state concentration inequalities for the output of the hidden layers of a stochastic deep neural network (SDNN), as well as for the output of the whole SDNN.
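Such statements typically take the generic sub-Gaussian form below (a template for the shape of the result, not the paper's constants): for a real-valued coordinate $f_\ell(x)$ of the $\ell$-th layer's output,

$$\Pr\big( |f_\ell(x) - \mathbb{E}[f_\ell(x)]| \ge t \big) \;\le\; 2\exp\!\left(-\frac{t^2}{2\sigma_\ell^2}\right), \qquad t > 0,$$

with $\sigma_\ell^2$ a layer-dependent variance proxy.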