no code implementations • 4 Oct 2023 • Konstantinos Pitas, Julyan Arbel
We present a method to improve the calibration of deep ensembles in the small training data regime in the presence of unlabeled data.
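As a point of reference for what "calibration" measures here, the sketch below computes the expected calibration error (ECE) of an averaged deep ensemble. This is a standard diagnostic, not the unlabeled-data method proposed in the paper; the array shapes and helper name are assumptions.

```python
import numpy as np

def ensemble_ece(member_probs, labels, n_bins=15):
    """Expected calibration error of an averaged deep ensemble.

    member_probs: (M, N, C) array of per-member class probabilities (assumed shape)
    labels:       (N,) array of integer class labels
    """
    probs = member_probs.mean(axis=0)      # average the M ensemble members
    conf = probs.max(axis=1)               # predicted confidence
    pred = probs.argmax(axis=1)            # predicted class
    acc = (pred == labels).astype(float)

    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            # |accuracy - confidence| inside the bin, weighted by bin size
            ece += mask.mean() * abs(acc[mask].mean() - conf[mask].mean())
    return ece
```

A well-calibrated ensemble has an ECE near zero: its confidence matches its accuracy within each bin.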
1 code implementation • 28 Sep 2023 • Julyan Arbel, Konstantinos Pitas, Mariia Vladimirova, Vincent Fortuin
Neural networks have achieved remarkable performance across various problem domains, but their widespread applicability is hindered by inherent limitations such as overconfidence in predictions, lack of interpretability, and vulnerability to adversarial attacks.
no code implementations • 11 Sep 2023 • Konstantinos Pitas, Julyan Arbel
Contrary to previous results, we first show that for realistic models and datasets and the tightly controlled case of the Laplace approximation to the posterior, stochasticity does not in general improve test accuracy.
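For context on the "tightly controlled case" mentioned above, here is a minimal sketch of drawing weight samples from a diagonal Laplace approximation around a MAP estimate. The diagonal empirical-Fisher recipe, the prior precision value, and the helper name are illustrative assumptions, not the paper's exact setup.

```python
import torch

def diagonal_laplace_samples(model, loss_fn, data_loader, n_samples=10, prior_prec=1.0):
    """Draw weight samples from a diagonal Laplace approximation at the MAP weights."""
    params = [p for p in model.parameters() if p.requires_grad]
    fisher = [torch.zeros_like(p) for p in params]

    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for f, p in zip(fisher, params):
            # accumulate squared mini-batch gradients (crude diagonal Fisher proxy)
            f += p.grad.detach() ** 2

    # posterior variance ~ inverse of (curvature proxy + prior precision)
    variances = [1.0 / (f + prior_prec) for f in fisher]

    samples = []
    for _ in range(n_samples):
        samples.append([p.detach() + v.sqrt() * torch.randn_like(p)
                        for p, v in zip(params, variances)])
    return samples
```

Comparing predictions averaged over such samples against the deterministic MAP predictions is one way to probe whether posterior stochasticity helps test accuracy.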
no code implementations • 19 Jul 2023 • Konstantinos Pitas
Both the raw and the preprocessed data are provided in .csv format.
no code implementations • 22 Jun 2022 • Konstantinos Pitas, Julyan Arbel
We investigate the cold posterior effect through the lens of PAC-Bayes generalization bounds.
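As background, the sketch below spells out the two ingredients the abstract refers to: a McAllester-style PAC-Bayes bound and a tempered ("cold") log-posterior with temperature T < 1. Both are textbook forms assumed here for illustration, not the specific bounds analyzed in the paper.

```python
import math

def mcallester_bound(empirical_risk, kl_qp, n, delta=0.05):
    """McAllester-style PAC-Bayes bound on the expected risk of posterior Q,
    given its empirical risk, KL(Q||P), sample size n, and confidence delta."""
    complexity = (kl_qp + math.log(2 * math.sqrt(n) / delta)) / (2 * n)
    return empirical_risk + math.sqrt(complexity)

def tempered_log_posterior(log_likelihood, log_prior, temperature):
    """Cold posteriors rescale the log joint (likelihood plus prior) by 1/T
    with T < 1, which sharpens the posterior around its modes."""
    return (log_likelihood + log_prior) / temperature
```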
no code implementations • 25 Sep 2019 • Konstantinos Pitas
We investigate whether it is possible to tighten PAC-Bayes bounds for deep neural networks by utilizing the Hessian of the training loss at the minimum.
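To illustrate where the Hessian could enter such a bound, the sketch below computes the KL complexity term for a diagonal Gaussian posterior against an isotropic Gaussian prior. Choosing the posterior variances from the inverse Hessian diagonal is one natural option assumed here for illustration, so that flat directions at the minimum contribute less to the KL; the function name and arguments are not from the paper.

```python
import numpy as np

def gaussian_kl_to_isotropic_prior(w_map, q_var, prior_var):
    """KL( N(w_map, diag(q_var)) || N(0, prior_var * I) ): the complexity term
    of a PAC-Bayes bound for a diagonal Gaussian posterior centered at the minimum.

    A Hessian-aware choice sets q_var ~ 1 / (hessian_diag + prior_precision),
    so low-curvature (flat) directions get large posterior variance and a smaller KL.
    """
    w_map, q_var = np.asarray(w_map), np.asarray(q_var)
    d = w_map.size
    return 0.5 * (q_var.sum() / prior_var
                  + (w_map ** 2).sum() / prior_var
                  - d
                  + d * np.log(prior_var)
                  - np.log(q_var).sum())
```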
no code implementations • ICML 2020 • Konstantinos Pitas
Explaining how overparametrized neural networks simultaneously achieve low risk and zero empirical risk on benchmark datasets is an open problem.
no code implementations • 23 May 2019 • Konstantinos Pitas, Andreas Loukas, Mike Davies, Pierre Vandergheynst
Deep convolutional neural networks (CNNs) have been shown to fit a random labeling of the data while still generalizing well on normally labeled data.
no code implementations • 21 May 2019 • Konstantinos Pitas, Mike Davies, Pierre Vandergheynst
Recently developed smart pruning algorithms use the DNN's responses over the training set, under a variety of cost functions, to determine which network weights are redundant, leading to smaller accuracy degradation and potentially shorter retraining times.
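A minimal sketch of one such data-aware ("smart") saliency: score each weight of a fully connected layer by its magnitude times the mean absolute input activation over a training batch, then prune the lowest-scoring weights. The particular cost function, function names, and sparsity level are assumptions for illustration, not the algorithms compared in the paper.

```python
import torch

def activation_weighted_saliency(linear, activations):
    """Score each weight w_ij of a fully connected layer by |w_ij| * E|a_j|,
    with the expectation over a batch of training activations (batch, in_features)."""
    mean_abs_act = activations.abs().mean(dim=0)            # (in_features,)
    return linear.weight.detach().abs() * mean_abs_act      # (out_features, in_features)

def prune_by_saliency(linear, activations, sparsity=0.9):
    """Zero out the fraction `sparsity` of weights with the lowest saliency."""
    scores = activation_weighted_saliency(linear, activations)
    k = max(1, int(sparsity * scores.numel()))
    threshold = scores.flatten().kthvalue(k).values
    mask = (scores > threshold).float()
    linear.weight.data *= mask                               # apply the pruning mask
    return mask
```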
1 code implementation • 12 Mar 2018 • Konstantinos Pitas, Mike Davies, Pierre Vandergheynst
Recent DNN pruning algorithms have succeeded in reducing the number of parameters in fully connected layers, often with little or no drop in classification accuracy.
no code implementations • ICLR 2018 • Konstantinos Pitas, Mike Davies, Pierre Vandergheynst
Recent DNN pruning algorithms have succeeded in reducing the number of parameters in fully connected layers, often with little or no drop in classification accuracy.
1 code implementation • 30 Dec 2017 • Konstantinos Pitas, Mike Davies, Pierre Vandergheynst
Recently, the generalization error of deep neural networks has been analyzed through the PAC-Bayesian framework, for the case of fully connected layers.