Search Results for author: Konstantinos Pitas

Found 12 papers, 3 papers with code

Something for (almost) nothing: Improving deep ensemble calibration using unlabeled data

no code implementations • 4 Oct 2023 • Konstantinos Pitas, Julyan Arbel

We present a method to improve the calibration of deep ensembles in the small training data regime in the presence of unlabeled data.
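For background, a deep ensemble combines the softmax outputs of several independently trained networks by simple averaging; calibration methods like the one above operate on this averaged predictive distribution. A minimal sketch of the basic ensemble prediction (illustrative only, not the paper's method):

```python
import numpy as np

def ensemble_predict(member_probs):
    """Average per-member predicted class probabilities.

    member_probs: array of shape (n_members, n_samples, n_classes),
    each slice a softmax output from one independently trained network.
    Returns the uniform mixture over ensemble members.
    """
    return np.mean(member_probs, axis=0)

# Toy check: three "members" predicting over two classes for one sample.
members = np.array([[[0.9, 0.1]], [[0.7, 0.3]], [[0.8, 0.2]]])
print(ensemble_predict(members))  # mean over members: class probs 0.8 and 0.2
```

The uniform mixture is the standard choice; calibration work typically adjusts the member predictions (e.g. by temperature scaling) before or after this averaging step.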

A Primer on Bayesian Neural Networks: Review and Debates

1 code implementation • 28 Sep 2023 • Julyan Arbel, Konstantinos Pitas, Mariia Vladimirova, Vincent Fortuin

Neural networks have achieved remarkable performance across various problem domains, but their widespread applicability is hindered by inherent limitations such as overconfidence in predictions, lack of interpretability, and vulnerability to adversarial attacks.

Bayesian Inference

The fine print on tempered posteriors

no code implementations • 11 Sep 2023 • Konstantinos Pitas, Julyan Arbel

Contrary to previous results, we first show that for realistic models and datasets and the tightly controlled case of the Laplace approximation to the posterior, stochasticity does not in general improve test accuracy.
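For reference, the tempered posterior in the title is the standard construction in which the likelihood is raised to a power $1/T$ before forming the posterior:

```latex
p_T(\theta \mid D) \;\propto\; p(D \mid \theta)^{1/T} \, p(\theta)
```

Here $T = 1$ recovers the ordinary Bayes posterior, while $T < 1$ (the "cold" regime) concentrates the posterior more sharply on high-likelihood parameters.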

Cold Posteriors through PAC-Bayes

no code implementations • 22 Jun 2022 • Konstantinos Pitas, Julyan Arbel

We investigate the cold posterior effect through the lens of PAC-Bayes generalization bounds.

Bayesian Inference • Generalization Bounds +1
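For context, a standard PAC-Bayes bound (in the McAllester style) controls the expected risk of a posterior $Q$ over hypotheses in terms of its empirical risk and its KL divergence from a prior $P$; with probability at least $1-\delta$ over an i.i.d. sample of size $n$:

```latex
\mathbb{E}_{h \sim Q}\!\left[L(h)\right]
\;\le\;
\mathbb{E}_{h \sim Q}\!\left[\hat{L}(h)\right]
+ \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}
```

Bounds of this form connect Bayesian posteriors to generalization guarantees, which is the lens the paper above applies to the cold posterior effect.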

On PAC-Bayes Bounds for Deep Neural Networks using the Loss Curvature

no code implementations • 25 Sep 2019 • Konstantinos Pitas

We investigate whether PAC-Bayes bounds for deep neural networks can be tightened by using the Hessian of the training loss at the minimum.

Variational Inference

Dissecting Non-Vacuous Generalization Bounds based on the Mean-Field Approximation

no code implementations • ICML 2020 • Konstantinos Pitas

Explaining how overparametrized neural networks simultaneously achieve low risk and zero empirical risk on benchmark datasets is an open problem.

Generalization Bounds • Variational Inference

The role of invariance in spectral complexity-based generalization bounds

no code implementations • 23 May 2019 • Konstantinos Pitas, Andreas Loukas, Mike Davies, Pierre Vandergheynst

Deep convolutional neural networks (CNNs) can fit a random labeling of the training data while still generalizing well when the labels are normal.

Generalization Bounds

Revisiting hard thresholding for DNN pruning

no code implementations • 21 May 2019 • Konstantinos Pitas, Mike Davies, Pierre Vandergheynst

Recently developed smart pruning algorithms use the DNN's response over the training set, under a variety of cost functions, to identify redundant network weights, leading to less accuracy degradation and possibly less retraining time.
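By contrast, hard thresholding is the simplest magnitude-based pruning rule: zero out every weight whose magnitude falls below a data-independent cutoff. A minimal sketch (illustrative only, not the paper's exact procedure):

```python
import numpy as np

def hard_threshold(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of weights.

    weights: array of any shape; sparsity: fraction in [0, 1) to prune.
    Returns a pruned copy; the largest-magnitude weights are kept.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to prune
    if k == 0:
        return weights.copy()
    cutoff = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return np.where(np.abs(weights) > cutoff, weights, 0.0)

w = np.array([[0.05, -1.2], [0.3, -0.01]])
print(hard_threshold(w, 0.5))  # zeros the two smallest-magnitude entries
```

Because the rule never looks at the training data, it is far cheaper than response-based "smart" pruning, at the potential cost of more accuracy degradation.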

FeTa: A DCA Pruning Algorithm with Generalization Error Guarantees

1 code implementation • 12 Mar 2018 • Konstantinos Pitas, Mike Davies, Pierre Vandergheynst

Recent DNN pruning algorithms have succeeded in reducing the number of parameters in fully connected layers, often with little or no drop in classification accuracy.

General Classification

Cheap DNN Pruning with Performance Guarantees

no code implementations • ICLR 2018 • Konstantinos Pitas, Mike Davies, Pierre Vandergheynst

Recent DNN pruning algorithms have succeeded in reducing the number of parameters in fully connected layers, often with little or no drop in classification accuracy.

Classification • General Classification

PAC-Bayesian Margin Bounds for Convolutional Neural Networks

1 code implementation • 30 Dec 2017 • Konstantinos Pitas, Mike Davies, Pierre Vandergheynst

Recently the generalization error of deep neural networks has been analyzed through the PAC-Bayesian framework, for the case of fully connected layers.
