Search Results for author: Bálint Daróczy

Found 7 papers, 3 papers with code

Optimization dependent generalization bound for ReLU networks based on sensitivity in the tangent bundle

1 code implementation • 26 Oct 2023 • Dániel Rácz, Mihály Petreczky, András Csertán, Bálint Daróczy

Recent advances in deep learning have given us some very promising results on the generalization ability of deep neural networks; however, the literature still lacks a comprehensive theory explaining why heavily over-parametrized models are able to generalize well while fitting the training data.
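
The tangent-bundle sensitivity the title refers to can be pictured with a small numerical probe. The sketch below is an illustrative reading, not the paper's exact construction: it uses an assumed toy two-layer ReLU network and measures, by finite differences, how fast the parameter gradient (the tangent feature) moves per unit input change.

```python
import numpy as np

rng = np.random.default_rng(0)

def param_gradient(W1, w2, x):
    """Gradient of the scalar output f(x) = w2 . relu(W1 x)
    with respect to all parameters, flattened into one vector."""
    pre = W1 @ x                      # pre-activations
    act = (pre > 0).astype(float)     # ReLU on/off pattern
    grad_W1 = np.outer(w2 * act, x)   # d f / d W1
    grad_w2 = np.maximum(pre, 0.0)    # d f / d w2
    return np.concatenate([grad_W1.ravel(), grad_w2])

d_in, d_hidden = 5, 16
W1 = rng.normal(size=(d_hidden, d_in))
w2 = rng.normal(size=d_hidden)

x = rng.normal(size=d_in)
direction = rng.normal(size=d_in)
direction /= np.linalg.norm(direction)

eps = 1e-3
g0 = param_gradient(W1, w2, x)
g1 = param_gradient(W1, w2, x + eps * direction)

# Sensitivity proxy: change in the tangent feature per unit input perturbation.
print(np.linalg.norm(g1 - g0) / eps)
```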

PAC bounds of continuous Linear Parameter-Varying systems related to neural ODEs

no code implementations • 7 Jul 2023 • Dániel Rácz, Mihály Petreczky, Bálint Daróczy

We consider the problem of learning Neural Ordinary Differential Equations (neural ODEs) within the context of Linear Parameter-Varying (LPV) systems in continuous-time.
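
A continuous-time LPV system has the form ẋ(t) = A(p(t)) x(t), where the state matrix depends on a scheduling signal p(t); a neural ODE can be viewed this way by letting p depend on the state. The following is a minimal hypothetical sketch of that reading (the affine parameter dependence and the specific matrices are made up for illustration), simulated with forward Euler in numpy:

```python
import numpy as np

def A(p):
    """State matrix as an affine function of the scheduling parameter p
    (affine dependence is a common LPV modelling choice; illustrative here)."""
    A0 = np.array([[0.0, 1.0], [-1.0, -0.5]])
    A1 = np.array([[0.0, 0.0], [0.0, -1.0]])
    return A0 + p * A1

def simulate(x0, T=5.0, dt=1e-3):
    """Forward-Euler integration of x' = A(p(x)) x, with the scheduling
    signal fed back from the state -- the quasi-LPV view of a neural ODE."""
    x = x0.copy()
    for _ in range(int(T / dt)):
        p = np.tanh(x[0])        # state-dependent scheduling signal
        x = x + dt * A(p) @ x
    return x

print(simulate(np.array([1.0, 0.0])))
```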

Gradient representations in ReLU networks as similarity functions

1 code implementation • 26 Oct 2021 • Dániel Rácz, Bálint Daróczy

Feed-forward networks can be interpreted as mappings with linear decision surfaces at the level of the last layer.
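
The "gradient representations as similarity functions" idea can be illustrated as follows: map each input to the gradient of the network output with respect to the parameters, and take the inner product of two such gradient vectors as a similarity (kernel) between the inputs, in the spirit of the neural tangent kernel. A minimal numpy sketch under the same assumed toy-network architecture as above:

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_feature(W1, w2, x):
    """Map an input to the parameter gradient of f(x) = w2 . relu(W1 x)."""
    pre = W1 @ x
    act = (pre > 0).astype(float)
    return np.concatenate([np.outer(w2 * act, x).ravel(),
                           np.maximum(pre, 0.0)])

def gradient_similarity(W1, w2, x, y):
    """Inner product of gradient features: a kernel induced by the network."""
    return grad_feature(W1, w2, x) @ grad_feature(W1, w2, y)

d_in, d_hidden = 5, 16
W1 = rng.normal(size=(d_hidden, d_in))
w2 = rng.normal(size=d_hidden)

x = rng.normal(size=d_in)
print(gradient_similarity(W1, w2, x, x))    # self-similarity
print(gradient_similarity(W1, w2, x, -x))   # a dissimilar input
```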

Quantum Inspired Adaptive Boosting

no code implementations • 1 Feb 2021 • Bálint Daróczy, Katalin Friedl, László Kabódi, Attila Pereszlényi, Dániel Szabó

Building on the quantum ensemble based classifier algorithm of Schuld and Petruccione [arXiv:1704.02146v1], we devise equivalent classical algorithms which show that this quantum ensemble method does not have an advantage over classical algorithms.
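
The intuition behind the classical equivalence is that an average over an exponentially large, accuracy-weighted ensemble of weak classifiers can be estimated by sampling. The sketch below is a hedged illustration of that idea only; the toy data, the random-hyperplane weak classifiers, and the weighting are assumptions made for the example, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy training data: two Gaussian blobs with labels -1 / +1.
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])

def ensemble_predict(x, n_samples=2000):
    """Monte Carlo estimate of an accuracy-weighted vote over random
    hyperplane classifiers -- a classical stand-in for evaluating the
    whole ensemble in superposition."""
    total = 0.0
    for _ in range(n_samples):
        w = rng.normal(size=2)            # a random weak classifier
        acc = np.mean(np.sign(X @ w) == y)  # weight by training accuracy
        total += acc * np.sign(x @ w)
    return np.sign(total)

print(ensemble_predict(np.array([1.5, 1.0])))     # expected: 1.0
print(ensemble_predict(np.array([-1.5, -1.0])))   # expected: -1.0
```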

Tangent Space Sensitivity and Distribution of Linear Regions in ReLU Networks

no code implementations • 11 Jun 2020 • Bálint Daróczy

We derive several easily computable bounds and empirical measures for feed-forward fully connected ReLU (Rectified Linear Unit) networks and connect tangent sensitivity to the distribution of the activation regions in the input space realized by the network.
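
The activation regions in question are the regions of input space on which the ReLU on/off pattern, and hence the linear map realized by the network, is constant. A simple empirical probe, sketched below under the same toy-network assumptions as earlier, counts the distinct activation patterns hit by random inputs:

```python
import numpy as np

rng = np.random.default_rng(3)

d_in, d_hidden = 2, 8
W1 = rng.normal(size=(d_hidden, d_in))
b1 = rng.normal(size=d_hidden)

def activation_pattern(x):
    """Binary ReLU on/off pattern; constant on each linear region."""
    return tuple((W1 @ x + b1 > 0).astype(int))

# Estimate how many linear regions a random sample of inputs touches.
patterns = {activation_pattern(rng.normal(size=d_in)) for _ in range(10000)}
print(f"distinct activation regions hit: {len(patterns)}")
```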

Task: Generalization Bounds

Tangent Space Separability in Feedforward Neural Networks

1 code implementation • 18 Dec 2019 • Bálint Daróczy, Rita Aleksziev, András Benczúr

Hierarchical neural networks are exponentially more efficient than their corresponding "shallow" counterparts with the same expressive power, but they involve a huge number of parameters and require tedious amounts of training.

Expressive power of outer product manifolds on feed-forward neural networks

no code implementations • 17 Jul 2018 • Bálint Daróczy, Rita Aleksziev, András Benczúr

Hierarchical neural networks are exponentially more efficient than their corresponding "shallow" counterparts with the same expressive power, but they involve a huge number of parameters and require tedious amounts of training.
