1 code implementation • 26 Oct 2023 • Dániel Rácz, Mihály Petreczky, András Csertán, Bálint Daróczy
Recent advances in deep learning have produced promising results on the generalization ability of deep neural networks; however, the literature still lacks a comprehensive theory explaining why heavily over-parameterized models generalize well while fitting the training data.
no code implementations • 15 Oct 2023 • Domokos M. Kelen, Mihály Petreczky, Péter Kersch, András A. Benczúr
In this work, we examine Asymmetric Shapley Values (ASV), a variant of the popular SHAP additive local explanation method.
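As background for the attribution framework ASV builds on, here is a minimal sketch of exact (symmetric) Shapley values, computed by averaging each feature's marginal contribution over all feature orderings; ASV instead restricts the averaged orderings using causal knowledge. The toy model `f` and the baseline are illustrative, not from the paper.

```python
from itertools import permutations

def shapley_values(f, x, baseline):
    """Exact Shapley values by averaging marginal contributions
    over all feature orderings (tractable only for few features)."""
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        z = list(baseline)
        prev = f(z)
        for i in order:
            z[i] = x[i]           # reveal feature i
            cur = f(z)
            phi[i] += cur - prev  # marginal contribution of i in this ordering
            prev = cur
    return [p / len(perms) for p in phi]

# Toy additive model: f(x) = 2*x0 + 3*x1
f = lambda z: 2 * z[0] + 3 * z[1]
print(shapley_values(f, x=[1.0, 1.0], baseline=[0.0, 0.0]))  # → [2.0, 3.0]
```

By construction the attributions satisfy the additivity property of SHAP-style explanations: they sum to `f(x) - f(baseline)`.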
no code implementations • 7 Jul 2023 • Dániel Rácz, Mihály Petreczky, Bálint Daróczy
We consider the problem of learning Neural Ordinary Differential Equations (neural ODEs) within the context of continuous-time Linear Parameter-Varying (LPV) systems.
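The connection can be illustrated on a one-hidden-layer vector field: `dx/dt = W2 @ tanh(W1 @ x)` rewrites exactly as `dx/dt = A(p) @ x`, where the ratios `tanh(z_i)/z_i` serve as scheduling variables. This is a generic sketch of the embedding idea under that factorization; the weights and the specific construction are illustrative, not the paper's.

```python
import numpy as np

def tanh_ratio(z):
    # tanh(z)/z, with the removable singularity at z = 0 filled by its limit, 1
    out = np.ones_like(z)
    nz = np.abs(z) > 1e-12
    out[nz] = np.tanh(z[nz]) / z[nz]
    return out

rng = np.random.default_rng(0)
W1 = rng.standard_normal((3, 2))
W2 = rng.standard_normal((2, 3))
x = rng.standard_normal(2)

f_direct = W2 @ np.tanh(W1 @ x)  # original neural-ODE right-hand side
p = tanh_ratio(W1 @ x)           # scheduling signal measured along the trajectory
A_p = W2 @ np.diag(p) @ W1       # state-dependent LPV matrix A(p)
f_lpv = A_p @ x

assert np.allclose(f_direct, f_lpv)  # the two representations coincide
```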
no code implementations • 19 Jan 2023 • Zheming Wang, Raphaël M. Jungers, Mihály Petreczky, Bo Chen, Li Yu
In this paper, we propose an algorithm for deciding stability of switched linear systems under arbitrary switching based purely on observed output data.
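Why this decision problem is nontrivial can be seen from a classical toy example (an illustration of the problem setting, not the paper's algorithm): two discrete-time modes that are each stable in isolation, yet whose alternation diverges, so stability cannot be decided mode by mode.

```python
import numpy as np

# Two discrete-time modes, each individually stable (nilpotent: A_i @ A_i = 0),
# yet rho(A1 @ A2) = 4 > 1, so the periodic switching 1,2,1,2,... diverges.
A1 = np.array([[0.0, 2.0], [0.0, 0.0]])
A2 = np.array([[0.0, 0.0], [2.0, 0.0]])

x = np.array([1.0, 1.0])
for k in range(10):
    x = (A1 if k % 2 == 0 else A2) @ x  # x_{k+1} = A_{sigma(k)} x_k

print(np.linalg.norm(x))  # → 1024.0: the state doubles at every step
```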
no code implementations • 26 Mar 2021 • Hossam S. Abbas, Roland Tóth, Mihály Petreczky, Nader Meskin, Javad Mohammadpour Velni, Patrick J. W. Koelewijn
In the SISO case, all nonlinearities of the original system are embedded into a single nonlinear (NL) function, which is factorized by a proposed algorithm to construct an LPV representation of the original NL system.
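As a sketch of such a factorization, consider the textbook scalar system dx/dt = -x³ + u: its single nonlinearity -x³ factorizes as A(p)·x with scheduling variable p = x² and A(p) = -p. This is a standard illustrative example of LPV embedding, not the paper's algorithm.

```python
import math

# Nonlinear SISO system: dx/dt = -x**3 + u, y = x.
# Factorized LPV form:   dx/dt = A(p(t)) * x + u, with p(t) = x(t)**2, A(p) = -p.

def f_nl(x, u):
    return -x**3 + u             # original nonlinear dynamics

def f_lpv(x, u):
    p = x**2                     # scheduling signal measured on the trajectory
    return -p * x + u            # A(p) * x + u with A(p) = -p

# Forward-Euler simulation of both representations under the same input
dt, x_nl, x_lpv = 1e-3, 0.5, 0.5
for k in range(1000):
    u = math.sin(0.01 * k)
    x_nl += dt * f_nl(x_nl, u)
    x_lpv += dt * f_lpv(x_lpv, u)

assert abs(x_nl - x_lpv) < 1e-9  # the two trajectories coincide
```

Along the simulated trajectory the LPV representation reproduces the nonlinear system exactly, since the factorization is an identity rather than an approximation.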