no code implementations • 15 May 2024 • Pierre Baldi, Antonios Alexos, Ian Domingo, Alireza Rahmansetayesh
However, gradient descent on the regularized error function ought to converge to a balanced state, and thus network balance can be used to assess learning progress.
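One common way to make "network balance" concrete (an assumption for illustration, not the paper's definition) is the scaling symmetry of ReLU units: at a minimum of the L2-regularized error, each hidden unit's incoming and outgoing squared weight norms equalize. A minimal sketch of such a balance metric:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer ReLU network: x -> W1 -> relu -> W2 -> y.
n_in, n_hidden, n_out = 4, 8, 2
W1 = rng.normal(size=(n_hidden, n_in))
W2 = rng.normal(size=(n_out, n_hidden))

def balance_deviation(W1, W2):
    """Mean per-hidden-unit gap between incoming and outgoing squared norms.

    Under the ReLU scaling-symmetry notion of balance, these two norms
    equalize at convergence of gradient descent with L2 regularization,
    so the gap can serve as a rough indicator of learning progress.
    """
    incoming = (W1 ** 2).sum(axis=1)   # ||w_in||^2 for each hidden unit
    outgoing = (W2 ** 2).sum(axis=0)   # ||w_out||^2 for each hidden unit
    return np.abs(incoming - outgoing).mean()

print(balance_deviation(W1, W2))  # large for random weights; shrinks as training balances the net
```

A perfectly balanced pair of weight matrices yields a deviation of zero, so tracking this quantity over epochs gives a training-progress signal that needs no held-out data.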
no code implementations • 15 Mar 2024 • Antonios Alexos, Yu-Dai Tsai, Ian Domingo, Maryam Pishgar, Pierre Baldi
Creating controlled methods to simulate neurodegeneration in artificial intelligence (AI) is crucial for applications that emulate brain function decline and cognitive disorders.
no code implementations • 5 Mar 2024 • Antonios Alexos, Pierre Baldi
Beyond speech generation, speech editing is also a crucial task: edited segments must blend seamlessly and unnoticeably into the synthesized speech.

no code implementations • 16 Dec 2023 • Antonios Alexos, Junze Liu, Akash Tiwari, Kshitij Bhardwaj, Sean Hayes, Pierre Baldi, Satish Bukkapatnam, Suhas Bhandarkar
In the Inertial Confinement Fusion (ICF) process, a roughly 2 mm spherical shell made of high-density carbon is used as the target for laser beams, which compress and heat it to the energy levels needed for a high fusion yield.
no code implementations • AABI Symposium 2022 • Antonios Alexos, Alex James Boyd, Stephan Mandt
Unfortunately, VI makes strong assumptions on both the factorization and functional form of the posterior.
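The factorization assumption can be made concrete with a toy example (a sketch, not the paper's method): mean-field VI approximates a correlated Gaussian posterior with independent Gaussian factors, and for a Gaussian target the KL-optimal factor variances have a closed form, the reciprocals of the target precision's diagonal.

```python
import numpy as np

# Toy "posterior": a correlated 2D Gaussian, p(z) = N(0, S).
S = np.array([[1.0, 0.8],
              [0.8, 1.0]])
precision = np.linalg.inv(S)

# Mean-field VI restricts q(z) = q(z1) q(z2), here independent Gaussians.
# For a Gaussian target, minimizing KL(q || p) gives each factor a
# variance equal to 1 / (corresponding diagonal entry of the precision).
q_var = 1.0 / np.diag(precision)

print(np.diag(S))  # true marginal variances: [1.0, 1.0]
print(q_var)       # mean-field variances: [0.36, 0.36]
```

The factorized posterior cannot represent the correlation and, as the output shows, underestimates the marginal variances, which is exactly the kind of failure that motivates richer posterior approximations.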
1 code implementation • 19 Jul 2021 • Antonios Alexos, Alex Boyd, Stephan Mandt
Since practitioners face speed versus accuracy tradeoffs in these models, variational inference (VI) is often the preferable option.
no code implementations • 4 Jan 2021 • Konstantinos P. Panousis, Sotirios Chatzis, Antonios Alexos, Sergios Theodoridis
The introduced units operate on a stochastic principle: the network performs posterior sampling over competing units to select the winner.
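A minimal sketch of this competition mechanism, assuming a local winner-take-all layout in which units are grouped into blocks and a single winner per block is sampled from a softmax over the block's pre-activations (the block shape and softmax parameterization here are illustrative assumptions, not the paper's exact construction):

```python
import numpy as np

rng = np.random.default_rng(0)

def lwta_sample(preactivations, block_size=2):
    """Stochastic local winner-take-all (illustrative sketch).

    Units are grouped into blocks of competing units; one winner per
    block is sampled from a softmax over the block's pre-activations,
    and the losing units' outputs are zeroed.
    """
    h = preactivations.reshape(-1, block_size)
    # Softmax within each block gives the competition probabilities.
    e = np.exp(h - h.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    out = np.zeros_like(h)
    for i, p in enumerate(probs):
        winner = rng.choice(block_size, p=p)  # posterior sampling of the winner
        out[i, winner] = h[i, winner]
    return out.reshape(preactivations.shape)

x = rng.normal(size=6)
print(lwta_sample(x))  # exactly one unit per block of two passes its value through
```

Because the winner is sampled rather than taken deterministically via argmax, the layer's output is stochastic, which is what allows the competition to be interpreted as posterior sampling.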
no code implementations • 18 Jun 2020 • Antonios Alexos, Konstantinos P. Panousis, Sotirios Chatzis
This work attempts to address adversarial robustness of deep networks by means of novel learning arguments.