no code implementations • 22 Oct 2021 • Wanchuang Zhu, Benjamin Zi Hao Zhao, Simon Luo, Tongliang Liu, Ke Deng
Although we know that benign gradients and Byzantine-attacked gradients are distributed differently, detecting the malicious gradients is challenging because (1) the gradients are high-dimensional and each dimension has its own distribution, and (2) the benign gradients and the attacked gradients are always mixed, so two-sample test methods cannot be applied directly.
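To illustrate the detection setting, the following is a minimal sketch of a generic robust-aggregation heuristic (coordinate-wise median plus a MAD-based distance score). It is a hypothetical baseline for intuition only, not the detection method proposed in the paper; all names and thresholds are invented for this sketch.

```python
import numpy as np

def flag_suspicious_gradients(grads, threshold=3.0):
    """Flag gradients far from the coordinate-wise median.
    A generic robust-aggregation heuristic, NOT the paper's method."""
    grads = np.asarray(grads)                  # shape (n_workers, dim)
    median = np.median(grads, axis=0)          # robust centre estimate
    dists = np.linalg.norm(grads - median, axis=1)
    # Robust z-score of distances via the median absolute deviation.
    mad = np.median(np.abs(dists - np.median(dists))) + 1e-12
    scores = (dists - np.median(dists)) / mad
    return scores > threshold                  # True = likely attacked

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(8, 10))    # benign worker gradients
attacked = benign.copy()
attacked[0] += 50.0                            # one Byzantine worker
print(flag_suspicious_gradients(attacked))
```

Note that such per-round distance filters are exactly what the quoted sentence says is insufficient: each gradient coordinate has its own distribution, so a single global threshold is a crude proxy.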
no code implementations • 29 Sep 2021 • Simon Luo, Feng Zhou, Lamiae Azizi, Mahito Sugiyama
We present the Additive Poisson Process (APP), a novel framework that can model the higher-order interaction effects of the intensity functions in Poisson processes using projections into a lower-dimensional space.
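The core modelling idea, combining lower-dimensional intensity functions additively in log-space to form a joint intensity, can be sketched as follows. The one-dimensional log-intensity functions here are toy choices for illustration, not forms used in the paper.

```python
import numpy as np

# Toy 1D log-intensity functions (assumed forms, illustration only).
def log_lam1(x):  # marginal effect along the first axis
    return -0.5 * x**2

def log_lam2(y):  # marginal effect along the second axis
    return -np.abs(y)

def additive_intensity(x, y):
    """Joint intensity built additively in log-space from
    lower-dimensional projections, the core idea behind APP."""
    return np.exp(log_lam1(x) + log_lam2(y))

xs, ys = np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-2, 2, 5))
lam = additive_intensity(xs, ys)   # positive intensity on a 5x5 grid
```

Additivity in log-space keeps the intensity strictly positive and lets higher-order interaction terms be added or dropped independently.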
no code implementations • NeurIPS Workshop DL-IG 2020 • Simon Luo, Feng Zhou, Lamiae Azizi, Mahito Sugiyama
Learning of the model is achieved via convex optimization, thanks to the dually flat statistical manifold generated by the log-linear model.
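To illustrate why convexity matters here, the following is a minimal sketch of maximum-likelihood fitting for a toy discrete log-linear model p(x) ∝ exp(θ·φ(x)), whose negative log-likelihood is convex in θ. The feature map and data below are invented for illustration; this is not the paper's model or code.

```python
import numpy as np

# Features of 4 discrete outcomes and their observed frequencies
# (toy data, for illustration only).
phi = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
counts = np.array([10., 30., 30., 30.])
emp_mean = counts @ phi / counts.sum()   # empirical feature mean

theta = np.zeros(2)
for _ in range(2000):
    logits = phi @ theta
    p = np.exp(logits - logits.max())    # model distribution
    p /= p.sum()
    grad = emp_mean - p @ phi            # gradient of mean log-likelihood
    theta += 0.5 * grad                  # ascent on a concave objective

# At the optimum the model's feature mean matches the empirical mean.
model_mean = (np.exp(phi @ theta) / np.exp(phi @ theta).sum()) @ phi
```

Because the objective is concave in θ, plain gradient ascent converges to the global maximum, matching the model's expected features to the empirical ones.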
no code implementations • NeurIPS Workshop DL-IG 2020 • Simon Luo, Sally Cripps, Mahito Sugiyama
We present a novel perspective on deep learning architectures using a partial order structure, which is naturally incorporated into the information geometric formulation of the log-linear model.
no code implementations • 16 Jun 2020 • Simon Luo, Feng Zhou, Lamiae Azizi, Mahito Sugiyama
We present the Additive Poisson Process (APP), a novel framework that can model the higher-order interaction effects of the intensity functions in stochastic processes using lower dimensional projections.
no code implementations • 9 Dec 2019 • Harrison Nguyen, Simon Luo, Fabio Ramos
On the other hand, only a smaller fraction of examples contain all modalities (paired data), and, furthermore, each modality is high-dimensional compared to the number of data points.
1 code implementation • 25 Sep 2019 • Simon Luo, Lamiae Azizi, Mahito Sugiyama
We present a novel blind source separation (BSS) method, called information geometric blind source separation (IGBSS).
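For context on the BSS task itself, the following is a classical textbook-style baseline (whitening plus a rotation search that maximises non-Gaussianity), shown only to illustrate what blind source separation does; it is not the information-geometric IGBSS method of the paper, and the sources and mixing matrix are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
S = np.vstack([np.sign(rng.standard_normal(n)),            # +/-1 source
               rng.uniform(-np.sqrt(3), np.sqrt(3), n)])   # uniform source
A = np.array([[1.0, 0.6], [0.4, 1.0]])                     # mixing matrix
X = A @ S                                                  # observed mixtures

# Whiten the mixtures to unit covariance.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# Search the rotation angle that maximises |excess kurtosis|,
# a simple non-Gaussianity criterion (ICA-style heuristic).
def kurt(u):
    return np.mean(u**4) - 3.0

best = max(np.linspace(0, np.pi, 361),
           key=lambda t: abs(kurt(np.cos(t) * Z[0] + np.sin(t) * Z[1])))
R = np.array([[np.cos(best), np.sin(best)],
              [-np.sin(best), np.cos(best)]])
S_hat = R @ Z   # recovered sources, up to permutation and sign
```

As with any BSS method, the sources are only recoverable up to permutation and sign, which the correlation-based check below accounts for.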
1 code implementation • 28 Jun 2019 • Simon Luo, Mahito Sugiyama
However, it is well known that increasing the number of parameters also increases the complexity of the model, which leads to a bias-variance trade-off.
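The trade-off the sentence refers to can be seen in a standard toy experiment, fitting polynomials of increasing degree to noisy samples of a sine wave: training error keeps falling with model capacity, while held-out error can worsen once capacity outgrows the data. This is a generic illustration, unrelated to the paper's specific model.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0, 1, 20))
y_train = np.sin(2 * np.pi * x_train) + 0.3 * rng.standard_normal(20)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)          # noiseless ground truth

def errors(degree):
    """Train/test MSE of a least-squares polynomial fit."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train, test

for d in (1, 5, 9):
    tr, te = errors(d)
    print(f"degree {d:2d}: train MSE {tr:.3f}, test MSE {te:.3f}")
```

Because the polynomial families are nested, training error is monotonically non-increasing in degree, whereas test error reflects the bias-variance balance.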