Search Results for author: Simon Luo

Found 8 papers, 2 papers with code

MANDERA: Malicious Node Detection in Federated Learning via Ranking

no code implementations · 22 Oct 2021 · Wanchuang Zhu, Benjamin Zi Hao Zhao, Simon Luo, Ke Deng

Federated learning is a distributed learning paradigm that seeks to preserve the privacy of each participating node's data.

Federated Learning

Additive Poisson Process: Learning Intensity of Higher-Order Interaction in Poisson Processes

no code implementations · 29 Sep 2021 · Simon Luo, Feng Zhou, Lamiae Azizi, Mahito Sugiyama

We present the Additive Poisson Process (APP), a novel framework that can model the higher-order interaction effects of the intensity functions in Poisson processes using projections into lower-dimensional space.

Additive models
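The additive, lower-dimensional structure mentioned in the abstract can be illustrated with a minimal sketch. The component functions and dimensions below are hypothetical examples chosen for illustration, not the paper's actual model:

```python
import numpy as np

# Hypothetical 2-D illustration: the joint log-intensity of a Poisson
# process is approximated by a sum of one-dimensional log-intensities,
# i.e. an additive model built from lower-dimensional projections.
def log_intensity_x(x):
    return np.sin(x)       # illustrative 1-D component along x

def log_intensity_y(y):
    return 0.1 * y         # illustrative 1-D component along y

def joint_intensity(x, y):
    # Interaction effects are approximated additively in log-space,
    # so the joint intensity is a product of lower-dimensional parts.
    return np.exp(log_intensity_x(x) + log_intensity_y(y))

lam = joint_intensity(1.0, 2.0)  # intensity at a single point
```

In the APP the analogous low-dimensional components are intensity functions learned from event data rather than fixed functions as here.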

A Deep Architecture for Log-Linear Models

no code implementations · NeurIPS Workshop DL-IG 2020 · Simon Luo, Sally Cripps, Mahito Sugiyama

We present a novel perspective on deep learning architectures using a partial order structure, which is naturally incorporated into the information geometric formulation of the log-linear model.

Learning Joint Intensity in a Multivariate Poisson Process on Statistical Manifolds

no code implementations · NeurIPS Workshop DL-IG 2020 · Simon Luo, Feng Zhou, Lamiae Azizi, Mahito Sugiyama

Learning of the model is achieved via convex optimization, thanks to the dually flat statistical manifold generated by the log-linear model.

Additive models
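The convexity claim can be illustrated with a generic log-linear (exponential-family) model. The feature matrix, counts, and learning rate below are made up for the sketch and are unrelated to the paper's statistical-manifold construction:

```python
import numpy as np

# Generic log-linear model over a 5-element sample space:
#   p_theta(i) ∝ exp(sum_j theta_j * F[i, j]).
# Its negative log-likelihood is convex in the natural parameters theta,
# so plain gradient descent reaches the global optimum.
F = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [-1.0, 0.0],
              [0.0, -1.0]])                   # hypothetical feature matrix
counts = np.array([10.0, 3.0, 7.0, 2.0, 8.0])  # hypothetical observed counts

def nll(theta):
    logits = F @ theta
    log_z = np.log(np.exp(logits).sum())       # log partition function
    return log_z - counts @ logits / counts.sum()

def grad(theta):
    logits = F @ theta
    p = np.exp(logits - logits.max())
    p /= p.sum()
    # Expected features under the model minus empirical feature averages.
    return F.T @ p - F.T @ (counts / counts.sum())

theta = np.zeros(2)
for _ in range(10000):
    theta -= 0.1 * grad(theta)                 # convex loss: no local optima
```

Gradient descent is only one option; because the loss is convex, any standard convex solver would find the same optimum.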

Additive Poisson Process: Learning Intensity of Higher-Order Interaction in Stochastic Processes

no code implementations · 16 Jun 2020 · Simon Luo, Feng Zhou, Lamiae Azizi, Mahito Sugiyama

We present the Additive Poisson Process (APP), a novel framework that can model the higher-order interaction effects of the intensity functions in stochastic processes using lower-dimensional projections.

Additive models

Semi-supervised Learning Approach to Generate Neuroimaging Modalities with Adversarial Training

no code implementations · 9 Dec 2019 · Harrison Nguyen, Simon Luo, Fabio Ramos

On the other hand, there is a smaller fraction of examples that contain all modalities (\emph{paired} data), and furthermore each modality is high dimensional compared to the number of datapoints.

Hierarchical Probabilistic Model for Blind Source Separation via Legendre Transformation

1 code implementation · 25 Sep 2019 · Simon Luo, Lamiae Azizi, Mahito Sugiyama

We present a novel blind source separation (BSS) method, called information geometric blind source separation (IGBSS).

Time Series

Bias-Variance Trade-Off in Hierarchical Probabilistic Models Using Higher-Order Feature Interactions

1 code implementation · 28 Jun 2019 · Simon Luo, Mahito Sugiyama

However, it is well known that increasing the number of parameters also increases the complexity of the model, which leads to a bias-variance trade-off.
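The trade-off described here can be demonstrated with a generic polynomial-regression sketch, unrelated to the paper's hierarchical model: adding parameters keeps lowering the training error, while the fit becomes increasingly sensitive to noise in the data.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)  # noisy target

def train_mse(degree):
    # Least-squares polynomial fit; higher degree = more parameters.
    coeffs = np.polyfit(x, y, degree)
    return float(np.mean((y - np.polyval(coeffs, x)) ** 2))

# Training error can only go down as the model class grows (nested bases),
# but the extra parameters increasingly fit the noise, raising variance.
errors = [train_mse(d) for d in (1, 3, 9)]
```

A degree-1 fit underfits the sine (high bias); a degree-9 fit tracks the noise (high variance); the paper's higher-order feature interactions face the same tension.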
