Search Results for author: Joe Suzuki

Found 10 papers, 1 paper with code

A Theoretical Analysis of the BDeu Scores in Bayesian Network Structure Learning

no code implementations • 15 Jul 2016 • Joe Suzuki

In Bayesian network structure learning (BNSL), we need the prior probability over structures and parameters.

Causal Discovery in a Binary Exclusive-or Skew Acyclic Model: BExSAM

no code implementations • 22 Jan 2014 • Takanori Inazumi, Takashi Washio, Shohei Shimizu, Joe Suzuki, Akihiro Yamamoto, Yoshinobu Kawahara

Discovering causal relations among observed variables in a given data set is a major objective in studies of statistics and artificial intelligence.

Causal Discovery

Converting ADMM to a Proximal Gradient for Efficient Sparse Estimation

1 code implementation • 22 Apr 2021 • Ryosuke Shimmura, Joe Suzuki

In sparse estimation, such as fused lasso and convex clustering, we apply either the proximal gradient method or the alternating direction method of multipliers (ADMM) to solve the problem.

Clustering
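
The proximal gradient method named in the abstract above can be illustrated with a generic ISTA sketch for the plain lasso. This is only a minimal illustration of the proximal gradient idea (a gradient step on the smooth least-squares term followed by the soft-thresholding prox of the l1 penalty), not the paper's ADMM-to-proximal-gradient conversion; the function names and all constants are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(X, y, lam, n_iter=500):
    # Proximal gradient (ISTA) for (1/2n)||y - X b||^2 + lam * ||b||_1:
    # gradient step on the least-squares term, then the l1 prox.
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n  # Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta

# Tiny demo on synthetic sparse data (all values illustrative).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
beta_true = np.zeros(10)
beta_true[0], beta_true[1] = 3.0, -2.0
y = X @ beta_true + 0.1 * rng.standard_normal(200)
beta_hat = ista_lasso(X, y, lam=0.1)
print(beta_hat.round(2))  # large entries near 3 and -2, rest near zero
```

The soft-thresholding step is what produces exact zeros, which is why proximal gradient methods are natural for fused lasso and convex clustering objectives.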

Efficient proximal gradient algorithms for joint graphical lasso

no code implementations • 16 Jul 2021 • Jie Chen, Ryosuke Shimmura, Joe Suzuki

We consider learning an undirected graphical model from sparse data.

Dropout Drops Double Descent

no code implementations • 25 May 2023 • Tian-Le Yang, Joe Suzuki

Our paper posits that, at the optimal dropout rate, the test error in linear regression decreases monotonically with increasing sample size.

Regression

Functional Linear Non-Gaussian Acyclic Model for Causal Discovery

no code implementations • 17 Jan 2024 • Tian-Le Yang, Kuang-Yao Lee, Kun Zhang, Joe Suzuki

To expand this concept, we extend the notion of variables to encompass vectors and even functions, leading to the Functional Linear Non-Gaussian Acyclic Model (Func-LiNGAM).

Causal Discovery • EEG

Generalization of LiNGAM that allows confounding

no code implementations • 30 Jan 2024 • Joe Suzuki, Tian-Le Yang

LiNGAM determines the variable order from cause to effect using additive noise models, but it faces challenges with confounding.
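
The additive noise model that LiNGAM builds on can be sketched with a toy two-variable system: each effect is a linear function of its causes plus independent non-Gaussian noise. This is a generic illustration of that model class, not the confounding-tolerant generalization the abstract describes; all variable names and constants are made up.

```python
import numpy as np

# Toy LiNGAM-style data: cause x, effect y = 2x + noise, both noises
# non-Gaussian (uniform). Independence of the noise terms is what lets
# LiNGAM-type methods identify the causal order.
rng = np.random.default_rng(0)
n = 100_000
e1 = rng.uniform(-1.0, 1.0, n)   # non-Gaussian noise driving the cause
e2 = rng.uniform(-1.0, 1.0, n)   # non-Gaussian noise driving the effect
x = e1                           # cause
y = 2.0 * x + e2                 # effect: additive noise model

# Regressing the effect on the cause recovers the coefficient, and the
# residual is (by construction) independent of the cause.
b = np.cov(x, y)[0, 1] / np.var(x)
resid = y - b * x
print(round(b, 3))               # close to the true coefficient 2.0
```

In the anti-causal direction the regression residual is no longer independent of the regressor, which is the asymmetry LiNGAM exploits; full methods test independence (not mere uncorrelatedness), which requires the non-Gaussianity assumed here.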

Learning under Singularity: An Information Criterion improving WBIC and sBIC

no code implementations • 20 Feb 2024 • Lirui Liu, Joe Suzuki

It incorporates the empirical loss from the Widely Applicable Information Criterion (WAIC) to represent the goodness of fit to the statistical model, along with a penalty term similar to that of sBIC.

Model Selection
