Search Results for author: Shota Saito

Found 19 papers, 8 papers with code

Probabilistic Model-Based Dynamic Architecture Search

no code implementations • ICLR 2019 • Nozomu Yoshinari, Kento Uchida, Shota Saito, Shinichi Shirakawa, Youhei Akimoto

The experimental results show that the proposed architecture search method is fast and can achieve comparable performance to the existing methods.

Image Classification • Neural Architecture Search

CMA-ES for Discrete and Mixed-Variable Optimization on Sets of Points

no code implementations • 23 Aug 2024 • Kento Uchida, Ryoki Hamano, Masahiro Nomura, Shota Saito, Shinichi Shirakawa

Discrete and mixed-variable optimization problems arise in many real-world applications.

CMA-ES for Safe Optimization

no code implementations • 17 May 2024 • Kento Uchida, Ryoki Hamano, Masahiro Nomura, Shota Saito, Shinichi Shirakawa

This optimization setting is known as safe optimization and is formulated as a specialized type of constrained optimization problem with constraints on the safety functions.

Bayesian Optimization
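
As a rough sketch of that formulation (the symbols and the inequality direction are assumptions for illustration, not taken from the paper): the objective is minimized while every point evaluated during the search, not only the final solution, must keep each safety function $s_j$ within its threshold $h_j$:

$$\min_{x \in \mathcal{X}} f(x) \quad \text{subject to} \quad s_j\bigl(x^{(t)}\bigr) \le h_j \ \text{ for all safety functions } j \text{ and all evaluated points } x^{(t)}.$$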

Bandits with Abstention under Expert Advice

1 code implementation • 22 Feb 2024 • Stephen Pasteris, Alberto Rumi, Maximilian Thiessen, Shota Saito, Atsushi Miyauchi, Fabio Vitale, Mark Herbster

We study the classic problem of prediction with expert advice under bandit feedback.
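
The setting being extended here is the one solved by the classic EXP4 algorithm. A minimal numpy sketch of a simplified loss-based EXP4 (without the abstention option studied in the paper; `advice`, `loss_fn`, and the learning rate are placeholders) is:

```python
import numpy as np

def exp4(advice, loss_fn, eta=0.1, rng=None):
    """Minimal EXP4 sketch: bandit feedback with expert advice (no abstention).

    advice: array of shape (T, N, K); advice[t, i] is expert i's probability
            distribution over the K actions at round t.
    loss_fn: loss_fn(t, a) -> loss in [0, 1] of playing action a at round t.
    """
    rng = rng or np.random.default_rng(0)
    T, N, K = advice.shape
    q = np.full(N, 1.0 / N)          # weights over experts
    total_loss = 0.0
    for t in range(T):
        p = q @ advice[t]            # mixture distribution over actions
        a = rng.choice(K, p=p)       # play a sampled action
        loss = loss_fn(t, a)
        total_loss += loss
        # importance-weighted loss estimate for the played action only
        est = np.zeros(K)
        est[a] = loss / p[a]
        # each expert is charged the estimated loss of its own advice
        expert_loss = advice[t] @ est
        q *= np.exp(-eta * expert_loss)
        q /= q.sum()
    return total_loss
```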

Multi-class Graph Clustering via Approximated Effective $p$-Resistance

1 code implementation • 14 Jun 2023 • Shota Saito, Mark Herbster

We prove upper and lower bounds on this approximation and observe that it is exact when the graph is a tree.

Clustering • Graph Clustering
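
For context on that exactness claim: in the classical $p=2$ case, the effective resistance can be computed from the pseudoinverse of the graph Laplacian, and on a tree it equals the length of the unique path between the two nodes. A quick numerical check of that standard fact (not of the paper's $p$-resistance approximation) is:

```python
import numpy as np

# A small unweighted path graph 0-1-2-3 (a tree).
n = 4
edges = [(0, 1), (1, 2), (2, 3)]
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1

Lp = np.linalg.pinv(L)  # Moore-Penrose pseudoinverse of the Laplacian

def effective_resistance(u, v):
    e = np.zeros(n); e[u], e[v] = 1.0, -1.0
    return float(e @ Lp @ e)

# On a tree, the (p=2) effective resistance equals the path length.
print(round(effective_resistance(0, 3), 6))  # -> 3.0 (three unit edges)
```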

Prediction Algorithms Achieving Bayesian Decision Theoretical Optimality Based on Decision Trees as Data Observation Processes

no code implementations • 12 Jun 2023 • Yuta Nakahara, Shota Saito, Naoki Ichijo, Koki Kazama, Toshiyasu Matsushima

In the field of decision trees, most previous studies have difficulty ensuring the statistical optimality of predictions on new data and suffer from overfitting, because trees are usually used only to represent prediction functions constructed from the given data.

(1+1)-CMA-ES with Margin for Discrete and Mixed-Integer Problems

no code implementations • 1 May 2023 • Yohei Watanabe, Kento Uchida, Ryoki Hamano, Shota Saito, Masahiro Nomura, Shinichi Shirakawa

The margin correction has previously been applied to the ($\mu/\mu_\mathrm{w}$,$\lambda$)-CMA-ES; this paper introduces the margin correction into the (1+1)-CMA-ES, an elitist version of CMA-ES.

Marginal Probability-Based Integer Handling for CMA-ES Tackling Single- and Multi-Objective Mixed-Integer Black-Box Optimization

1 code implementation • 19 Dec 2022 • Ryoki Hamano, Shota Saito, Masahiro Nomura, Shinichi Shirakawa

If CMA-ES is applied to mixed-integer black-box optimization (MI-BBO) with straightforward discretization, however, the variance corresponding to the integer variables becomes much smaller than the granularity of the discretization before the optimal solution is reached, which leads to stagnation of the optimization.
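
A toy illustration of that stagnation, using plain truncation selection rather than CMA-ES itself (the objective and all numbers are made up for illustration): once the sampling spread of the rounded variable shrinks well below the grid size 1, every candidate rounds to the same suboptimal integer, the ranking carries no information, and the search stops moving.

```python
import numpy as np

rng = np.random.default_rng(0)

# Objective on the discretized (rounded) variable; the optimum is at 5.
f = lambda z: (z - 5) ** 2

mean, sigma = 2.3, 1.0
for it in range(40):
    xs = rng.normal(mean, sigma, size=20)
    losses = f(np.round(xs))                 # straightforward discretization
    elite = xs[np.argsort(losses)[:5]]       # simple truncation selection
    mean, sigma = elite.mean(), elite.std()
    if np.unique(losses).size == 1:
        # every sample rounds to the same integer: selection carries no
        # information, the mean stops moving, and sigma only keeps shrinking
        print(f"stagnated at iter {it}: mean={mean:.2f}, sigma={sigma:.3g}")
        break
```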

Efficient Search of Multiple Neural Architectures with Different Complexities via Importance Sampling

no code implementations • 21 Jul 2022 • Yuhei Noda, Shota Saito, Shinichi Shirakawa

The proposed method allows us to obtain multiple architectures with different complexities in a single architecture search, which reduces the search cost.

Neural Architecture Search

CMA-ES with Margin: Lower-Bounding Marginal Probability for Mixed-Integer Black-Box Optimization

3 code implementations • 26 May 2022 • Ryoki Hamano, Shota Saito, Masahiro Nomura, Shinichi Shirakawa

If CMA-ES is applied to mixed-integer black-box optimization (MI-BBO) with straightforward discretization, however, the variance corresponding to the integer variables becomes much smaller than the granularity of the discretization before the optimal solution is reached, which leads to stagnation of the optimization.
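
The `cmaes` Python package provides a margin-corrected sampler as a `CMAwM` class. A usage sketch along the lines of that package's documented interface (the toy objective and hyperparameters are placeholders; the exact constructor arguments and return values should be checked against the installed version):

```python
import numpy as np
from cmaes import CMAwM  # pip install cmaes; CMAwM is the CMA-ES-with-margin class

# Mixed-integer toy objective: 2 continuous variables + 2 integer variables.
def f(x):
    cont, ints = x[:2], x[2:]
    return float(np.sum(cont ** 2) + np.sum((ints - 3) ** 2))

dim = 4
bounds = np.array([[-5.0, 5.0]] * dim)
steps = np.array([0.0, 0.0, 1.0, 1.0])   # 0 = continuous, 1 = integer granularity

optimizer = CMAwM(mean=np.zeros(dim), sigma=2.0, bounds=bounds, steps=steps)

for generation in range(50):
    solutions = []
    for _ in range(optimizer.population_size):
        x_eval, x_tell = optimizer.ask()   # discretized point and internal point
        solutions.append((x_tell, f(x_eval)))
    optimizer.tell(solutions)
```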

Hypergraph Modeling via Spectral Embedding Connection: Hypergraph Cut, Weighted Kernel $k$-means, and Heat Kernel

1 code implementation • 18 Mar 2022 • Shota Saito

For graph-cut-based spectral clustering, it is common to model real-valued data as a graph by encoding pairwise similarities with a kernel function.

Clustering
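
As a generic instance of that pipeline (not code from the paper): pairwise similarities from a Gaussian kernel define a similarity graph, which is then cut by spectral clustering. In scikit-learn this looks roughly like the following, with placeholder hyperparameters:

```python
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_moons

# Real-valued data; pairwise similarities via a Gaussian (RBF) kernel, then a
# graph-cut based spectral clustering of the induced similarity graph.
X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)
labels = SpectralClustering(
    n_clusters=2,
    affinity="rbf",      # W_ij = exp(-gamma * ||x_i - x_j||^2)
    gamma=20.0,          # kernel bandwidth: a placeholder, tune per dataset
    assign_labels="kmeans",
    random_state=0,
).fit_predict(X)
```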

Probability Distribution on Rooted Trees

no code implementations • 24 Jan 2022 • Yuta Nakahara, Shota Saito, Akira Kamatsuka, Toshiyasu Matsushima

The hierarchical and recursive expressive capability of rooted trees makes them suitable for representing statistical models in various areas, such as data compression, image processing, and machine learning.

Data Compression

Probability Distribution on Full Rooted Trees

no code implementations • 27 Sep 2021 • Yuta Nakahara, Shota Saito, Akira Kamatsuka, Toshiyasu Matsushima

Its parametric representation is suitable for calculating properties of our distribution, such as the mode, expectation, and posterior distribution, using recursive functions.

Data Compression • Model Selection
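
A minimal sketch of the kind of recursion the abstract alludes to, under an assumed parameterization (a node above a maximum depth splits into $k$ children with probability $g$, otherwise it is a leaf; the paper's actual parameterization and notation may differ): the expected number of leaves follows from a single recursion over depth.

```python
def expected_leaves(depth, max_depth, k=2, g=0.5):
    """Expected number of leaves of a random full k-ary rooted tree in which a
    node at depth < max_depth becomes internal (splits into k children) with
    probability g, and nodes at max_depth are always leaves.
    NOTE: an assumed parameterization for illustration, not the paper's."""
    if depth == max_depth:
        return 1.0
    children = expected_leaves(depth + 1, max_depth, k, g)
    return (1.0 - g) * 1.0 + g * k * children

print(expected_leaves(0, max_depth=10))  # -> 6.0 for k=2, g=0.5
```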

Controlling Model Complexity in Probabilistic Model-Based Dynamic Optimization of Neural Network Structures

no code implementations • 15 Jul 2019 • Shota Saito, Shinichi Shirakawa

We focus on probabilistic model-based dynamic neural network structure optimization, which considers the probability distribution of structure parameters and simultaneously optimizes both the distribution parameters and the connection weights using gradient methods.

Neural Architecture Search
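
A toy sketch of that scheme (a Bernoulli distribution over which hidden units are active, two sampled structures per step, weights updated by SGD and the distribution parameters by a ranked-utility natural-gradient-style step; all layer sizes, learning rates, and the clamp range are placeholder assumptions, not the paper's settings):

```python
import torch
import torch.nn as nn

# Hidden units are gated by a binary structure vector m ~ Bernoulli(theta); the
# weights are updated by SGD on sampled structures, and theta by a ranked-utility
# natural-gradient-style step (for Bernoulli expectation parameters, u * (m - theta)).
torch.manual_seed(0)
hidden = 16
net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, 1))
opt = torch.optim.SGD(net.parameters(), lr=0.1)
theta = torch.full((hidden,), 0.5)          # Bernoulli parameters of the structure
eta_theta = 0.1

def loss_with_mask(x, y, m):
    h = torch.relu(net[0](x)) * m           # drop hidden units where m == 0
    return ((net[2](h) - y) ** 2).mean()

x = torch.randn(256, 2); y = (x[:, :1] ** 2 - x[:, 1:]).detach()
for step in range(200):
    masks = (torch.rand(2, hidden) < theta).float()   # lambda = 2 sampled structures
    losses = [loss_with_mask(x, y, m) for m in masks]
    # weight update on the averaged sampled-structure loss
    opt.zero_grad(); (0.5 * (losses[0] + losses[1])).backward(); opt.step()
    # structure update: utility +1 for the better sample, -1 for the worse one
    u = torch.tensor([1.0, -1.0]) if losses[0] < losses[1] else torch.tensor([-1.0, 1.0])
    theta = theta + eta_theta * (u[:, None] * (masks - theta)).mean(0)
    theta = theta.clamp(1.0 / hidden, 1.0 - 1.0 / hidden)  # keep probabilities interior
```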

Adaptive Stochastic Natural Gradient Method for One-Shot Neural Architecture Search

1 code implementation • 21 May 2019 • Youhei Akimoto, Shinichi Shirakawa, Nozomu Yoshinari, Kento Uchida, Shota Saito, Kouhei Nishida

It accepts an arbitrary search space (widely applicable) and enables gradient-based simultaneous optimization of the weights and the architecture (fast).

Image Classification • Neural Architecture Search

Parameterless Stochastic Natural Gradient Method for Discrete Optimization and its Application to Hyper-Parameter Optimization for Neural Network

no code implementations • 18 Sep 2018 • Kouhei Nishida, Hernan Aguirre, Shota Saito, Shinichi Shirakawa, Youhei Akimoto

This paper proposes a parameterless black-box discrete optimization (BBDO) algorithm based on information geometric optimization, a recent framework for black-box optimization using the stochastic natural gradient.
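
For background on the framework named here (not the paper's parameterless mechanism, whose learning-rate adaptation is omitted): information geometric optimization with a Bernoulli model updates the distribution parameters by a stochastic natural gradient of ranked utilities, which for the Bernoulli expectation parameters reduces to $\theta \leftarrow \theta + \eta\,\frac{1}{\lambda}\sum_i u_i (x_i - \theta)$. A minimal sketch with fixed, placeholder hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def onemax(x):                      # toy binary objective: count of ones
    return x.sum()

dim, lam, eta = 20, 8, 0.1
theta = np.full(dim, 0.5)           # Bernoulli parameters

for t in range(200):
    xs = (rng.random((lam, dim)) < theta).astype(float)
    fs = np.array([onemax(x) for x in xs])
    # ranked utilities: +1 for the better half, -1 for the worse half
    order = np.argsort(-fs)
    u = np.zeros(lam); u[order[: lam // 2]] = 1.0; u[order[lam // 2:]] = -1.0
    # stochastic natural gradient step for the Bernoulli model: u * (x - theta)
    theta += eta * (u[:, None] * (xs - theta)).mean(axis=0)
    theta = np.clip(theta, 1.0 / dim, 1.0 - 1.0 / dim)

print("P(x_i = 1) after optimization:", theta.round(2))
```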

Hypergraph $p$-Laplacian: A Differential Geometry View

2 code implementations • 22 Nov 2017 • Shota Saito, Danilo P. Mandic, Hideyuki Suzuki

The proposed $p$-Laplacian is shown to outperform standard hypergraph Laplacians in experiments on hypergraph semi-supervised learning and normalized cut settings.
