no code implementations • ICLR 2019 • Nozomu Yoshinari, Kento Uchida, Shota Saito, Shinichi Shirakawa, Youhei Akimoto
The experimental results show that the proposed architecture search method is fast and can achieve comparable performance to the existing methods.
no code implementations • 23 Aug 2024 • Kento Uchida, Ryoki Hamano, Masahiro Nomura, Shota Saito, Shinichi Shirakawa
Discrete and mixed-variable optimization problems arise in many real-world applications.
no code implementations • 17 May 2024 • Kento Uchida, Ryoki Hamano, Masahiro Nomura, Shota Saito, Shinichi Shirakawa
This optimization setting is known as safe optimization and is formulated as a specialized type of constrained optimization problem with constraints for safety functions.
1 code implementation • 16 May 2024 • Ryoki Hamano, Shota Saito, Masahiro Nomura, Kento Uchida, Shinichi Shirakawa
CatCMA updates the parameters of the joint probability distribution in the natural gradient direction.
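As a rough illustration of the kind of update described here (not the actual CatCMA algorithm, which couples a Gaussian and a categorical distribution), the following sketch shows a rank-weighted natural-gradient step for the categorical part of such a joint distribution; the utility function and step size are hypothetical.

```python
import numpy as np

def categorical_ng_update(theta, samples, weights, lr=0.1):
    """One natural-gradient step for a categorical distribution.

    theta   : (K,) category probabilities
    samples : (n,) sampled category indices
    weights : (n,) nonnegative utility weights summing to 1 (better sample = larger weight)
    """
    onehot = np.eye(len(theta))[samples]   # (n, K) one-hot encoding of samples
    grad = weights @ (onehot - theta)      # estimated natural gradient of expected utility
    theta = theta + lr * grad
    theta = np.clip(theta, 1e-8, None)
    return theta / theta.sum()             # renormalize to a probability vector

# toy usage: a utility that rewards category 2 pulls the distribution toward it
rng = np.random.default_rng(0)
theta = np.full(3, 1.0 / 3.0)
for _ in range(50):
    x = rng.choice(3, size=8, p=theta)
    w = (x == 2).astype(float)             # hypothetical utility: 1 if category 2
    w = w / max(w.sum(), 1e-8)
    theta = categorical_ng_update(theta, x, w)
```

For the categorical family this natural-gradient step has a simple closed form (the Fisher information cancels), which is why the update is just a weighted average of one-hot samples minus the current parameters.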
1 code implementation • 22 Feb 2024 • Stephen Pasteris, Alberto Rumi, Maximilian Thiessen, Shota Saito, Atsushi Miyauchi, Fabio Vitale, Mark Herbster
We study the classic problem of prediction with expert advice under bandit feedback.
1 code implementation • 14 Jun 2023 • Shota Saito, Mark Herbster
We prove upper and lower bounds on this approximation and observe that it is exact when the graph is a tree.
no code implementations • 12 Jun 2023 • Yuta Nakahara, Shota Saito, Naoki Ichijo, Koki Kazama, Toshiyasu Matsushima
In the field of decision trees, most previous studies have difficulty ensuring the statistical optimality of predictions for new data and suffer from overfitting, because trees are usually used only to represent prediction functions constructed from the given data.
no code implementations • 1 May 2023 • Yohei Watanabe, Kento Uchida, Ryoki Hamano, Shota Saito, Masahiro Nomura, Shinichi Shirakawa
The margin correction has previously been applied to ($\mu/\mu_\mathrm{w}$,$\lambda$)-CMA-ES; this paper introduces it into (1+1)-CMA-ES, an elitist variant of CMA-ES.
1 code implementation • 19 Dec 2022 • Ryoki Hamano, Shota Saito, Masahiro Nomura, Shinichi Shirakawa
However, if the CMA-ES is applied to the MI-BBO with straightforward discretization, the variance corresponding to the integer variables becomes much smaller than the granularity of the discretization before the optimal solution is reached, which leads to stagnation of the optimization.
no code implementations • 30 Aug 2022 • Shoma Shimizu, Takayuki Nishio, Shota Saito, Yoichi Hirose, Chen Yen-Hsiu, Shinichi Shirakawa
This paper proposes a neural architecture search (NAS) method for split computing.
no code implementations • 21 Jul 2022 • Yuhei Noda, Shota Saito, Shinichi Shirakawa
The proposed method allows us to obtain multiple architectures with different complexities in a single architecture search, thereby reducing the search cost.
3 code implementations • 26 May 2022 • Ryoki Hamano, Shota Saito, Masahiro Nomura, Shinichi Shirakawa
However, if the CMA-ES is applied to the MI-BBO with straightforward discretization, the variance corresponding to the integer variables becomes much smaller than the granularity of the discretization before the optimal solution is reached, which leads to stagnation of the optimization.
1 code implementation • 18 Mar 2022 • Shota Saito
For graph-cut-based spectral clustering, it is common to model real-valued data as a graph by encoding pairwise similarities with a kernel function.
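A minimal sketch of the standard construction this sentence refers to (not the paper's proposed method): build a Gaussian-kernel affinity matrix from real-valued data, form the unnormalized graph Laplacian, and embed the points with its bottom eigenvectors. The bandwidth `sigma` and the toy data are assumptions for illustration.

```python
import numpy as np

def rbf_affinity(X, sigma=1.0):
    """Pairwise Gaussian-kernel similarities: W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)               # no self-loops
    return W

def spectral_embedding(W, k=2):
    """Eigenvectors of the unnormalized Laplacian L = D - W for the k smallest eigenvalues."""
    L = np.diag(W.sum(1)) - W
    vals, vecs = np.linalg.eigh(L)         # eigenvalues in ascending order
    return vecs[:, :k]

# two well-separated blobs: the sign of the second eigenvector (Fiedler vector)
# recovers the two clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (10, 2)),
               rng.normal(5.0, 0.1, (10, 2))])
emb = spectral_embedding(rbf_affinity(X, sigma=1.0), k=2)
labels = (emb[:, 1] > 0).astype(int)
```

For nearly disconnected graphs the second-smallest eigenvector is approximately piecewise constant on the connected components, which is why thresholding it at zero separates the two blobs.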
no code implementations • 24 Jan 2022 • Yuta Nakahara, Shota Saito, Akira Kamatsuka, Toshiyasu Matsushima
The hierarchical and recursive expressive capability of rooted trees is applicable to represent statistical models in various areas, such as data compression, image processing, and machine learning.
no code implementations • 27 Sep 2021 • Yuta Nakahara, Shota Saito, Akira Kamatsuka, Toshiyasu Matsushima
Its parametric representation is suitable for calculating properties of our distribution, such as the mode, expectation, and posterior distribution, using recursive functions.
no code implementations • 15 Jul 2019 • Shota Saito, Shinichi Shirakawa
We focus on the probabilistic model-based dynamic neural network structure optimization that considers the probability distribution of structure parameters and simultaneously optimizes both the distribution parameters and connection weights based on gradient methods.
1 code implementation • 21 May 2019 • Youhei Akimoto, Shinichi Shirakawa, Nozomu Yoshinari, Kento Uchida, Shota Saito, Kouhei Nishida
It accepts an arbitrary search space (widely applicable) and enables gradient-based simultaneous optimization of weights and architecture (fast).
no code implementations • 18 Sep 2018 • Kouhei Nishida, Hernan Aguirre, Shota Saito, Shinichi Shirakawa, Youhei Akimoto
This paper proposes a parameterless BBDO algorithm based on information geometric optimization, a recent framework for black box optimization using stochastic natural gradient.
2 code implementations • 22 Nov 2017 • Shota Saito, Danilo P. Mandic, Hideyuki Suzuki
The proposed $p$-Laplacian is shown to outperform standard hypergraph Laplacians in experiments on hypergraph semi-supervised learning and normalized cut settings.