no code implementations • 2 Oct 2024 • Kento Masui, Mayu Otani, Masahiro Nomura, Hideki Nakayama
Through a series of experiments and a user study, we show that our method can quickly transfer the style of an image without additional training.
no code implementations • 23 Aug 2024 • Kento Uchida, Ryoki Hamano, Masahiro Nomura, Shota Saito, Shinichi Shirakawa
Discrete and mixed-variable optimization problems arise in many real-world applications.
no code implementations • 20 Aug 2024 • Tatsuhiro Shimizu, Koichi Tanaka, Ren Kishimoto, Haruka Kiyohara, Masahiro Nomura, Yuta Saito
We explore off-policy evaluation and learning (OPE/L) in contextual combinatorial bandits (CCB), where a policy selects a subset of the action space.
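For context, the standard inverse propensity scoring (IPS) baseline that such work starts from can be sketched as follows (a generic illustration, not the estimator proposed here):

```python
import numpy as np

def ips_estimate(rewards, pi_e_probs, pi_0_probs):
    """Vanilla IPS estimate of a target policy's value from logged data.

    rewards:     observed rewards of the logged actions
    pi_e_probs:  target-policy probabilities of those actions
    pi_0_probs:  logging-policy probabilities of those actions
    """
    rewards = np.asarray(rewards, dtype=float)
    w = np.asarray(pi_e_probs, dtype=float) / np.asarray(pi_0_probs, dtype=float)
    return float(np.mean(w * rewards))
```

In CCB the logged action is an entire subset, so these importance weights range over combinatorially many subsets and the estimator's variance grows accordingly; that failure mode is what work in this setting targets.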
1 code implementation • 24 Jun 2024 • Ryoki Hamano, Shinichi Shirakawa, Masahiro Nomura
While the rank-one update adjusts the covariance matrix to increase the likelihood of generating a solution in the direction of the evolution path, this idea has been difficult to formulate and interpret as a natural gradient method, unlike the rank-$\mu$ update.
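For reference, the rank-one update combines the current covariance matrix with the outer product of the evolution path; a minimal numpy sketch with an illustrative learning rate:

```python
import numpy as np

def rank_one_update(C, p_c, c1=0.1):
    """Rank-one covariance update of CMA-ES: shift probability mass
    toward the direction of the evolution path p_c."""
    return (1.0 - c1) * C + c1 * np.outer(p_c, p_c)
```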
no code implementations • 17 May 2024 • Kento Uchida, Ryoki Hamano, Masahiro Nomura, Shota Saito, Shinichi Shirakawa
This optimization setting is known as safe optimization and is formulated as a specialized type of constrained optimization problem with constraints for safety functions.
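In symbols (notation mine): minimize $f(x)$ over $x \in \mathcal{X}$ subject to $s_j(x) \le h_j$ for $j = 1, \dots, m$, where each $s_j$ is a safety function with threshold $h_j$; unlike ordinary constrained optimization, the constraints must hold at (almost) every point queried during the search, not only at the final solution.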
1 code implementation • 16 May 2024 • Ryoki Hamano, Shota Saito, Masahiro Nomura, Kento Uchida, Shinichi Shirakawa
CatCMA updates the parameters of the joint probability distribution in the natural gradient direction.
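As background for this update, a minimal numpy sketch of the textbook natural-gradient step for the categorical part of such a joint distribution (not CatCMA itself; names and the learning rate are illustrative):

```python
import numpy as np

def categorical_natural_gradient_step(q, samples_onehot, weights, eta=0.1):
    """One natural-gradient ascent step for a categorical distribution
    with probability vector q (shape (K,)), given one-hot samples
    (shape (n, K)) and utility weights (shape (n,)).

    For the categorical family in its expectation parameterization, the
    natural gradient of the weighted log-likelihood is sum_i w_i (c_i - q).
    """
    grad = np.sum(weights[:, None] * (samples_onehot - q[None, :]), axis=0)
    q = q + eta * grad
    q = np.clip(q, 1e-8, None)  # keep probabilities valid
    return q / q.sum()          # renormalize for numerical safety
```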
no code implementations • 23 Apr 2024 • Yuta Saito, Masahiro Nomura
There has been growing interest in off-policy evaluation in application areas such as recommender systems and personalized medicine.
1 code implementation • 3 Feb 2024 • Haruka Kiyohara, Masahiro Nomura, Yuta Saito
The PseudoInverse (PI) estimator has been introduced to mitigate the variance issue by assuming linearity in the reward function, but this can result in significant bias, as the linearity assumption is hard to verify from observed data and is often substantially violated.
2 code implementations • 2 Feb 2024 • Masahiro Nomura, Masashi Shibata
To address the need for an accessible yet potent tool in this domain, we developed cmaes, a simple and practical Python library for CMA-ES.
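A minimal usage sketch, mirroring the library's documented ask-and-tell interface on a toy quadratic:

```python
import numpy as np
from cmaes import CMA

def quadratic(x):
    return (x[0] - 3.0) ** 2 + (10.0 * (x[1] + 2.0)) ** 2

optimizer = CMA(mean=np.zeros(2), sigma=1.3)
for generation in range(50):
    solutions = []
    for _ in range(optimizer.population_size):
        x = optimizer.ask()                  # sample a candidate solution
        solutions.append((x, quadratic(x)))  # evaluate it
    optimizer.tell(solutions)                # update the distribution
```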
2 code implementations • 29 Jan 2024 • Masahiro Nomura, Youhei Akimoto, Isao Ono
The results show that the CMA-ES with the proposed learning rate adaptation works well for multimodal and/or noisy problems without extremely expensive learning rate tuning.
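For practical use, recent versions of the authors' cmaes library expose this adaptation behind a flag; a minimal sketch under the assumption that the installed version supports lr_adapt:

```python
import numpy as np
from cmaes import CMA

# Assumption: the installed cmaes version supports the lr_adapt flag,
# which enables the proposed learning rate adaptation for multimodal
# and/or noisy objectives; the ask-and-tell loop is unchanged.
optimizer = CMA(mean=np.zeros(10), sigma=2.0, lr_adapt=True)
```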
no code implementations • 1 May 2023 • Yohei Watanabe, Kento Uchida, Ryoki Hamano, Shota Saito, Masahiro Nomura, Shinichi Shirakawa
The margin correction has previously been applied to the ($\mu/\mu_\mathrm{w}$,$\lambda$)-CMA-ES; this paper introduces it into the (1+1)-CMA-ES, an elitist version of CMA-ES.
2 code implementations • 7 Apr 2023 • Masahiro Nomura, Youhei Akimoto, Isao Ono
The results demonstrate that, when the proposed learning rate adaptation is used, the CMA-ES with default population size works well on multimodal and/or noisy problems, without the need for extremely expensive learning rate tuning.
1 code implementation • 3 Feb 2023 • Shion Takeno, Masahiro Nomura, Masayuki Karasuyama
This observation motivates us to improve the MCMC-based estimation for the skew GP, for which we show the practical efficiency of Gibbs sampling and derive a low-variance MC estimator.
1 code implementation • 19 Dec 2022 • Ryoki Hamano, Shota Saito, Masahiro Nomura, Shinichi Shirakawa
However, if the CMA-ES is applied to MI-BBO with straightforward discretization, the variance corresponding to the integer variables becomes much smaller than the granularity of the discretization before the optimal solution is reached, which leads to stagnation of the optimization.
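The resulting method, CMA-ES with margin, ships in the authors' cmaes library as CMAwM; a hedged sketch for one continuous and one integer variable, assuming the installed version follows this interface:

```python
import numpy as np
from cmaes import CMAwM

def objective(x):
    # x[0] is continuous; x[1] is snapped to an integer grid by CMAwM.
    return x[0] ** 2 + (x[1] - 2.0) ** 2

bounds = np.array([[-5.0, 5.0], [-5.0, 5.0]])
steps = np.array([0.0, 1.0])  # 0: continuous variable, 1: integer grid
optimizer = CMAwM(mean=np.zeros(2), sigma=2.0, bounds=bounds, steps=steps)
for _ in range(50):
    solutions = []
    for _ in range(optimizer.population_size):
        x_eval, x_tell = optimizer.ask()  # discretized / raw representations
        solutions.append((x_tell, objective(x_eval)))
    optimizer.tell(solutions)
```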
3 code implementations • 26 May 2022 • Ryoki Hamano, Shota Saito, Masahiro Nomura, Shinichi Shirakawa
However, if the CMA-ES is applied to MI-BBO with straightforward discretization, the variance corresponding to the integer variables becomes much smaller than the granularity of the discretization before the optimal solution is reached, which leads to stagnation of the optimization.
2 code implementations • 27 Jan 2022 • Masahiro Nomura, Isao Ono
In this work, we propose a new variant of natural evolution strategies (NES) for high-dimensional black-box optimization problems.
no code implementations • 12 Jan 2022 • Masahiro Kato, Kaito Ariu, Masaaki Imaizumi, Masahiro Nomura, Chao Qin
We show that a strategy following the Neyman allocation rule (Neyman, 1934) is asymptotically optimal when the gap between the expected rewards is small.
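For intuition, the Neyman allocation rule splits a fixed budget across two arms in proportion to their reward standard deviations; a toy illustration with standard deviations assumed known:

```python
# Two-armed example: pull each arm in proportion to its reward standard
# deviation (assumed known here purely for illustration); this minimizes
# the variance of the estimated gap between the two means.
sigmas = [1.0, 3.0]
budget = 400
n1 = round(budget * sigmas[0] / sum(sigmas))  # 100 pulls of arm 1
n2 = budget - n1                              # 300 pulls of arm 2
```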
1 code implementation • 22 Nov 2021 • Masahiro Nomura, Isao Ono
On the other hand, in problems that are difficult to optimize (e.g., multimodal functions), the proposed mechanism makes it possible to set a conservative learning rate when the estimation accuracy of the natural gradient seems to be low, which results in a robust and stable search.
no code implementations • 29 Sep 2021 • Hiroki Naganuma, Taiji Suzuki, Rio Yokota, Masahiro Nomura, Kohta Ishikawa, Ikuro Sato
Generalization measures are intensively studied in the machine learning community to better model generalization gaps.
1 code implementation • 21 Aug 2021 • Masahiro Nomura, Isao Ono
However, DX-NES-IC has the problem that the probability distribution moves slowly on ridge structures.
2 code implementations • 13 Dec 2020 • Masahiro Nomura, Shuhei Watanabe, Youhei Akimoto, Yoshihiko Ozaki, Masaki Onishi
Hyperparameter optimization (HPO), formulated as black-box optimization (BBO), is recognized as essential for automation and high performance of machine learning approaches.
no code implementations • 28 Sep 2020 • Masahiro Nomura, Yuta Saito
How can we conduct efficient hyperparameter optimization for a completely new task?
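One concrete mechanism for transferring knowledge from related tasks (illustrative here, not necessarily this paper's method) is to warm-start the search distribution from evaluations logged on a similar task; the sketch below assumes the get_warm_start_mgd helper available in recent versions of the authors' cmaes library:

```python
import numpy as np
from cmaes import CMA, get_warm_start_mgd

# (x, f(x)) pairs logged while optimizing a similar source task;
# filled with synthetic data here purely for illustration.
rng = np.random.default_rng(0)
source_solutions = [
    (x, float(np.sum(x ** 2))) for x in rng.normal(0.5, 0.3, size=(100, 2))
]

# Estimate a promising multivariate Gaussian from the source task and
# use it as the initial distribution on the new task.
ws_mean, ws_sigma, ws_cov = get_warm_start_mgd(
    source_solutions, gamma=0.1, alpha=0.1
)
optimizer = CMA(mean=ws_mean, sigma=ws_sigma, cov=ws_cov)
```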
no code implementations • 24 Jun 2020 • Masahiro Nomura
In recent years, leveraging parallel and distributed computational resources has become essential for solving computationally expensive problems.
2 code implementations • 18 Jun 2020 • Masahiro Nomura, Yuta Saito
This assumption is, however, often violated in uncertain real-world applications, which motivates the study of learning under covariate shift.
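The standard correction weights each training loss by the density ratio between test and training input distributions; a generic sketch (not this paper's specific estimator), assuming the ratio is known or well estimated:

```python
import numpy as np

def importance_weighted_risk(losses, p_test, p_train):
    """Empirical risk under covariate shift: reweight each training-point
    loss by the density ratio p_test(x) / p_train(x)."""
    w = np.asarray(p_test, dtype=float) / np.asarray(p_train, dtype=float)
    return float(np.mean(w * np.asarray(losses, dtype=float)))
```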
no code implementations • 18 Nov 2019 • Masahiro Nomura, Kenshi Abe
The aim of black-box optimization is to optimize an objective function within the constraints of a given evaluation budget.
1 code implementation • 16 Oct 2019 • Yuta Saito, Masahiro Nomura
We study offline recommender learning from explicit rating feedback in the presence of selection bias.