Search Results for author: Masahiro Nomura

Found 20 papers, 13 papers with code

Hyperparameter Optimization Can Even be Harmful in Off-Policy Learning and How to Deal with It

no code implementations • 23 Apr 2024 • Yuta Saito, Masahiro Nomura

There has been growing interest in off-policy evaluation in areas such as recommender systems and personalized medicine.

Off-Policy Evaluation of Slate Bandit Policies via Optimizing Abstraction

1 code implementation • 3 Feb 2024 • Haruka Kiyohara, Masahiro Nomura, Yuta Saito

The PseudoInverse (PI) estimator has been introduced to mitigate the variance issue by assuming linearity in the reward function, but this can result in significant bias, as the assumption is hard to verify from observed data and is often substantially violated.

Marketing, Multi-Armed Bandits +2

cmaes : A Simple yet Practical Python Library for CMA-ES

2 code implementations • 2 Feb 2024 • Masahiro Nomura, Masashi Shibata

To address the need for an accessible yet potent tool in this domain, we developed cmaes, a simple and practical Python library for CMA-ES.

Transfer Learning

CMA-ES with Learning Rate Adaptation

1 code implementation • 29 Jan 2024 • Masahiro Nomura, Youhei Akimoto, Isao Ono

The results show that the CMA-ES with the proposed learning rate adaptation works well for multimodal and/or noisy problems without extremely expensive learning rate tuning.

(1+1)-CMA-ES with Margin for Discrete and Mixed-Integer Problems

no code implementations • 1 May 2023 • Yohei Watanabe, Kento Uchida, Ryoki Hamano, Shota Saito, Masahiro Nomura, Shinichi Shirakawa

Previously, the margin correction was applied to the ($\mu/\mu_\mathrm{w}$,$\lambda$)-CMA-ES; this paper introduces it into the (1+1)-CMA-ES, an elitist version of CMA-ES.
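
As background on the elitist scheme: a (1+1) strategy keeps a single parent, produces one offspring per iteration, and replaces the parent only on improvement. A minimal pure-Python sketch with the classical 1/5th-success step-size rule (generic background, not the paper's margin correction):

```python
import math
import random

def one_plus_one_es(f, x, sigma=1.0, iters=2000, seed=0):
    """Minimize f with an elitist (1+1)-ES using the 1/5th success rule."""
    rng = random.Random(seed)
    fx = f(x)
    for _ in range(iters):
        # Offspring: isotropic Gaussian mutation of the single parent.
        y = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fy = f(y)
        if fy <= fx:                     # elitist selection: keep improvements only
            x, fx = y, fy
            sigma *= math.exp(1 / 5)     # success: enlarge the step size
        else:
            sigma *= math.exp(-1 / 20)   # failure: shrink the step size
    return x, fx

sphere = lambda v: sum(t * t for t in v)
best_x, best_f = one_plus_one_es(sphere, [3.0, -4.0])
```

The update constants keep the empirical success rate near 1/5; the paper's margin correction additionally lower-bounds marginal probabilities so that discrete coordinates keep moving.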

CMA-ES with Learning Rate Adaptation: Can CMA-ES with Default Population Size Solve Multimodal and Noisy Problems?

2 code implementations • 7 Apr 2023 • Masahiro Nomura, Youhei Akimoto, Isao Ono

The results demonstrate that, when the proposed learning rate adaptation is used, the CMA-ES with default population size works well on multimodal and/or noisy problems, without the need for extremely expensive learning rate tuning.

Towards Practical Preferential Bayesian Optimization with Skew Gaussian Processes

1 code implementation • 3 Feb 2023 • Shion Takeno, Masahiro Nomura, Masayuki Karasuyama

This observation motivates us to improve the MCMC-based estimation for the skew GP, for which we show the practical efficiency of Gibbs sampling and derive a low-variance MC estimator.

Bayesian Optimization, Computational Efficiency +1

Marginal Probability-Based Integer Handling for CMA-ES Tackling Single- and Multi-Objective Mixed-Integer Black-Box Optimization

1 code implementation • 19 Dec 2022 • Ryoki Hamano, Shota Saito, Masahiro Nomura, Shinichi Shirakawa

If the CMA-ES is applied to MI-BBO with straightforward discretization, however, the variance corresponding to the integer variables becomes much smaller than the granularity of the discretization before the optimal solution is reached, causing the optimization to stagnate.

CMA-ES with Margin: Lower-Bounding Marginal Probability for Mixed-Integer Black-Box Optimization

2 code implementations • 26 May 2022 • Ryoki Hamano, Shota Saito, Masahiro Nomura, Shinichi Shirakawa

If the CMA-ES is applied to MI-BBO with straightforward discretization, however, the variance corresponding to the integer variables becomes much smaller than the granularity of the discretization before the optimal solution is reached, causing the optimization to stagnate.
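
The stagnation is easy to reproduce: once the sampling standard deviation falls far below the discretization granularity, essentially every sample rounds to the same integer. A small illustrative numpy sketch (the mean, granularity, and sigma values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
mean, granularity = 2.3, 1.0

unique_counts = {}
for sigma in (1.0, 0.3, 0.03):
    samples = rng.normal(mean, sigma, size=1000)
    ints = np.round(samples / granularity) * granularity
    # With sigma far below the granularity, every sample rounds to the
    # same integer, so the integer coordinate can no longer move.
    unique_counts[sigma] = np.unique(ints).size
```

With `sigma = 0.03`, all 1000 samples collapse to a single integer value, which is the frozen behavior the margin correction is designed to prevent.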

Fast Moving Natural Evolution Strategy for High-Dimensional Problems

2 code implementations • 27 Jan 2022 • Masahiro Nomura, Isao Ono

In this work, we propose a new variant of natural evolution strategies (NES) for high-dimensional black-box optimization problems.

Optimal Best Arm Identification in Two-Armed Bandits with a Fixed Budget under a Small Gap

no code implementations • 12 Jan 2022 • Masahiro Kato, Kaito Ariu, Masaaki Imaizumi, Masahiro Nomura, Chao Qin

We show that a strategy following the Neyman allocation rule (Neyman, 1934) is asymptotically optimal when the gap between the expected rewards is small.

Causal Inference
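
The Neyman allocation rule splits the sampling budget across arms in proportion to the standard deviations of their rewards. A hedged sketch, assuming the standard deviations are known (the function name and numbers are illustrative):

```python
def neyman_allocation(sigmas, budget):
    """Split a fixed sampling budget across arms proportionally to
    each arm's reward standard deviation (Neyman allocation)."""
    total = sum(sigmas)
    return [round(budget * s / total) for s in sigmas]

# Two-armed example: the noisier arm receives twice the samples.
alloc = neyman_allocation([2.0, 1.0], budget=300)
```

The noisier arm needs more samples to estimate its mean reward to the same precision, which is why this allocation is asymptotically optimal in the small-gap regime studied in the paper.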

Towards a Principled Learning Rate Adaptation for Natural Evolution Strategies

1 code implementation • 22 Nov 2021 • Masahiro Nomura, Isao Ono

On the other hand, for problems that are difficult to optimize (e.g., multimodal functions), the proposed mechanism sets a conservative learning rate when the estimation accuracy of the natural gradient appears low, which results in a robust and stable search.

Takeuchi's Information Criteria as Generalization Measures for DNNs Close to NTK Regime

no code implementations • 29 Sep 2021 • Hiroki Naganuma, Taiji Suzuki, Rio Yokota, Masahiro Nomura, Kohta Ishikawa, Ikuro Sato

Generalization measures are intensively studied in the machine learning community to better model generalization gaps.

Hyperparameter Optimization

Natural Evolution Strategy for Unconstrained and Implicitly Constrained Problems with Ridge Structure

1 code implementation • 21 Aug 2021 • Masahiro Nomura, Isao Ono

However, DX-NES-IC suffers from slow movement of the probability distribution on ridge structures.

Warm Starting CMA-ES for Hyperparameter Optimization

2 code implementations • 13 Dec 2020 • Masahiro Nomura, Shuhei Watanabe, Youhei Akimoto, Yoshihiko Ozaki, Masaki Onishi

Hyperparameter optimization (HPO), formulated as black-box optimization (BBO), is recognized as essential for automation and high performance of machine learning approaches.

Bayesian Optimization, Hyperparameter Optimization +1

Multi-Source Unsupervised Hyperparameter Optimization

no code implementations • 28 Sep 2020 • Masahiro Nomura, Yuta Saito

How can we conduct efficient hyperparameter optimization for a completely new task?

Hyperparameter Optimization

Simple and Scalable Parallelized Bayesian Optimization

no code implementations • 24 Jun 2020 • Masahiro Nomura

In recent years, leveraging parallel and distributed computational resources has become essential to solve problems of high computational cost.

Bayesian Optimization, BIG-bench Machine Learning +1

Efficient Hyperparameter Optimization under Multi-Source Covariate Shift

2 code implementations • 18 Jun 2020 • Masahiro Nomura, Yuta Saito

This assumption is, however, often violated in uncertain real-world applications, which motivates the study of learning under covariate shift.

Bayesian Optimization, Hyperparameter Optimization
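
A standard device for learning under covariate shift is to reweight source-domain quantities by the density ratio between target and source covariate distributions. An illustrative sketch of the importance-weighted mean (a generic construction with made-up Gaussian distributions, not the paper's multi-source estimator):

```python
import numpy as np

rng = np.random.default_rng(1)

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Source covariates ~ N(0, 1); the target distribution is shifted to N(1, 1).
x = rng.normal(0.0, 1.0, size=50_000)
y = x ** 2                                # quantity whose target mean we want

w = normal_pdf(x, 1.0, 1.0) / normal_pdf(x, 0.0, 1.0)  # density ratio weights
naive = y.mean()                          # estimates the source mean, E[x^2] = 1
iw = (w * y).mean()                       # estimates the target mean, E[x^2] = 2
```

The unweighted average is biased toward the source distribution, while the density-ratio-weighted average recovers the target-domain expectation from source samples alone.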

A Simple Heuristic for Bayesian Optimization with A Low Budget

no code implementations • 18 Nov 2019 • Masahiro Nomura, Kenshi Abe

The aim of black-box optimization is to optimize an objective function within the constraints of a given evaluation budget.

Bayesian Optimization, Hyperparameter Optimization

Towards Resolving Propensity Contradiction in Offline Recommender Learning

1 code implementation • 16 Oct 2019 • Yuta Saito, Masahiro Nomura

We study offline recommender learning from explicit rating feedback in the presence of selection bias.

Selection bias, Unsupervised Domain Adaptation
