Search Results for author: Junya Honda

Found 29 papers, 8 papers with code

Best-of-Both-Worlds Algorithms for Partial Monitoring

no code implementations29 Jul 2022 Taira Tsuchiya, Shinji Ito, Junya Honda

To be more specific, we show that for non-degenerate locally observable games, the regret in the stochastic regime is bounded by $O(k^3 m^2 \log(T) \log(k_{\Pi} T) / \Delta_{\min})$ and in the adversarial regime by $O(k^{2/3} m \sqrt{T \log(T) \log k_{\Pi}})$, where $T$ is the number of rounds, $m$ is the maximum number of distinct observations per action, $\Delta_{\min}$ is the minimum optimality gap, and $k_{\Pi}$ is the number of Pareto optimal actions.

online learning

Adversarially Robust Multi-Armed Bandit Algorithm with Variance-Dependent Regret Bounds

no code implementations14 Jun 2022 Shinji Ito, Taira Tsuchiya, Junya Honda

In fact, they have provided a stochastic MAB algorithm with gap-variance-dependent regret bounds of $O(\sum_{i: \Delta_i>0} (\frac{\sigma_i^2}{\Delta_i} + 1) \log T )$ for loss variance $\sigma_i^2$ of arm $i$.
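To make the bound concrete, the snippet below evaluates its leading term $\sum_{i: \Delta_i>0} (\sigma_i^2/\Delta_i + 1) \log T$ for a small instance; the gaps, variances, and omitted constants are hypothetical illustrations, not values from the paper:

```python
import math

def variance_dependent_bound(gaps, variances, T):
    """Leading term sum_{i: Delta_i > 0} (sigma_i^2 / Delta_i + 1) * log T
    of the gap-variance-dependent regret bound (constants omitted)."""
    return sum(s2 / d + 1.0 for d, s2 in zip(gaps, variances) if d > 0) * math.log(T)

# Hypothetical 3-armed instance: arm 0 is optimal (gap 0).
gaps = [0.0, 0.2, 0.5]
variances = [0.25, 0.01, 0.25]
bound = variance_dependent_bound(gaps, variances, T=10_000)
print(bound)
```

Note how a small loss variance (arm 1) shrinks that arm's contribution toward $\log T$ even when its gap is small, which is the point of variance-dependent bounds.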

Globally Optimal Algorithms for Fixed-Budget Best Arm Identification

no code implementations9 Jun 2022 Junpei Komiyama, Taira Tsuchiya, Junya Honda

We consider the fixed-budget best arm identification problem where the goal is to find the arm of the largest mean with a fixed number of samples.
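For orientation, the simplest fixed-budget baseline spreads the budget uniformly over the arms and recommends the empirical best; the paper studies globally optimal allocations, so this sketch (with hypothetical Gaussian rewards) is only the naive comparator:

```python
import random

def uniform_bai(means, budget, rng):
    """Naive fixed-budget best-arm-identification baseline: round-robin
    sampling, then recommend the arm with the highest empirical mean."""
    k = len(means)
    totals = [0.0] * k
    counts = [0] * k
    for t in range(budget):
        i = t % k                               # uniform (round-robin) allocation
        totals[i] += rng.gauss(means[i], 1.0)   # unit-variance Gaussian rewards (assumption)
        counts[i] += 1
    return max(range(k), key=lambda i: totals[i] / counts[i])

best = uniform_bai([0.0, 0.3, 1.0], budget=3000, rng=random.Random(0))
print(best)
```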

The Survival Bandit Problem

no code implementations7 Jun 2022 Charles Riou, Junya Honda, Masashi Sugiyama

We study the survival bandit problem, a variant of the multi-armed bandit problem introduced as an open problem by Perotto et al. (2019), with a constraint on the cumulative reward: at each time step, the agent receives a (possibly negative) reward, and if the cumulative reward falls below a prespecified threshold, the procedure stops; this phenomenon is called ruin.
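The ruin constraint can be sketched as a stopping rule on the cumulative reward; the initial budget, threshold, and arm distribution below are hypothetical choices for illustration:

```python
import random

def play_until_ruin(pull_arm, budget0, threshold, horizon, rng):
    """Sketch of the survival constraint: accumulate (possibly negative)
    rewards and stop as soon as the cumulative reward drops below the
    threshold ("ruin")."""
    cum = budget0
    for t in range(horizon):
        cum += pull_arm(rng)
        if cum < threshold:
            return t + 1, True    # ruined after t + 1 pulls
    return horizon, False         # survived the whole horizon

# Hypothetical arm with negative drift: +1 w.p. 0.4, -1 w.p. 0.6.
arm = lambda rng: 1 if rng.random() < 0.4 else -1
steps, ruined = play_until_ruin(arm, budget0=10, threshold=0, horizon=10_000, rng=random.Random(1))
print(steps, ruined)
```

With negative drift, ruin is essentially certain, which is why the agent must trade off reward maximization against survival.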

Nearly Optimal Best-of-Both-Worlds Algorithms for Online Learning with Feedback Graphs

no code implementations2 Jun 2022 Shinji Ito, Taira Tsuchiya, Junya Honda

As Alon et al. [2015] have shown, tight regret bounds depend on the structure of the feedback graph: \textit{strongly observable} graphs yield minimax regret of $\tilde{\Theta}( \alpha^{1/2} T^{1/2} )$, while \textit{weakly observable} graphs induce minimax regret of $\tilde{\Theta}( \delta^{1/3} T^{2/3} )$, where $\alpha$ and $\delta$, respectively, represent the independence number of the graph and the domination number of a certain portion of the graph.

online learning

Finite-time Analysis of Globally Nonstationary Multi-Armed Bandits

1 code implementation23 Jul 2021 Junpei Komiyama, Edouard Fouché, Junya Honda

We demonstrate that ADR-bandit has nearly optimal performance when abrupt or gradual changes occur in a coordinated manner, which we call global changes.

Multi-Armed Bandits

Mediated Uncoupled Learning: Learning Functions without Direct Input-output Correspondences

1 code implementation16 Jul 2021 Ikko Yamane, Junya Honda, Florian Yger, Masashi Sugiyama

In this paper, we consider the task of predicting $Y$ from $X$ when we have no paired data of them, but we have two separate, independent datasets of $X$ and $Y$ each observed with some mediating variable $U$, that is, we have two datasets $S_X = \{(X_i, U_i)\}$ and $S_Y = \{(U'_j, Y'_j)\}$.
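A naive way to see how the mediating variable $U$ links the two datasets is a two-step plug-in: regress $U$ on $X$ using $S_X$, regress $Y$ on $U$ using $S_Y$, and compose. This is only a baseline sketch on synthetic linear data, not the paper's mediated uncoupled learning estimators:

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares for y = a * x + b in one dimension."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

rng = random.Random(0)
# Synthetic chain X -> U -> Y (all relations hypothetical): U = 2X + noise, Y = 3U + noise.
S_X = [(x, 2 * x + rng.gauss(0, 0.1)) for x in (rng.uniform(-1, 1) for _ in range(500))]
S_Y = [(u, 3 * u + rng.gauss(0, 0.1)) for u in (rng.uniform(-2, 2) for _ in range(500))]

a1, b1 = fit_line([x for x, _ in S_X], [u for _, u in S_X])  # predict U from X
a2, b2 = fit_line([u for u, _ in S_Y], [y for _, y in S_Y])  # predict Y from U
predict_y = lambda x: a2 * (a1 * x + b1) + b2                # compose to predict Y from X
print(predict_y(0.5))
```

Here the composed predictor at $x=0.5$ should land near $3 \cdot 2 \cdot 0.5 = 3$ despite no $(X, Y)$ pair ever being observed.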

Analysis and Design of Thompson Sampling for Stochastic Partial Monitoring

no code implementations NeurIPS 2020 Taira Tsuchiya, Junya Honda, Masashi Sugiyama

We investigate finite stochastic partial monitoring, which is a general model for sequential learning with limited feedback.

Decision Making

Efficient Adaptive Experimental Design for Average Treatment Effect Estimation

no code implementations13 Feb 2020 Masahiro Kato, Takuya Ishihara, Junya Honda, Yusuke Narita

In adaptive experimental design, the experimenter is allowed to change the probability of assigning a treatment using past observations for estimating the ATE efficiently.
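The idea of changing the assignment probability from past observations can be sketched with inverse-propensity weighting; the Neyman-style allocation rule and the outcome model below are my own simplifying assumptions, not the paper's estimator:

```python
import random
import statistics

def adaptive_ipw_ate(outcome, n, rng):
    """Sketch of adaptive ATE estimation: the treatment probability e_t is
    updated from past observations (here, toward the arm with larger empirical
    standard deviation), and the ATE is estimated by inverse-propensity
    weighting with the probability actually used at each step."""
    obs = {0: [], 1: []}
    ipw_sum = 0.0
    for t in range(n):
        if len(obs[0]) < 2 or len(obs[1]) < 2:
            e_t = 0.5                             # warm-up: uniform assignment
        else:
            s0 = statistics.pstdev(obs[0]) + 1e-6
            s1 = statistics.pstdev(obs[1]) + 1e-6
            e_t = s1 / (s0 + s1)                  # Neyman-style allocation (assumption)
        d = 1 if rng.random() < e_t else 0
        y = outcome(d, rng)
        obs[d].append(y)
        ipw_sum += y * d / e_t - y * (1 - d) / (1 - e_t)
    return ipw_sum / n

# Hypothetical outcomes: treated mean 1.0, control mean 0.0, so the true ATE is 1.
outcome = lambda d, rng: rng.gauss(1.0 if d else 0.0, 1.0)
ate = adaptive_ipw_ate(outcome, n=20_000, rng=random.Random(0))
print(ate)
```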

Experimental Design

Uncoupled Regression from Pairwise Comparison Data

1 code implementation NeurIPS 2019 Liyuan Xu, Junya Honda, Gang Niu, Masashi Sugiyama

We propose two practical methods for uncoupled regression from pairwise comparison data and show that the learned regression model converges to the optimal model at the optimal parametric convergence rate when the target variable is uniformly distributed.


Learning from Positive and Unlabeled Data with a Selection Bias

1 code implementation ICLR 2019 Masahiro Kato, Takeshi Teshima, Junya Honda

However, this assumption is unrealistic in many instances of PU learning because it fails to capture the existence of a selection bias in the labeling process.

Selection bias

A Note on KL-UCB+ Policy for the Stochastic Bandit

no code implementations19 Mar 2019 Junya Honda

A classic setting of the stochastic K-armed bandit problem is considered in this note.
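For context, the KL-UCB index for Bernoulli rewards is the largest mean consistent with the samples under a KL-divergence budget, computable by bisection; my understanding is that the KL-UCB+ variant replaces $\log t$ with $\log(t / n)$, so treat that remark as a reader's note rather than the note's statement:

```python
import math

def kl_bernoulli(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_index(p_hat, n_pulls, t):
    """Upper confidence index of KL-UCB for Bernoulli rewards: the largest q
    with n_pulls * kl(p_hat, q) <= log(t), found by bisection."""
    target = math.log(t) / n_pulls
    lo, hi = p_hat, 1.0
    for _ in range(60):                 # bisection; kl(p_hat, .) is increasing on [p_hat, 1)
        mid = (lo + hi) / 2
        if kl_bernoulli(p_hat, mid) <= target:
            lo = mid
        else:
            hi = mid
    return lo

idx = kl_ucb_index(0.5, n_pulls=100, t=1000)
print(idx)
```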

Polynomial-time Algorithms for Multiple-arm Identification with Full-bandit Feedback

no code implementations27 Feb 2019 Yuko Kuroki, Liyuan Xu, Atsushi Miyauchi, Junya Honda, Masashi Sugiyama

Based on our approximation algorithm, we propose novel bandit algorithms for the top-k selection problem, and prove that our algorithms run in polynomial time.

A Bad Arm Existence Checking Problem

no code implementations31 Jan 2019 Koji Tabata, Atsuyoshi Nakamura, Junya Honda, Tamiki Komatsuzaki

We study a bad arm existence checking problem in which a player's task is to judge whether a positive arm exists among given K arms by drawing as few arms as possible.

On the Calibration of Multiclass Classification with Rejection

1 code implementation NeurIPS 2019 Chenri Ni, Nontawat Charoenphakdee, Junya Honda, Masashi Sugiyama

First, we consider an approach based on simultaneous training of a classifier and a rejector, which achieves the state-of-the-art performance in the binary case.

Classification, General Classification

Dueling Bandits with Qualitative Feedback

no code implementations14 Sep 2018 Liyuan Xu, Junya Honda, Masashi Sugiyama

We formulate and study a novel multi-armed bandit problem called the qualitative dueling bandit (QDB) problem, where an agent observes qualitative rather than numeric feedback by pulling each arm.

Unsupervised Domain Adaptation Based on Source-guided Discrepancy

no code implementations11 Sep 2018 Seiichi Kuroki, Nontawat Charoenphakdee, Han Bao, Junya Honda, Issei Sato, Masashi Sugiyama

A previously proposed discrepancy that does not use the source domain labels requires high computational cost to estimate and may lead to a loose generalization error bound in the target domain.

Unsupervised Domain Adaptation

Nonconvex Optimization for Regression with Fairness Constraints

1 code implementation ICML 2018 Junpei Komiyama, Akiko Takeda, Junya Honda, Hajime Shimao

However, a fairness level as a constraint induces a nonconvexity of the feasible region, which disables the use of an off-the-shelf convex optimizer.


Position-based Multiple-play Bandit Problem with Unknown Position Bias

no code implementations NeurIPS 2017 Junpei Komiyama, Junya Honda, Akiko Takeda

Motivated by online advertising, we study a multiple-play multi-armed bandit problem with position bias that involves several slots, where later slots yield fewer rewards.

Good Arm Identification via Bandit Feedback

no code implementations17 Oct 2017 Hideaki Kano, Junya Honda, Kentaro Sakamaki, Kentaro Matsuura, Atsuyoshi Nakamura, Masashi Sugiyama

We consider a novel stochastic multi-armed bandit problem called {\em good arm identification} (GAI), where a good arm is defined as an arm with expected reward greater than or equal to a given threshold.
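The threshold comparison at the heart of GAI can be sketched for a single arm with generic Hoeffding-style confidence bounds; this is a naive stopping rule for illustration, not the paper's algorithms (which also schedule sampling across arms):

```python
import math
import random

def check_single_arm(pull, threshold, delta, max_pulls, rng):
    """Confidence-bound sketch for judging one arm: declare it good once the
    lower confidence bound reaches the threshold, bad once the upper bound
    falls below it."""
    total, n = 0.0, 0
    while n < max_pulls:
        total += pull(rng)
        n += 1
        radius = math.sqrt(math.log(2 * n * n / delta) / (2 * n))  # Hoeffding-style radius
        mean = total / n
        if mean - radius >= threshold:
            return "good", n
        if mean + radius < threshold:
            return "bad", n
    return "undecided", n

# Hypothetical Bernoulli arm with mean 0.7 against threshold 0.5.
pull = lambda rng: 1.0 if rng.random() < 0.7 else 0.0
verdict, n = check_single_arm(pull, threshold=0.5, delta=0.05, max_pulls=50_000, rng=random.Random(0))
print(verdict, n)
```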

Fully adaptive algorithm for pure exploration in linear bandits

no code implementations16 Oct 2017 Liyuan Xu, Junya Honda, Masashi Sugiyama

We propose the first fully-adaptive algorithm for pure exploration in linear bandits, the task of finding the arm with the largest expected reward, which depends linearly on an unknown parameter.


Copeland Dueling Bandit Problem: Regret Lower Bound, Optimal Algorithm, and Computationally Efficient Algorithm

no code implementations5 May 2016 Junpei Komiyama, Junya Honda, Hiroshi Nakagawa

We study the K-armed dueling bandit problem, a variation of the standard stochastic bandit problem where the feedback is limited to relative comparisons of a pair of arms.

Regret Lower Bound and Optimal Algorithm in Finite Stochastic Partial Monitoring

no code implementations NeurIPS 2015 Junpei Komiyama, Junya Honda, Hiroshi Nakagawa

To show the optimality of PM-DMED with respect to the regret bound, we slightly modify the algorithm by introducing a hinge function (PM-DMED-Hinge).

Regret Lower Bound and Optimal Algorithm in Dueling Bandit Problem

1 code implementation8 Jun 2015 Junpei Komiyama, Junya Honda, Hisashi Kashima, Hiroshi Nakagawa

We study the $K$-armed dueling bandit problem, a variation of the standard stochastic bandit problem where the feedback is limited to relative comparisons of a pair of arms.

Optimal Regret Analysis of Thompson Sampling in Stochastic Multi-armed Bandit Problem with Multiple Plays

1 code implementation2 Jun 2015 Junpei Komiyama, Junya Honda, Hiroshi Nakagawa

Recently, Thompson sampling (TS), a randomized algorithm with a Bayesian spirit, has attracted much attention for its empirically excellent performance, and it has been shown to have an optimal regret bound in the standard single-play MAB problem.
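The multiple-play extension of TS can be sketched as follows: draw a posterior sample for every arm and pull the L arms with the largest samples. The Bernoulli rewards, Beta(1,1) priors, and instance below are my own illustrative choices:

```python
import random

def mp_ts(means, L, horizon, rng):
    """Sketch of Thompson sampling with multiple plays: sample a mean for each
    arm from its Beta posterior and play the top-L arms, then update the
    posteriors with the observed Bernoulli rewards."""
    k = len(means)
    alpha, beta = [1] * k, [1] * k          # Beta posterior parameters per arm
    total_reward = 0
    for _ in range(horizon):
        theta = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        chosen = sorted(range(k), key=lambda i: theta[i], reverse=True)[:L]
        for i in chosen:
            r = 1 if rng.random() < means[i] else 0
            total_reward += r
            alpha[i] += r
            beta[i] += 1 - r
    return total_reward

reward = mp_ts([0.9, 0.8, 0.3, 0.2], L=2, horizon=2000, rng=random.Random(0))
print(reward)
```

With these large gaps, the sampled top-2 set quickly concentrates on the two best arms, so the total reward approaches the oracle value of $2000 \times (0.9 + 0.8)$.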

Normal Bandits of Unknown Means and Variances: Asymptotic Optimality, Finite Horizon Regret Bounds, and a Solution to an Open Problem

no code implementations22 Apr 2015 Wesley Cowan, Junya Honda, Michael N. Katehakis

Consider the problem of sampling sequentially from a finite number of $N \geq 2$ populations, specified by random variables $X^i_k$, $ i = 1,\ldots , N,$ and $k = 1, 2, \ldots$; where $X^i_k$ denotes the outcome from population $i$ the $k^{th}$ time it is sampled.
