Search Results for author: Taira Tsuchiya

Found 13 papers, 1 paper with code

Adaptive Learning Rate for Follow-the-Regularized-Leader: Competitive Analysis and Best-of-Both-Worlds

no code implementations • 1 Mar 2024 • Shinji Ito, Taira Tsuchiya, Junya Honda

Follow-The-Regularized-Leader (FTRL) is known as an effective and versatile approach in online learning, where an appropriate choice of the learning rate is crucial for achieving small regret.

Decision Making · Multi-Armed Bandits
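
As a rough illustration of the FTRL framework this paper builds on, here is a minimal sketch of FTRL with an entropic regularizer over the probability simplex (i.e., exponential weights) and a generic $\eta_t \propto 1/\sqrt{t}$ schedule; the schedule, `eta0`, and the loss stream are illustrative stand-ins, not the competitive-analysis-based rate the paper proposes.

```python
import numpy as np

def ftrl_exp_weights(loss_rounds, k, eta0=1.0):
    """FTRL with entropic regularizer on the k-simplex (exponential weights).

    Uses a generic adaptive rate eta_t = eta0 / sqrt(t); the paper's
    contribution is a more refined, competitive-ratio-based schedule.
    """
    cum_loss = np.zeros(k)          # cumulative loss vector L_{t-1}
    total = 0.0
    for t, loss in enumerate(loss_rounds, start=1):
        eta = eta0 / np.sqrt(t)     # adaptive learning rate (heuristic)
        # FTRL iterate: argmin_p <L_{t-1}, p> + (1/eta) * sum_i p_i log p_i
        w = np.exp(-eta * (cum_loss - cum_loss.min()))  # shift for stability
        p = w / w.sum()
        total += p @ loss           # expected loss this round
        cum_loss += loss
    return total

# Example: random losses on k = 3 actions
rng = np.random.default_rng(0)
losses = rng.random((1000, 3))
print(ftrl_exp_weights(losses, k=3))
```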

Fast Rates in Online Convex Optimization by Exploiting the Curvature of Feasible Sets

no code implementations • 20 Feb 2024 • Taira Tsuchiya, Shinji Ito

We first prove that if an optimal decision is on the boundary of a feasible set and the gradient of an underlying loss function is non-zero, then the algorithm achieves a regret upper bound of $O(\rho \log T)$ in stochastic environments.
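
For context, here is a minimal sketch of projected online gradient descent on a Euclidean ball, a uniformly curved feasible set of the kind the paper exploits; the step size and the stochastic linear losses are illustrative assumptions, and the sketch does not implement the paper's analysis.

```python
import numpy as np

def projected_ogd(grad_fn, T, radius=1.0, dim=2, eta0=0.5):
    """Projected online gradient descent on a Euclidean ball.

    The ball is a uniformly curved feasible set; the paper shows that when
    the optimum sits on its boundary (non-zero gradient), curvature yields
    O(rho * log T) regret. This sketch only runs vanilla OGD.
    """
    x = np.zeros(dim)
    iterates = []
    for t in range(1, T + 1):
        iterates.append(x.copy())
        g = grad_fn(x, t)
        x = x - (eta0 / np.sqrt(t)) * g
        norm = np.linalg.norm(x)
        if norm > radius:           # project back onto the ball
            x *= radius / norm
    return np.array(iterates)

# Stochastic linear losses whose optimum lies on the boundary
rng = np.random.default_rng(1)
c = np.array([1.0, 0.5])            # mean loss direction
grad = lambda x, t: c + 0.1 * rng.standard_normal(2)
xs = projected_ogd(grad, T=2000)
print(xs[-1])                       # drifts toward -c / ||c|| on the boundary
```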

Online Control of Linear Systems with Unbounded and Degenerate Noise

no code implementations • 15 Feb 2024 • Kaito Ito, Taira Tsuchiya

Moreover, when the costs are strongly convex, we establish an $ O({\rm poly} (\log T)) $ regret bound without the assumption that noise covariance is non-degenerate, which has been required in the literature.
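
A toy simulation of the control setting, assuming a hand-picked stabilizing gain `K` in place of the paper's online controller; note the noise covariance below is deliberately rank-deficient (degenerate), the case the paper handles.

```python
import numpy as np

def simulate_linear_control(A, B, K, T, noise_cov, rng):
    """Roll out x_{t+1} = A x_t + B u_t + w_t under u_t = -K x_t.

    noise_cov may be singular (degenerate), the setting the paper handles;
    the fixed gain K here is a stand-in for the paper's online controller.
    """
    n = A.shape[0]
    # Tiny jitter lets Cholesky handle a rank-deficient covariance
    L = np.linalg.cholesky(noise_cov + 1e-12 * np.eye(n))
    x = np.zeros(n)
    cost = 0.0
    for _ in range(T):
        u = -K @ x
        cost += x @ x + u @ u       # simple quadratic stage cost
        w = L @ rng.standard_normal(n)
        x = A @ x + B @ u + w
    return cost / T

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = np.array([[1.0, 2.0]])          # stabilizing gain (chosen by hand)
deg_cov = np.diag([0.0, 0.01])      # rank-deficient noise covariance
print(simulate_linear_control(A, B, K, 5000, deg_cov, np.random.default_rng(2)))
```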

Exploration by Optimization with Hybrid Regularizers: Logarithmic Regret with Adversarial Robustness in Partial Monitoring

no code implementations • 13 Feb 2024 • Taira Tsuchiya, Shinji Ito, Junya Honda

This development allows us to significantly improve the existing regret bounds of best-of-both-worlds (BOBW) algorithms, which achieve nearly optimal bounds in both stochastic and adversarial environments.

Adversarial Robustness · Decision Making
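
As a loose illustration, the following sketch computes one FTRL iterate under a hybrid regularizer mixing negative Shannon and Tsallis entropies; the weights `gamma` and `alpha` and the use of a generic numerical solver are assumptions, and the paper's exploration-by-optimization machinery is considerably more involved.

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_regularizer(p, gamma=1.0, alpha=0.5):
    """Negative Shannon entropy plus negative alpha-Tsallis entropy.

    Hybrid regularizers of this flavor underlie BOBW-style analyses; the
    exact weights and terms in the paper are more elaborate.
    """
    p = np.clip(p, 1e-12, 1.0)
    neg_shannon = np.sum(p * np.log(p))                        # <= 0, convex
    neg_tsallis = (np.sum(p ** alpha) - 1.0) / (alpha - 1.0)   # <= 0, convex
    return neg_shannon + gamma * neg_tsallis

def ftrl_step(cum_loss, eta=1.0):
    """One FTRL iterate over the simplex with the hybrid regularizer."""
    k = len(cum_loss)
    obj = lambda p: eta * (cum_loss @ p) + hybrid_regularizer(p)
    res = minimize(obj, np.ones(k) / k, method="SLSQP",
                   bounds=[(0.0, 1.0)] * k,
                   constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}])
    return res.x

print(ftrl_step(np.array([0.3, 0.1, 0.6])))   # favors the low-loss action
```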

Online Structured Prediction with Fenchel--Young Losses and Improved Surrogate Regret for Online Multiclass Classification with Logistic Loss

no code implementations • 13 Feb 2024 • Shinsaku Sakaue, Han Bao, Taira Tsuchiya, Taihei Oki

We extend the exploit-the-surrogate-gap framework to online structured prediction with Fenchel--Young losses, a large family of surrogate losses including the logistic loss for multiclass classification, obtaining finite surrogate regret bounds in various structured prediction problems.

Classification · Structured Prediction
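
To make the logistic-loss special case concrete, here is a minimal sketch of online multiclass classification with the softmax (logistic) loss, a member of the Fenchel--Young family, using randomized decoding from the predicted distribution; the learning rate and synthetic data stream are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                 # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def online_logistic(stream, n_classes, dim, lr=0.1, rng=None):
    """Online multiclass classification with the logistic (softmax) loss.

    The logistic loss is a Fenchel-Young loss; the paper converts small
    surrogate regret into few mistakes via randomized decoding, which this
    sketch imitates by sampling the label from the softmax probabilities.
    """
    rng = rng or np.random.default_rng(0)
    W = np.zeros((n_classes, dim))
    mistakes = 0
    for x, y in stream:
        p = softmax(W @ x)
        y_hat = rng.choice(n_classes, p=p)   # randomized decoding
        mistakes += int(y_hat != y)
        grad = p.copy()
        grad[y] -= 1.0                       # gradient of the logistic loss
        W -= lr * np.outer(grad, x)
    return mistakes

rng = np.random.default_rng(3)
W_true = rng.standard_normal((3, 5))
data = [(x, int(np.argmax(W_true @ x)))
        for x in rng.standard_normal((2000, 5))]
print(online_logistic(data, n_classes=3, dim=5, rng=rng))
```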

Best-of-Both-Worlds Algorithms for Partial Monitoring

no code implementations • 29 Jul 2022 • Taira Tsuchiya, Shinji Ito, Junya Honda

This study considers the partial monitoring problem with $k$ actions and $d$ outcomes and provides the first best-of-both-worlds algorithms, whose regret is favorably bounded in both the stochastic and adversarial regimes.
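
For readers unfamiliar with partial monitoring, here is a toy sketch of the protocol with $k = 2$ actions and $d = 2$ outcomes; the loss and feedback matrices and the placeholder uniform policy are invented for illustration and do not come from the paper.

```python
import numpy as np

# A toy partial monitoring game: the learner observes only a feedback
# symbol, never the incurred loss itself.
L = np.array([[0.0, 1.0],        # loss of action i under outcome j
              [1.0, 0.0]])
F = np.array([["a", "b"],        # feedback symbol for (action, outcome)
              ["c", "c"]])       # action 1 is uninformative

def play_round(action, outcome):
    """One round of the partial monitoring protocol."""
    return F[action, outcome], L[action, outcome]

rng = np.random.default_rng(4)
total_loss = 0.0
for t in range(10):
    outcome = rng.integers(2)    # stochastic environment
    action = rng.integers(2)     # placeholder uniform policy; a BOBW
    sym, loss = play_round(action, outcome)   # algorithm chooses adaptively
    total_loss += loss           # loss accrues but is hidden from the learner
print(total_loss)
```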

Adversarially Robust Multi-Armed Bandit Algorithm with Variance-Dependent Regret Bounds

no code implementations • 14 Jun 2022 • Shinji Ito, Taira Tsuchiya, Junya Honda

In fact, they have provided a stochastic MAB algorithm with gap-variance-dependent regret bounds of $O(\sum_{i: \Delta_i>0} (\frac{\sigma_i^2}{\Delta_i} + 1) \log T )$ for loss variance $\sigma_i^2$ of arm $i$.
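
A quick numerical sketch of the shape of this bound (constants and lower-order terms omitted), compared against the classic gap-only form $\sum_i \log T / \Delta_i$; the gaps and variances below are made-up inputs.

```python
import numpy as np

def variance_dependent_bound(gaps, variances, T):
    """Evaluate sum_{i: Delta_i > 0} (sigma_i^2 / Delta_i + 1) * log T,
    the shape of the gap-variance-dependent bound quoted above."""
    gaps, variances = np.asarray(gaps), np.asarray(variances)
    mask = gaps > 0
    return np.sum((variances[mask] / gaps[mask] + 1.0) * np.log(T))

def gap_only_bound(gaps, T):
    """Classic gap-dependent shape sum_i log T / Delta_i for comparison."""
    gaps = np.asarray(gaps)
    return np.sum(np.log(T) / gaps[gaps > 0])

gaps = [0.0, 0.2, 0.5]            # Delta_i; arm 0 is optimal
variances = [0.05, 0.01, 0.0]     # low-variance losses
T = 10_000
print(variance_dependent_bound(gaps, variances, T))  # much smaller when
print(gap_only_bound(gaps, T))                       # variances are small
```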

Minimax Optimal Algorithms for Fixed-Budget Best Arm Identification

1 code implementation • 9 Jun 2022 • Junpei Komiyama, Taira Tsuchiya, Junya Honda

We introduce two rates, $R^{\mathrm{go}}$ and $R^{\mathrm{go}}_{\infty}$, corresponding to lower bounds on the probability of misidentification, each of which is associated with a proposed algorithm.
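
As a baseline illustration of the fixed-budget setting, here is a sketch of uniform allocation followed by an empirical-best recommendation; Gaussian rewards and the arm means are assumptions, and the paper's algorithms are designed to match the $R^{\mathrm{go}}$-type lower bounds rather than this naive allocation.

```python
import numpy as np

def uniform_bai(means, budget, rng):
    """Fixed-budget best arm identification by uniform allocation.

    A simple baseline: split the budget evenly across arms, then recommend
    the empirically best arm. This only illustrates the problem setup.
    """
    k = len(means)
    pulls = budget // k
    estimates = [rng.normal(m, 1.0, pulls).mean() for m in means]
    return int(np.argmax(estimates))

rng = np.random.default_rng(5)
means = [0.5, 0.4, 0.3]           # arm 0 is the best arm
errors = sum(uniform_bai(means, budget=300, rng=rng) != 0
             for _ in range(2000))
print(errors / 2000)              # empirical misidentification probability
```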

Nearly Optimal Best-of-Both-Worlds Algorithms for Online Learning with Feedback Graphs

no code implementations • 2 Jun 2022 • Shinji Ito, Taira Tsuchiya, Junya Honda

As Alon et al. [2015] have shown, tight regret bounds depend on the structure of the feedback graph: strongly observable graphs yield minimax regret of $\tilde{\Theta}( \alpha^{1/2} T^{1/2} )$, while weakly observable graphs induce minimax regret of $\tilde{\Theta}( \delta^{1/3} T^{2/3} )$, where $\alpha$ and $\delta$, respectively, represent the independence number of the graph and the domination number of a certain portion of the graph.

Open-Ended Question Answering
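
To unpack the quantities in this bound, here is a small sketch that computes the independence number $\alpha$ of a 5-cycle by brute force and evaluates the order of the strongly observable rate $\alpha^{1/2} T^{1/2}$; the graph and horizon are illustrative.

```python
from itertools import combinations
import math

def independence_number(n, edges):
    """Brute-force independence number alpha of a small undirected graph."""
    edge_set = {frozenset(e) for e in edges}
    for size in range(n, 0, -1):
        for subset in combinations(range(n), size):
            if all(frozenset(p) not in edge_set
                   for p in combinations(subset, 2)):
                return size
    return 0

# 5-cycle: alpha = 2, so the strongly observable minimax regret scales as
# alpha^{1/2} T^{1/2} up to logarithmic factors, as quoted above.
alpha = independence_number(5, [(i, (i + 1) % 5) for i in range(5)])
T = 100_000
print(alpha, math.sqrt(alpha * T))   # order of the strongly observable rate
```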

Analysis and Design of Thompson Sampling for Stochastic Partial Monitoring

no code implementations • NeurIPS 2020 • Taira Tsuchiya, Junya Honda, Masashi Sugiyama

We investigate finite stochastic partial monitoring, which is a general model for sequential learning with limited feedback.

Decision Making · Thompson Sampling
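
As a simplified stand-in, here is a sketch of Thompson sampling for Bernoulli bandits, a basic instance of sequential learning with limited feedback; the Beta(1, 1) priors and arm means are illustrative, and the paper's analysis covers general finite stochastic partial monitoring games.

```python
import numpy as np

def thompson_bernoulli(true_means, T, rng):
    """Thompson sampling for Bernoulli bandits with Beta(1, 1) priors.

    Bandits are the simplest partial monitoring instance; the paper extends
    the same posterior-sampling idea to general finite stochastic partial
    monitoring games.
    """
    k = len(true_means)
    wins = np.ones(k)                       # Beta posterior parameters
    losses = np.ones(k)
    reward = 0.0
    for _ in range(T):
        theta = rng.beta(wins, losses)      # sample one mean per arm
        arm = int(np.argmax(theta))
        r = rng.random() < true_means[arm]  # Bernoulli reward
        reward += r
        wins[arm] += r
        losses[arm] += 1 - r
    return reward

rng = np.random.default_rng(6)
print(thompson_bernoulli([0.2, 0.5, 0.7], T=5000, rng=rng))
```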
