no code implementations • 19 Oct 2023 • Ryotaro Mitsuboshi, Kohei Hatano, Eiji Takimoto

Metarounding is an approach to convert an approximation algorithm for linear optimization over a combinatorial class into an online linear optimization algorithm for the same class.

no code implementations • 28 Jun 2023 • Yaxiong Liu, Atsuyoshi Nakamura, Kohei Hatano, Eiji Takimoto

Then, we show a lower bound for pure exploration in multi-armed bandits with a low-rank sequence.

no code implementations • 8 Jun 2023 • Yiping Tang, Kohei Hatano, Eiji Takimoto

Some previous work proposes transforming neural networks into equivalent Boolean expressions and applying verification techniques to characteristics of interest.

1 code implementation • 22 Sep 2022 • Ryotaro Mitsuboshi, Kohei Hatano, Eiji Takimoto

LPBoost rapidly converges to an $\epsilon$-approximate solution in practice, but it is known to take $\Omega(m)$ iterations in the worst case, where $m$ is the sample size.
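As a minimal illustration of the quantity LPBoost-style boosters optimize each round, the sketch below computes the "edge" of a weak hypothesis under a distribution over the sample, together with an $\epsilon$-based stopping check; the data, hypothesis, and threshold are illustrative, not from the paper.

```python
import numpy as np

def edge(h_pred, y, d):
    """Edge of hypothesis predictions h_pred on labels y under distribution d."""
    return float(np.sum(d * y * h_pred))

y = np.array([1, -1, 1, 1])       # labels in {-1, +1}
d = np.ones(4) / 4                # uniform distribution over the m = 4 examples
h = np.array([1, -1, -1, 1])      # predictions of one weak hypothesis

gamma = edge(h, y, d)             # (1/4)(1 + 1 - 1 + 1) = 0.5
eps = 0.1
# Column-generation boosters of this kind stop once no hypothesis beats the
# current margin estimate (here simplified to 0) by more than eps:
stop = gamma <= 0.0 + eps         # False: this hypothesis is still useful
```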

no code implementations • 10 Dec 2020 • Yaxiong Liu, Ken-ichiro Moridomi, Kohei Hatano, Eiji Takimoto

We consider a variant of the online semi-definite programming problem (OSDP): the decision space consists of semi-definite matrices with bounded $\Gamma$-trace norm, a generalization of the trace norm defined by a positive definite matrix $\Gamma$. To solve this problem, we use the follow-the-regularized-leader (FTRL) algorithm with a $\Gamma$-dependent log-determinant regularizer.
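A hedged sketch of one FTRL step with a plain log-determinant regularizer $R(X) = -\log\det X$ for online linear losses over positive definite matrices; the paper's $\Gamma$-dependence and trace-norm projection are omitted, and the loss matrix and learning rate are illustrative.

```python
import numpy as np

def ftrl_logdet_step(loss_sum, eta):
    """Unconstrained minimizer of <loss_sum, X> + (1/eta) * (-log det X).

    Setting the gradient loss_sum - (1/eta) * X^{-1} to zero gives
    X = (1/eta) * loss_sum^{-1}, valid when loss_sum is positive definite.
    """
    return np.linalg.inv(loss_sum) / eta

loss_sum = np.array([[2.0, 0.0],
                     [0.0, 4.0]])   # cumulative loss matrix (positive definite)
X = ftrl_logdet_step(loss_sum, eta=1.0)
# X is diagonal with entries 1/2 and 1/4 here.
```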

no code implementations • 15 Jul 2020 • Yaxiong Liu, Kohei Hatano, Eiji Takimoto

The cost is the makespan if the norm is the $L_\infty$-norm.
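A toy example of this identity, with illustrative machine loads: the makespan is the load of the busiest machine, i.e. the $L_\infty$-norm of the load vector.

```python
# Machine loads as a vector; all values are illustrative.
loads = [3.0, 7.0, 5.0]                # total processing time on each machine
makespan = max(abs(x) for x in loads)  # L_inf norm of the load vector
# makespan == 7.0: the busiest machine determines the schedule length
```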

1 code implementation • 31 May 2020 • Daiki Suehiro, Kohei Hatano, Eiji Takimoto, Shuji Yamamoto, Kenichi Bannai, Akiko Takeda

We propose a new formulation of Multiple-Instance Learning (MIL), in which a unit of data consists of a set of instances called a bag.
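A hedged sketch of the standard MIL convention (not necessarily the paper's exact formulation): a bag is a set of instance feature vectors, and a bag-level score is the maximum of the instance-level scores; the linear scorer and the bag are illustrative.

```python
def instance_score(x, w):
    """Linear score of one instance feature vector x under weights w."""
    return sum(wi * xi for wi, xi in zip(w, x))

def bag_score(bag, w):
    # A bag is classified positive if its best-scoring instance is positive.
    return max(instance_score(x, w) for x in bag)

w = [1.0, -1.0]                        # illustrative linear scorer
bag = [[0.2, 0.5], [0.9, 0.1]]         # one bag with two instances
label = 1 if bag_score(bag, w) > 0 else -1   # +1 here: best instance scores ~0.8
```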

1 code implementation • 14 Nov 2019 • Daiki Suehiro, Eiji Takimoto

In this work, we focus on a particular problem formulation called Multiple-Instance Learning (MIL), and show that various learning problems, including all the problems mentioned above as well as some new ones, can be reduced to MIL with theoretically guaranteed generalization bounds; the reductions are established under a new reduction scheme we provide as a by-product.

no code implementations • 20 Nov 2018 • Daiki Suehiro, Kohei Hatano, Eiji Takimoto, Shuji Yamamoto, Kenichi Bannai, Akiko Takeda

Classifiers based on a single shapelet are not sufficiently strong for certain applications.
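For context, a single-shapelet classifier typically thresholds one feature: the minimum distance between the time series and the shapelet over all aligned windows. A minimal sketch of that feature, with illustrative data:

```python
def shapelet_distance(series, shapelet):
    """Minimum squared distance from the shapelet to any window of the series."""
    L = len(shapelet)
    return min(
        sum((series[i + j] - shapelet[j]) ** 2 for j in range(L))
        for i in range(len(series) - L + 1)
    )

series = [0.0, 1.0, 2.0, 1.0, 0.0]
shapelet = [1.0, 2.0]
d = shapelet_distance(series, shapelet)  # best window [1.0, 2.0] matches exactly
```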

no code implementations • 27 Oct 2017 • Ken-ichiro Moridomi, Kohei Hatano, Eiji Takimoto

Moreover, we apply our method to online linear optimization over vectors and show that FTRL with the Burg entropy regularizer, the vector-case analogue of the log-determinant regularizer, works well.
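A hedged sketch of one FTRL step with the Burg entropy regularizer $R(x) = -\sum_i \log x_i$ for online linear losses over positive vectors; constraints (e.g. a norm bound on the decision) are omitted, and the loss vector and learning rate are illustrative.

```python
def ftrl_burg_step(loss_sum, eta):
    """Unconstrained minimizer of <loss_sum, x> + (1/eta) * sum_i(-log x[i]).

    Setting the gradient loss_sum[i] - 1/(eta * x[i]) to zero gives
    x[i] = 1 / (eta * loss_sum[i]), valid when loss_sum[i] > 0.
    """
    return [1.0 / (eta * l) for l in loss_sum]

x = ftrl_burg_step([2.0, 4.0], eta=1.0)  # -> [0.5, 0.25]
```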

no code implementations • 5 Sep 2017 • Daiki Suehiro, Kohei Hatano, Eiji Takimoto, Shuji Yamamoto, Kenichi Bannai, Akiko Takeda

We consider binary classification problems using local features of objects.

no code implementations • 5 Dec 2013 • Nir Ailon, Kohei Hatano, Eiji Takimoto

Unfortunately, CombBand requires at each step an approximation of the permanent of an $n$-by-$n$ matrix to increasingly high accuracy as $T$ grows, resulting in a total running time that is superlinear in $T$ and making it impractical for large time horizons.
