no code implementations • 13 Feb 2024 • Shinsaku Sakaue, Han Bao, Taira Tsuchiya, Taihei Oki
We extend the exploit-the-surrogate-gap framework to online structured prediction with \emph{Fenchel--Young losses}, a large family of surrogate losses including the logistic loss for multiclass classification, obtaining finite surrogate regret bounds in various structured prediction problems.
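A minimal sketch of a Fenchel–Young loss, not taken from the paper's code: with the regularizer $\Omega$ set to the negative Shannon entropy, whose convex conjugate is $\Omega^*(\theta)=\mathrm{logsumexp}(\theta)$, the generic form $L_\Omega(\theta; y) = \Omega^*(\theta) + \Omega(y) - \langle\theta, y\rangle$ reduces to the multinomial logistic (cross-entropy) loss for a one-hot target, illustrating how the logistic loss arises as a member of the family.

```python
import numpy as np

def fenchel_young_loss(theta, y):
    """Fenchel-Young loss L_Omega(theta; y) = Omega*(theta) + Omega(y) - <theta, y>
    with Omega = negative Shannon entropy, so Omega*(theta) = logsumexp(theta)."""
    m = theta.max()
    logsumexp = m + np.log(np.exp(theta - m).sum())  # numerically stable logsumexp
    p = y[y > 0]
    omega_y = np.sum(p * np.log(p))                  # negative entropy of y (0 for one-hot)
    return logsumexp + omega_y - theta @ y

theta = np.array([2.0, 0.5, -1.0])  # score vector
y = np.array([1.0, 0.0, 0.0])       # one-hot label: class 0
loss = fenchel_young_loss(theta, y)
```

For the one-hot `y` above, `loss` coincides with the standard cross-entropy `-log softmax(theta)[0]`.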
no code implementations • 1 Sep 2023 • Shinsaku Sakaue, Taihei Oki
On the theoretical side, a natural question is: how much data is sufficient to ensure the quality of recovered solutions?
no code implementations • 2 Feb 2023 • Shinsaku Sakaue, Taihei Oki
The main technical difficulty lies in learning predictions that are provably close to sets of all optimal solutions, for which we present an online-gradient-descent-based method.
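A toy sketch of projected online gradient descent, the generic template behind such online-gradient-descent-based methods (the problem instance below is illustrative, not the paper's setting): play $x_t$, observe a gradient $g_t$, and update $x_{t+1} = \Pi_K(x_t - \eta_t g_t)$ with step size $\eta_t = \eta/\sqrt{t}$.

```python
import numpy as np

def ogd(grad, x0, eta, rounds, project=lambda x: x):
    """Projected online gradient descent: play x_t, observe g_t = grad(x_t),
    then take the projected step x_{t+1} = project(x_t - (eta/sqrt(t)) * g_t)."""
    x = float(x0)
    for t in range(1, rounds + 1):
        g = grad(x)                            # gradient revealed at the played point
        x = project(x - eta / np.sqrt(t) * g)  # projected step with decaying step size
    return x

# Toy run: minimize f(x) = (x - 3)^2 over the interval [0, 5].
x_final = ogd(lambda x: 2.0 * (x - 3.0), x0=0.0, eta=1.0, rounds=500,
              project=lambda x: min(max(x, 0.0), 5.0))
```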
no code implementations • 17 Sep 2022 • Shinsaku Sakaue, Taihei Oki
Specifically, for rank-$k$ approximation using an $m \times n$ learned sketching matrix with $s$ non-zeros in each column, they proved an $\tilde{\mathrm{O}}(nsm)$ bound on the \emph{fat shattering dimension} ($\tilde{\mathrm{O}}$ hides logarithmic factors).
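A small illustration of the kind of sketching matrix being analyzed: an $m \times n$ matrix with exactly $s$ nonzeros per column, here filled with random $\pm 1/\sqrt{s}$ entries in CountSketch/OSNAP style. In the learned setting the sparsity pattern is kept but the nonzero values are trained on data; this sketch only shows the random baseline.

```python
import numpy as np

def sparse_sketch(m, n, s, rng):
    """Random m x n sketching matrix with exactly s nonzeros per column,
    each +-1/sqrt(s).  A *learned* sketch keeps this column-sparsity
    pattern but trains the nonzero values instead of drawing them."""
    S = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)  # s distinct rows per column
        S[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return S

rng = np.random.default_rng(0)
S = sparse_sketch(m=20, n=100, s=3, rng=rng)
A = rng.standard_normal((100, 8))
SA = S @ A  # compressed 20 x 8 matrix used downstream for low-rank approximation
```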
1 code implementation • 13 Jun 2022 • Shinichi Hemmi, Taihei Oki, Shinsaku Sakaue, Kaito Fujii, Satoru Iwata
One classical and practical method is the lazy greedy algorithm, which is applicable to general submodular function maximization, while a recent fast greedy algorithm based on the Cholesky factorization is more efficient for DPP MAP inference.
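A compact sketch of the lazy greedy algorithm on a toy coverage objective (illustrative only; the paper's setting is DPP MAP inference): since marginal gains of a monotone submodular function can only shrink as the solution grows, a stale gain stored in a max-heap remains a valid upper bound, so most re-evaluations can be skipped.

```python
import heapq

def lazy_greedy(f, ground, k):
    """Lazy greedy for monotone submodular maximization: pop the best stale
    gain; if it was computed against the current solution, take it, else
    refresh it and push it back."""
    S, base = [], f([])
    # heap entries: (-gain, element, |S| at the time the gain was computed)
    heap = [(-(f([e]) - base), e, 0) for e in ground]
    heapq.heapify(heap)
    while len(S) < k and heap:
        neg_gain, e, stamp = heapq.heappop(heap)
        if stamp == len(S):              # gain is up to date: select the element
            S.append(e)
        else:                            # refresh the stale upper bound
            gain = f(S + [e]) - f(S)
            heapq.heappush(heap, (-gain, e, len(S)))
    return S

# Toy coverage objective (monotone submodular): f(S) = |union of covered items|.
sets = {'a': {1, 2, 3}, 'b': {3, 4}, 'c': {4, 5}, 'd': {1}}
f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
picked = lazy_greedy(f, list(sets), k=2)
```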
no code implementations • 20 May 2022 • Shinsaku Sakaue, Taihei Oki
Motivated by this emerging approach, we study the sample complexity of learning heuristic functions for GBFS and A*.
no code implementations • 20 May 2022 • Shinsaku Sakaue, Taihei Oki
Augmenting algorithms with learned predictions is a promising approach for going beyond worst-case bounds.
1 code implementation • NeurIPS 2021 • Shinsaku Sakaue, Kengo Nakamura
We address Stackelberg models of combinatorial congestion games (CCGs); we aim to optimize the parameters of CCGs so that the selfish behavior of non-atomic players attains desirable equilibria.
no code implementations • 6 May 2020 • Shinsaku Sakaue
Driven by applications such as sensitivity analysis and end-to-end learning, the demand for differentiable optimization algorithms has been increasing significantly.
no code implementations • 17 Feb 2020 • Yoichi Chikahara, Shinsaku Sakaue, Akinori Fujino, Hisashi Kashima
To avoid restrictive functional assumptions, we define the {\it probability of individual unfairness} (PIU) and solve an optimization problem where PIU's upper bound, which can be estimated from data, is controlled to be close to zero.
no code implementations • 24 Apr 2019 • Kaito Fujii, Shinsaku Sakaue
We propose a new concept named adaptive submodularity ratio to study the greedy policy for sequential decision making.
no code implementations • NAACL 2018 • Shinsaku Sakaue, Tsutomu Hirao, Masaaki Nishino, Masaaki Nagata
This approach is known to have three advantages: its applicability to many useful submodular objective functions, the efficiency of the greedy algorithm, and the provable performance guarantee.
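The three advantages can be seen together on a toy instance (an illustrative coverage objective standing in for a summarization objective, not the paper's actual model): the greedy algorithm is a few lines, each step costs one pass over the candidates, and for monotone submodular objectives its value is provably within a $(1 - 1/e)$ factor of the optimum, which the snippet checks against brute force.

```python
import itertools
import math

def greedy(f, ground, k):
    """Plain greedy: repeatedly add the element with the largest marginal gain."""
    S = []
    for _ in range(k):
        e = max((x for x in ground if x not in S),
                key=lambda x: f(S + [x]) - f(S))
        S.append(e)
    return S

# Hypothetical sentence -> covered-concepts map (monotone submodular coverage).
cover = {'s1': {1, 2}, 's2': {2, 3, 4}, 's3': {4, 5}, 's4': {1, 5, 6}}
f = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0

k = 2
sol = greedy(f, list(cover), k)
best = max(f(list(T)) for T in itertools.combinations(cover, k))
# The (1 - 1/e) guarantee for monotone submodular maximization:
ok = f(sol) >= (1 - 1 / math.e) * best
```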