Search Results for author: Shinsaku Sakaue

Found 12 papers, 2 papers with code

Online Structured Prediction with Fenchel–Young Losses and Improved Surrogate Regret for Online Multiclass Classification with Logistic Loss

no code implementations • 13 Feb 2024 • Shinsaku Sakaue, Han Bao, Taira Tsuchiya, Taihei Oki

We extend the exploit-the-surrogate-gap framework to online structured prediction with Fenchel–Young losses, a large family of surrogate losses including the logistic loss for multiclass classification, obtaining finite surrogate regret bounds in various structured prediction problems.

Classification, Structured Prediction
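For context, the multiclass logistic loss is the Fenchel–Young loss generated by log-sum-exp, and it serves as a surrogate for the zero-one loss whose regret the paper bounds. A minimal NumPy sketch (function names are illustrative, not taken from the paper):

```python
import numpy as np

def logistic_fy_loss(scores, y):
    """Multiclass logistic loss, the Fenchel-Young loss generated by
    log-sum-exp: loss = logsumexp(scores) - scores[y]."""
    m = scores.max()  # shift by the max score for numerical stability
    lse = m + np.log(np.exp(scores - m).sum())
    return lse - scores[y]

def zero_one_loss(scores, y):
    """Target loss that the logistic surrogate upper-bounds."""
    return float(np.argmax(scores) != y)
```

With uniform scores over three classes, the logistic loss equals log 3, while the zero-one loss depends only on the argmax prediction.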

Data-Driven Projection for Reducing Dimensionality of Linear Programs: Generalization Bound and Learning Methods

no code implementations • 1 Sep 2023 • Shinsaku Sakaue, Taihei Oki

On the theoretical side, a natural question is: how much data is sufficient to ensure the quality of recovered solutions?

Generalization Bounds

Rethinking Warm-Starts with Predictions: Learning Predictions Close to Sets of Optimal Solutions for Faster L-/L♮-Convex Function Minimization

no code implementations • 2 Feb 2023 • Shinsaku Sakaue, Taihei Oki

The main technical difficulty lies in learning predictions that are provably close to sets of all optimal solutions, for which we present an online-gradient-descent-based method.
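As background, online gradient descent iterates a simple update: take a step against the observed (sub)gradient, then project back onto the feasible set. A generic sketch of this primitive (not the paper's specific learning method):

```python
import numpy as np

def online_gradient_descent(grads, x0, eta, project=lambda x: x):
    """Generic projected OGD: x_{t+1} = Proj(x_t - eta * g_t), where
    g_t is the (sub)gradient revealed at round t."""
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for g in grads:
        x = project(x - eta * np.asarray(g, dtype=float))
        iterates.append(x.copy())
    return iterates

# Two rounds in 1-D: gradients +1 then -1 cancel out.
its = online_gradient_descent([np.array([1.0]), np.array([-1.0])],
                              x0=[0.0], eta=0.5)
```

The `project` hook is where problem structure (e.g., the lattice-like feasible sets of L-/L♮-convex minimization) would enter; the identity default makes the sketch unconstrained.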

Improved Generalization Bound and Learning of Sparsity Patterns for Data-Driven Low-Rank Approximation

no code implementations • 17 Sep 2022 • Shinsaku Sakaue, Taihei Oki

Specifically, for rank-k approximation using an m × n learned sketching matrix with s non-zeros in each column, they proved an Õ(nsm) bound on the fat-shattering dimension (Õ hides logarithmic factors).
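To make the matrix shape concrete, here is a generic CountSketch-style construction of an m × n sketch with exactly s non-zeros per column; in the data-driven setting the positions and values would be learned, whereas here they are random for illustration:

```python
import numpy as np

def random_sparse_sketch(m, n, s, rng=None):
    """Return an m x n sketching matrix with exactly s non-zeros per
    column, at random rows with random +/-1 values (CountSketch-style
    when s = 1). In data-driven low-rank approximation, these entries
    would be learned rather than random."""
    rng = np.random.default_rng(rng)
    S = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)  # distinct rows
        S[rows, j] = rng.choice([-1.0, 1.0], size=s)
    return S

S = random_sparse_sketch(m=5, n=8, s=2, rng=0)
```

The mn "degrees of freedom" visible here (s row indices and s signs per column) are what the fat-shattering bound above counts, up to log factors.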

Lazy and Fast Greedy MAP Inference for Determinantal Point Process

1 code implementation • 13 Jun 2022 • Shinichi Hemmi, Taihei Oki, Shinsaku Sakaue, Kaito Fujii, Satoru Iwata

One classical and practical method is the lazy greedy algorithm, which is applicable to general submodular function maximization, while a recent fast greedy algorithm based on the Cholesky factorization is more efficient for DPP MAP inference.

Point Processes
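The lazy greedy algorithm mentioned above exploits submodularity: an element's marginal gain can only shrink as the solution grows, so stale gains kept in a max-heap are sound upper bounds and only the top candidate needs re-evaluation. A generic sketch, with a toy log-det objective standing in for the DPP MAP objective (this is not the authors' Cholesky-based implementation):

```python
import heapq
import numpy as np

def lazy_greedy(f, ground_set, k):
    """Select k elements maximizing a monotone submodular f via lazy
    evaluation: pop the best stale gain, recompute it against the
    current solution, and accept if it still beats the next bound."""
    S, fS = [], f([])
    heap = [(-(f([e]) - fS), e) for e in ground_set]
    heapq.heapify(heap)
    while len(S) < k and heap:
        _, e = heapq.heappop(heap)
        gain = f(S + [e]) - fS          # recompute against current S
        if not heap or gain >= -heap[0][0] - 1e-12:
            S.append(e)                 # still the best: accept
            fS += gain
        else:
            heapq.heappush(heap, (-gain, e))  # stale: push back
    return S

# Toy DPP-style objective: f(S) = log det(I + L[S, S]) for a PSD L.
A = np.array([[1.0, 0.2, 0.1], [0.0, 1.0, 0.3], [0.0, 0.0, 1.0]])
L = A @ A.T
f = (lambda S: float(np.linalg.slogdet(np.eye(len(S))
                                       + L[np.ix_(S, S)])[1])
     if True else None)
sel = lazy_greedy(lambda S: f(S) if S else 0.0, list(range(3)), 2)
```

Each accepted element costs one re-evaluation plus however many stale pops precede it, which in practice is far fewer than the n evaluations per step of plain greedy.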

Sample Complexity of Learning Heuristic Functions for Greedy-Best-First and A* Search

no code implementations • 20 May 2022 • Shinsaku Sakaue, Taihei Oki

Motivated by this emerging approach, we study the sample complexity of learning heuristic functions for GBFS and A*.

Discrete-Convex-Analysis-Based Framework for Warm-Starting Algorithms with Predictions

no code implementations • 20 May 2022 • Shinsaku Sakaue, Taihei Oki

Augmenting algorithms with learned predictions is a promising approach for going beyond worst-case bounds.

Differentiable Equilibrium Computation with Decision Diagrams for Stackelberg Models of Combinatorial Congestion Games

1 code implementation • NeurIPS 2021 • Shinsaku Sakaue, Kengo Nakamura

We address Stackelberg models of combinatorial congestion games (CCGs); we aim to optimize the parameters of CCGs so that the selfish behavior of non-atomic players attains desirable equilibria.

Differentiable Greedy Submodular Maximization: Guarantees, Gradient Estimators, and Applications

no code implementations • 6 May 2020 • Shinsaku Sakaue

Driven by applications such as sensitivity analysis and end-to-end learning, demand for differentiable optimization algorithms has been increasing significantly.

Learning Individually Fair Classifier with Path-Specific Causal-Effect Constraint

no code implementations • 17 Feb 2020 • Yoichi Chikahara, Shinsaku Sakaue, Akinori Fujino, Hisashi Kashima

To avoid restrictive functional assumptions, we define the probability of individual unfairness (PIU) and solve an optimization problem where PIU's upper bound, which can be estimated from data, is controlled to be close to zero.

Fairness

Beyond Adaptive Submodularity: Approximation Guarantees of Greedy Policy with Adaptive Submodularity Ratio

no code implementations • 24 Apr 2019 • Kaito Fujii, Shinsaku Sakaue

We propose a new concept named adaptive submodularity ratio to study the greedy policy for sequential decision making.

Decision Making, Feature Selection, +1

Provable Fast Greedy Compressive Summarization with Any Monotone Submodular Function

no code implementations • NAACL 2018 • Shinsaku Sakaue, Tsutomu Hirao, Masaaki Nishino, Masaaki Nagata

This approach is known to have three advantages: its applicability to many useful submodular objective functions, the efficiency of the greedy algorithm, and the provable performance guarantee.

Document Summarization, Extractive Summarization, +1
