no code implementations • 28 Dec 2022 • Tasuku Soma, Khashayar Gatmiry, Stefanie Jegelka
Distributionally robust optimization (DRO) can improve the robustness and fairness of learning methods.
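As a hedged illustration of one common DRO instance (group DRO, which may differ from the paper's exact setting), the objective replaces the average loss with the worst group-average loss. The function name and toy data below are illustrative only:

```python
import numpy as np

def group_dro_loss(per_sample_losses, group_ids, num_groups):
    """Worst-group loss: the quantity group DRO minimizes.

    A minimal sketch; the paper's actual algorithms are more involved.
    """
    group_losses = np.array([
        per_sample_losses[group_ids == g].mean() for g in range(num_groups)
    ])
    return group_losses.max()

# Toy usage: two groups with different average losses.
losses = np.array([0.2, 0.4, 1.0, 1.2])
groups = np.array([0, 0, 1, 1])
print(group_dro_loss(losses, groups, 2))  # -> 1.1 (worst group's mean loss)
```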
no code implementations • NeurIPS 2020 • Nicholas Harvey, Christopher Liaw, Tasuku Soma
For monotone submodular maximization subject to a matroid constraint, we give an efficient algorithm that achieves a $(1 - c/e - \varepsilon)$-regret of $O(\sqrt{kT \ln(n/k)})$, where $n$ is the size of the ground set, $k$ is the rank of the matroid, $\varepsilon > 0$ is a constant, and $c$ is the average curvature.
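For context, the classical offline baseline for this problem is the greedy algorithm for monotone submodular maximization under a matroid constraint. The sketch below is that baseline, not the paper's online algorithm; `f`, `independent`, and the toy coverage instance are illustrative:

```python
def greedy_matroid(ground_set, f, independent):
    """Offline greedy for monotone submodular maximization under a matroid.

    f(S) evaluates the submodular function on a frozenset S;
    independent(S) tests matroid independence. This is the classical
    greedy baseline, not the online algorithm from the paper.
    """
    S = frozenset()
    remaining = set(ground_set)
    while remaining:
        # Pick the feasible element with the largest marginal gain.
        best, best_gain = None, 0.0
        for e in remaining:
            if independent(S | {e}):
                gain = f(S | {e}) - f(S)
                if gain > best_gain:
                    best, best_gain = e, gain
        if best is None:
            break
        S |= {best}
        remaining.discard(best)
        remaining = {e for e in remaining if independent(S | {e})}
    return S

# Toy usage: maximum coverage under a rank-2 uniform matroid.
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
independent = lambda S: len(S) <= 2
print(greedy_matroid([1, 2, 3], f, independent))  # -> frozenset({1, 2})
```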
no code implementations • NeurIPS 2020 • Shinji Ito, Shuichi Hirahara, Tasuku Soma, Yuichi Yoshida
We propose novel algorithms with first- and second-order regret bounds for adversarial linear bandits.
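As a rough baseline sketch (not the paper's algorithm), the multi-armed special case of adversarial linear bandits, where actions are standard basis vectors, can be handled by EXP3 with importance-weighted loss estimates. The function `loss_fn` and the parameters below are illustrative:

```python
import numpy as np

def exp3(loss_fn, num_arms, horizon, eta=0.1, rng=None):
    """EXP3 on the multi-armed special case of adversarial linear bandits.

    A baseline sketch only; the paper's algorithms achieve sharper
    first- and second-order regret bounds.
    """
    rng = rng or np.random.default_rng(0)
    log_w = np.zeros(num_arms)
    total = 0.0
    for t in range(horizon):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()
        arm = rng.choice(num_arms, p=p)
        loss = loss_fn(t, arm)             # adversary's loss in [0, 1]
        total += loss
        log_w[arm] -= eta * loss / p[arm]  # importance-weighted update
    return total

# Toy adversary: arm 0 always suffers loss 0.9, arm 1 loss 0.1.
print(exp3(lambda t, a: [0.9, 0.1][a], num_arms=2, horizon=1000))
```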
no code implementations • 14 Feb 2020 • Tasuku Soma, Yuichi Yoshida
For convex and Lipschitz loss functions, we show that our algorithm has $O(1/\sqrt{n})$-convergence to the optimal CVaR, where $n$ is the number of samples.
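The CVaR objective here can be computed from samples via the Rockafellar-Uryasev variational form. Below is a minimal sketch, assuming the convention that $\mathrm{CVaR}_\alpha$ is the mean of the worst $\alpha$-fraction of losses; the function name and toy data are illustrative:

```python
import numpy as np

def empirical_cvar(losses, alpha):
    """Mean of the worst alpha-fraction of losses (Rockafellar-Uryasev form).

    CVaR_alpha(Z) = min_tau tau + E[(Z - tau)_+] / alpha; for an empirical
    sample the minimizing tau is the (1 - alpha)-quantile.
    """
    losses = np.asarray(losses, dtype=float)
    tau = np.quantile(losses, 1.0 - alpha)
    return tau + np.mean(np.maximum(losses - tau, 0.0)) / alpha

rng = np.random.default_rng(0)
z = rng.normal(size=100_000)
print(empirical_cvar(z, 0.05))  # approx 2.06 for a standard normal
```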
no code implementations • 27 Sep 2018 • Satoshi Hara, Koichi Ikeno, Tasuku Soma, Takanori Maehara
In this study, we formalize the feature attribution problem as a feature selection problem.
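One simple reading of this framing (an illustrative sketch, not necessarily the paper's exact formulation) scores each feature by how much ablating it toward a baseline changes the prediction, then selects the top-scoring features:

```python
import numpy as np

def ablation_importance(predict, x, baseline):
    """Score each feature by how much replacing it with a baseline value
    changes the model output -- a feature-selection view of attribution."""
    scores = np.empty(len(x))
    for j in range(len(x)):
        x_ablate = x.copy()
        x_ablate[j] = baseline[j]
        scores[j] = abs(predict(x) - predict(x_ablate))
    return scores

# Toy usage: a linear model; features 0 and 2 carry most of the output.
predict = lambda v: v @ np.array([3.0, 0.1, -2.0])
scores = ablation_importance(predict, np.ones(3), np.zeros(3))
print(np.argsort(scores)[::-1][:2])  # top-2 features -> [0 2]
```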
no code implementations • NeurIPS 2018 • Kaito Fujii, Tasuku Soma
In dictionary selection, several atoms are selected from a finite set of candidates so that the selected atoms approximate given data points well under sparse representation.
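Below is a simplified sketch of greedy dictionary selection, assuming every data point may use all selected atoms (the paper's setting additionally enforces per-point sparsity); the toy atoms and data are illustrative:

```python
import numpy as np

def greedy_dictionary_selection(candidates, data, k):
    """Greedily choose k atoms (columns of `candidates`) maximizing the
    squared norm of the data explained by least-squares projection onto
    the chosen atoms. A simplified sketch of dictionary selection.
    """
    n_atoms = candidates.shape[1]
    selected = []
    for _ in range(k):
        best_j, best_val = None, -np.inf
        for j in range(n_atoms):
            if j in selected:
                continue
            D = candidates[:, selected + [j]]
            # Explained energy: ||proj_D(data)||_F^2 via least squares.
            coef, *_ = np.linalg.lstsq(D, data, rcond=None)
            val = np.linalg.norm(D @ coef) ** 2
            if val > best_val:
                best_j, best_val = j, val
        selected.append(best_j)
    return selected

rng = np.random.default_rng(0)
atoms = rng.normal(size=(8, 20))                     # 20 candidate atoms in R^8
data = atoms[:, [3, 7]] @ rng.normal(size=(2, 50))   # points spanned by atoms 3, 7
print(greedy_dictionary_selection(atoms, data, k=2))  # likely [3, 7] or [7, 3]
```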
1 code implementation • 19 Jun 2018 • Satoshi Hara, Kouichi Ikeno, Tasuku Soma, Takanori Maehara
In the adversarial example problem, one seeks the smallest data perturbation that changes the model's output.
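For a linear classifier this minimal perturbation has a closed form, namely the projection onto the decision boundary; the sketch below shows that special case with illustrative weights (deep models require iterative methods, e.g. DeepFool-style):

```python
import numpy as np

def minimal_perturbation_linear(w, b, x):
    """Smallest L2 perturbation flipping a linear classifier sign(w.x + b).

    For a linear model the minimizer is the projection onto the decision
    boundary; this is a special-case sketch, not the paper's method.
    """
    margin = w @ x + b
    delta = -margin * w / (w @ w)
    return delta * 1.0001  # tiny overshoot to actually cross the boundary

w, b = np.array([2.0, -1.0]), 0.5
x = np.array([1.0, 1.0])
delta = minimal_perturbation_linear(w, b, x)
print(np.sign(w @ x + b), np.sign(w @ (x + delta) + b))  # 1.0 -1.0
```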
no code implementations • NeurIPS 2015 • Tasuku Soma, Yuichi Yoshida
We show that the generalized submodular cover problem can be applied to various problems and devise a bicriteria approximation algorithm.
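For reference, the classical greedy algorithm for (cost-weighted) submodular cover, which the generalized problem in the paper extends; `f`, `cost`, and the toy set-cover instance are illustrative:

```python
def greedy_submodular_cover(ground_set, f, target, cost=lambda e: 1.0):
    """Greedy for submodular cover: repeatedly add the element with the
    best marginal-gain-per-cost until f(S) reaches the target. The
    classical baseline, not the paper's bicriteria algorithm.
    """
    S = frozenset()
    while f(S) < target:
        gains = {e: (f(S | {e}) - f(S)) / cost(e)
                 for e in ground_set if e not in S}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break  # target unreachable
        S |= {best}
    return S

# Toy usage: set cover as submodular cover.
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(greedy_submodular_cover(sets.keys(), f, target=4))  # -> covers a-d
```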