no code implementations • 29 Dec 2024 • Yuya Hikima, Akiko Takeda
Although these existing methods have theoretical convergence guarantees for optimization problems with decision-dependent uncertainty, they require strong assumptions about the function and distribution or exhibit large variances in their gradient estimators.
1 code implementation • 4 Jun 2024 • Andi Han, Jiaxiang Li, Wei Huang, Mingyi Hong, Akiko Takeda, Pratik Jawanpuria, Bamdev Mishra
In this work, we propose to parameterize the weights as a sum of low-rank and sparse matrices for pretraining, which we call SLTrain.
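The low-rank-plus-sparse weight parameterization can be illustrated with a minimal NumPy sketch. The matrix sizes, rank, sparsity level, and fixed random support below are illustrative assumptions, not the paper's training recipe; the point is only the form W = BA + S and the resulting parameter savings.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 64, 64, 8   # illustrative layer sizes; r is the low-rank dimension
density = 0.03               # fraction of entries kept in the sparse factor (assumed)

# Low-rank factors: B is d_out x r, A is r x d_in
B = rng.standard_normal((d_out, r)) / np.sqrt(r)
A = rng.standard_normal((r, d_in)) / np.sqrt(d_in)

# Sparse component: a fixed random support with (trainable) values
n_nnz = int(density * d_out * d_in)
support = rng.choice(d_out * d_in, size=n_nnz, replace=False)
S = np.zeros(d_out * d_in)
S[support] = rng.standard_normal(n_nnz) * 0.01
S = S.reshape(d_out, d_in)

# The effective weight is the sum of the low-rank and sparse parts
W = B @ A + S

# Parameter count vs. a dense d_out x d_in matrix
dense_params = d_out * d_in
lowrank_sparse_params = r * (d_out + d_in) + n_nnz
print(dense_params, lowrank_sparse_params)
```

Both factors are learned during pretraining; the dense product is only formed when the weight is applied, so memory scales with r(d_out + d_in) plus the number of nonzeros rather than d_out * d_in.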
1 code implementation • 6 Feb 2024 • Andi Han, Bamdev Mishra, Pratik Jawanpuria, Akiko Takeda
Bilevel optimization has gained prominence in various applications.
no code implementations • 19 Mar 2022 • Kanji Sato, Akiko Takeda, Reiichiro Kawai, Taiji Suzuki
Gradient Langevin dynamics and a variety of its variants have attracted increasing attention owing to their convergence towards the global optimal solution, initially in the unconstrained convex framework and, more recently, even for non-convex problems with convex constraints.
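The basic (unconstrained) gradient Langevin dynamics update adds Gaussian noise to a gradient step, which is what lets the iterates escape local minima. Below is a minimal sketch on an assumed double-well objective; the step size, inverse temperature, and number of iterations are illustrative choices, and any single short run may or may not cross the barrier.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed double-well objective f(x) = (x^2 - 1)^2 + 0.3 x:
# global minimum near x = -1, shallower local minimum near x = +1
def grad(x):
    return 4.0 * x * (x**2 - 1.0) + 0.3

eta, beta = 1e-3, 5.0   # step size and inverse temperature (illustrative)
x = 1.0                 # start in the basin of the *local* minimum
for _ in range(50_000):
    # Langevin update: gradient step plus sqrt(2*eta/beta) Gaussian noise
    x = x - eta * grad(x) + np.sqrt(2.0 * eta / beta) * rng.standard_normal()

print(x)
```

With noise switched off this is plain gradient descent and the iterate stays near the local minimum at x = +1; the injected noise is what gives the dynamics a chance to reach the global basin, with higher temperature (smaller beta) making barrier crossings more frequent at the cost of a more diffuse stationary distribution.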
no code implementations • NeurIPS 2021 • Ryo Sato, Mirai Tanaka, Akiko Takeda
Although application examples of multilevel optimization have been discussed since the 1990s, the development of solution methods was long limited almost entirely to bilevel cases due to the difficulty of the problem.
no code implementations • 11 Mar 2021 • Yuto Mori, Atsushi Nitanda, Akiko Takeda
Model extraction attacks have become serious issues for service providers using machine learning.
1 code implementation • 31 May 2020 • Daiki Suehiro, Kohei Hatano, Eiji Takimoto, Shuji Yamamoto, Kenichi Bannai, Akiko Takeda
We propose a new formulation of Multiple-Instance Learning (MIL), in which a unit of data consists of a set of instances called a bag.
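The bag structure of MIL can be sketched in a few lines. Under the classic MIL assumption, a bag is labeled positive iff it contains at least one positive instance, so a common bag-level score is the maximum over instance scores. The linear instance scorer and the toy bags below are assumptions for illustration, not the learner proposed in the paper.

```python
import numpy as np

# A bag is a set (here: list) of instance feature vectors sharing one label.
# Score a bag as the max over instance scores (classic MIL assumption).
def bag_score(bag, w, b):
    return max(float(x @ w + b) for x in bag)

w, b = np.array([1.0, -1.0]), -0.5   # illustrative linear instance scorer

pos_bag = [np.array([0.2, 0.1]), np.array([2.0, 0.0])]  # contains one positive instance
neg_bag = [np.array([0.1, 0.3]), np.array([0.0, 0.5])]  # all instances negative

print(bag_score(pos_bag, w, b) > 0, bag_score(neg_bag, w, b) > 0)
```

A single positive instance is enough to flip the bag's prediction, which is exactly why instance-level labels are not required in this setting.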
no code implementations • 16 Feb 2020 • Hikaru Ogura, Akiko Takeda
However, MD quantifies not only discrimination but also explanatory bias, i.e., the difference in outcomes that is justified by explanatory features.
no code implementations • 20 Nov 2018 • Daiki Suehiro, Kohei Hatano, Eiji Takimoto, Shuji Yamamoto, Kenichi Bannai, Akiko Takeda
Classifiers based on a single shapelet are not sufficiently strong for certain applications.
1 code implementation • ICML 2018 • Junpei Komiyama, Akiko Takeda, Junya Honda, Hajime Shimao
However, a fairness level as a constraint induces a nonconvexity of the feasible region, which disables the use of an off-the-shelf convex optimizer.
1 code implementation • 15 Jun 2018 • Daniel Andrade, Akiko Takeda, Kenji Fukumizu
Even more severely, small, insignificant partial correlations due to noise can dramatically change the clustering result when evaluating, for example, with the Bayesian Information Criterion (BIC).
no code implementations • 19 Apr 2018 • Tianxiang Liu, Ting Kei Pong, Akiko Takeda
Moreover, for a large class of loss functions and regularizers, the KL exponent of the corresponding potential function is shown to be 1/2, which implies that pDCA$_e$ is locally linearly convergent when applied to these problems.
no code implementations • NeurIPS 2017 • Junpei Komiyama, Junya Honda, Akiko Takeda
Motivated by online advertising, we study a multiple-play multi-armed bandit problem with position bias, involving several slots in which later slots yield fewer rewards.
no code implementations • 20 Nov 2017 • Matthew Norton, Akiko Takeda, Alexander Mafusalov
In this paper, we explore an optimistic, or best-case view of uncertainty and show that it can be a fruitful approach.
no code implementations • 16 Oct 2017 • Tianxiang Liu, Ting Kei Pong, Akiko Takeda
We consider a class of nonconvex nonsmooth optimization problems whose objective is the sum of a smooth function and a finite number of nonnegative, proper, closed, possibly nonsmooth functions (whose proximal mappings are easy to compute), some of which are further composed with linear maps.
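"Easy-to-compute proximal mappings" can be made concrete with the standard textbook example: the proximal mapping of g at v is argmin_x g(x) + (1/2)||x - v||^2, and for g = lam * ||.||_1 it has the closed-form soft-thresholding solution. This is a generic illustration of a prox-friendly nonsmooth term, not code from the paper.

```python
import numpy as np

# Proximal mapping of lam * ||.||_1: componentwise soft-thresholding.
# Entries with |v_i| <= lam are set to zero; the rest shrink toward zero by lam.
def prox_l1(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

print(prox_l1(np.array([1.5, -0.2, 0.7]), 0.5))
```

For example, with lam = 0.5 the input [1.5, -0.2, 0.7] maps to [1.0, 0.0, 0.2]: the middle entry is below the threshold and is zeroed out, which is the mechanism by which L1 terms induce sparsity.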
no code implementations • 5 Sep 2017 • Daiki Suehiro, Kohei Hatano, Eiji Takimoto, Shuji Yamamoto, Kenichi Bannai, Akiko Takeda
We consider binary classification problems using local features of objects.
1 code implementation • NeurIPS 2017 • Song Liu, Akiko Takeda, Taiji Suzuki, Kenji Fukumizu
Density ratio estimation is a vital tool in both the machine learning and statistics communities.
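One standard way to estimate a density ratio r(x) = p(x)/q(x) (a generic reduction, not the estimator from the paper) is via a probabilistic classifier: if c(x) is the probability that x came from p rather than q, then r(x) = c(x)/(1 - c(x)) times the sample-size ratio n_q/n_p. The sketch below fits a tiny logistic regression by gradient descent on assumed Gaussian samples.

```python
import numpy as np

rng = np.random.default_rng(0)
xp = rng.normal(0.0, 1.0, 2000)   # samples from p = N(0, 1)
xq = rng.normal(1.0, 1.0, 2000)   # samples from q = N(1, 1)

# Label p-samples 1 and q-samples 0, then fit c(x) = P(label=1 | x)
X = np.concatenate([xp, xq])
y = np.concatenate([np.ones(2000), np.zeros(2000)])

w, b = 0.0, 0.0
for _ in range(2000):
    c = 1.0 / (1.0 + np.exp(-(w * X + b)))
    g = c - y                      # logistic-loss gradient signal
    w -= 0.05 * np.mean(g * X)
    b -= 0.05 * np.mean(g)

def ratio(x):
    c = 1.0 / (1.0 + np.exp(-(w * x + b)))
    return c / (1.0 - c)           # n_q / n_p = 1 here

print(ratio(0.0), ratio(1.0))
```

For these two Gaussians the true log-ratio is linear in x, so a logistic model is well specified: the estimate should exceed 1 where p dominates (near x = 0) and fall below 1 where q dominates (near x = 1).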
no code implementations • 3 Sep 2014 • Takafumi Kanamori, Shuhei Fujiwara, Akiko Takeda
For learning parameters such as the regularization parameter in our algorithm, we derive a simple formula that guarantees the robustness of the classifier.
no code implementations • NeurIPS 2013 • Shinichi Nakajima, Akiko Takeda, S. Derin Babacan, Masashi Sugiyama, Ichiro Takeuchi
However, Bayesian learning is often obstructed by computational difficulty: rigorous Bayesian learning is intractable in many models, and its variational Bayesian (VB) approximation is prone to suffer from local minima.