
no code implementations • ICML 2020 • Naoto Ohsaka, Tatsuya Matsuoka

We consider the product of determinantal point processes (DPPs), a point process whose probability mass is proportional to the product of principal minors of multiple matrices, as a natural and promising generalization of DPPs.
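To make the definition concrete, here is a minimal sketch of the unnormalized mass of a product of DPPs: for a subset $S$, it is the product of the principal minors $\det(A_{S,S})$ over all kernel matrices. The function names (`det`, `product_dpp_mass`) are illustrative, not from the paper, and the recursive determinant is only suitable for tiny matrices.

```python
def det(m):
    """Determinant via Laplace expansion (fine for tiny matrices)."""
    n = len(m)
    if n == 0:
        return 1.0  # convention: the determinant of the empty matrix is 1
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def product_dpp_mass(matrices, S):
    """Unnormalized probability mass of subset S under a product of DPPs:
    the product of the principal minors det(A_{S,S}) over all kernels A."""
    S = list(S)
    mass = 1.0
    for A in matrices:
        mass *= det([[A[i][j] for j in S] for i in S])
    return mass
```

For example, with kernels $\mathrm{diag}(2,3)$ and the identity, the mass of $S=\{0,1\}$ is $6 \cdot 1 = 6$.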

no code implementations • 23 May 2023 • Naoto Ohsaka, Riku Togashi

Diversification of recommendation results is a promising approach for coping with the uncertainty associated with users' information needs.

no code implementations • 23 May 2023 • Naoto Ohsaka, Riku Togashi

Beyond accuracy, there are a variety of aspects to the quality of recommender systems, such as diversity, fairness, and robustness.

no code implementations • 28 Nov 2021 • Naoto Ohsaka, Tatsuya Matsuoka

(2) $\sum_S\det({\bf A}_{S, S})\det({\bf B}_{S, S})\det({\bf C}_{S, S})$ is NP-hard to approximate within a factor of $2^{O(|I|^{1-\epsilon})}$ or $2^{O(n^{1/\epsilon})}$ for any $\epsilon>0$, where $|I|$ is the input size and $n$ is the order of the input matrix.
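The quantity in the hardness result above can be stated concretely by brute force: sum the product of the three principal minors over all $2^n$ subsets. This exponential-time sketch only illustrates the definition of the quantity whose approximation is shown NP-hard; the names (`det`, `det_product_sum`) are illustrative.

```python
from itertools import combinations

def det(m):
    """Determinant via Laplace expansion (fine for tiny matrices)."""
    n = len(m)
    if n == 0:
        return 1.0  # convention: the determinant of the empty matrix is 1
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def det_product_sum(A, B, C):
    """Brute-force evaluation of sum_S det(A_{S,S}) det(B_{S,S}) det(C_{S,S})
    over all subsets S of {0, ..., n-1}; exponential time in n."""
    n = len(A)
    total = 0.0
    for k in range(n + 1):
        for S in combinations(range(n), k):
            sub = lambda M: [[M[i][j] for j in S] for i in S]
            total += det(sub(A)) * det(sub(B)) * det(sub(C))
    return total
```

With $A = B = C = I_2$, every subset contributes 1, so the sum over the four subsets is 4.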

no code implementations • 2 Sep 2021 • Naoto Ohsaka

As a corollary of the first result, we demonstrate that the normalizing constant for E-DPPs of any (fixed) constant exponent $p \geq \beta^{-1} = 10^{10^{13}}$ is $\textsf{NP}$-hard to approximate within a factor of $2^{\beta pn}$, which is in contrast to the case of $p \leq 1$ admitting a fully polynomial-time randomized approximation scheme.
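The normalizing constant of an E-DPP with exponent $p$ is $Z_p(L) = \sum_S \det(L_{S,S})^p$. A brute-force sketch (exponential time, illustrative names) makes the quantity whose approximation is shown hard explicit:

```python
from itertools import combinations

def det(m):
    """Determinant via Laplace expansion (fine for tiny matrices)."""
    n = len(m)
    if n == 0:
        return 1.0  # convention: the determinant of the empty matrix is 1
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def edpp_normalizer(L, p):
    """Brute-force normalizing constant of an E-DPP with exponent p:
    Z_p(L) = sum_S det(L_{S,S})**p over all subsets S; exponential time."""
    n = len(L)
    return sum(det([[L[i][j] for j in S] for i in S]) ** p
               for k in range(n + 1)
               for S in combinations(range(n), k))
```

For $L = \mathrm{diag}(2, 3)$ the principal minors are $1, 2, 3, 6$, so $Z_1 = 12$ and $Z_2 = 1 + 4 + 9 + 36 = 50$.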

no code implementations • 25 Feb 2021 • Tatsuya Matsuoka, Naoto Ohsaka

We consider determinantal point processes (DPPs) constrained by spanning trees.

no code implementations • 15 Jan 2021 • Tomoya Sakai, Naoto Ohsaka

The task is regarded as predictive optimization, but existing predictive optimization methods have not been extended to handle multiple domains.

no code implementations • NeurIPS 2015 • Naoto Ohsaka, Yuichi Yoshida

A $k$-submodular function is a generalization of a submodular function, where the input consists of $k$ disjoint subsets, instead of a single subset, of the domain. Many machine learning problems, including influence maximization with $k$ kinds of topics and sensor placement with $k$ kinds of sensors, can be naturally modeled as the problem of maximizing monotone $k$-submodular functions. In this paper, we give constant-factor approximation algorithms for maximizing monotone $k$-submodular functions subject to several size constraints. The running times of our algorithms are almost linear in the domain size. We experimentally demonstrate that our algorithms outperform baseline algorithms in terms of solution quality.
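A simple greedy heuristic illustrates the setting: assign each chosen item one of $k$ types, always adding the (item, type) pair with the largest marginal gain until the size budget is spent. This is a generic sketch of the greedy idea, not the paper's algorithms; the names and the toy modular objective (a special case of a monotone $k$-submodular function) are illustrative.

```python
def greedy_k_submodular(items, k, budget, f):
    """Greedy heuristic for monotone k-submodular maximization under a
    total size constraint: repeatedly add the (item, type) pair with the
    largest marginal gain until the budget is exhausted."""
    assignment = {}  # item -> type in {0, ..., k-1}
    for _ in range(budget):
        base = f(assignment)
        best, best_gain = None, 0.0
        for i in items:
            if i in assignment:
                continue
            for t in range(k):
                gain = f({**assignment, i: t}) - base
                if best is None or gain > best_gain:
                    best, best_gain = (i, t), gain
        if best is None:
            break
        assignment[best[0]] = best[1]
    return assignment

# Toy monotone objective (modular, hence k-submodular): each item
# contributes a type-dependent value once it is assigned a type.
value = {"a": [3.0, 1.0], "b": [2.0, 5.0], "c": [1.0, 1.0]}
f = lambda x: sum(value[i][t] for i, t in x.items())
sol = greedy_k_submodular(list(value), k=2, budget=2, f=f)
```

With a budget of 2, the greedy picks "b" with type 1 (gain 5) and then "a" with type 0 (gain 3).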

Papers With Code is a free resource with all data licensed under CC-BY-SA.