Search Results for author: Naoto Ohsaka

Found 10 papers, 1 paper with code

On the (In)tractability of Computing Normalizing Constants for the Product of Determinantal Point Processes

no code implementations · ICML 2020 · Naoto Ohsaka, Tatsuya Matsuoka

We consider the product of determinantal point processes (DPPs), a point process whose probability mass is proportional to the product of principal minors of multiple matrices, as a natural and promising generalization of DPPs.

Open-Ended Question Answering · Point Processes
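The snippet above defines the product of DPPs via products of principal minors. Since the entry lists no code implementation, here is an illustrative sketch of that unnormalized mass; `product_dpp_mass` and `random_psd` are hypothetical helper names, not code from the paper:

    import numpy as np

    def product_dpp_mass(kernels, S):
        """Unnormalized mass of subset S under a product of DPPs: the
        product of the principal minors det(A[S, S]) over all kernels A."""
        idx = np.ix_(list(S), list(S))
        return float(np.prod([np.linalg.det(A[idx]) for A in kernels]))

    rng = np.random.default_rng(0)
    def random_psd(n):
        M = rng.normal(size=(n, n))
        return M @ M.T

    # Toy example: two random PSD kernels on a ground set of size 4.
    A, B = random_psd(4), random_psd(4)
    print(product_dpp_mass([A, B], [0, 2]))  # proportional to Pr[S = {0, 2}]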

Fast and Examination-agnostic Reciprocal Recommendation in Matching Markets

no code implementations · 15 Jun 2023 · Yoji Tomita, Riku Togashi, Yuriko Hashizume, Naoto Ohsaka

In addition, ensuring that recommendation opportunities do not disproportionately favor popular users is essential for the total number of matches and for fairness among users.

Fairness · Recommendation Systems

Safe Collaborative Filtering

1 code implementation · 8 Jun 2023 · Riku Togashi, Tatsushi Oka, Naoto Ohsaka, Tetsuro Morimura

Excellent tail performance is crucial for modern machine learning tasks, such as algorithmic fairness, class imbalance, and risk-sensitive decision making, as it ensures the effective handling of challenging samples within a dataset.

Collaborative Filtering · Computational Efficiency · +3
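"Tail performance" in the snippet refers to the worst end of the per-user loss distribution. One standard way to measure it is conditional value-at-risk (CVaR), the mean of the worst $\alpha$-fraction of losses; the sketch below is only an illustration of that measure, not the paper's training objective:

    import numpy as np

    def cvar(losses, alpha=0.1):
        """Conditional value-at-risk: the mean loss over the worst
        alpha-fraction of samples (here, per-user recommendation losses)."""
        losses = np.sort(np.asarray(losses))[::-1]   # worst losses first
        k = max(1, int(np.ceil(alpha * len(losses))))
        return losses[:k].mean()

    per_user_loss = np.random.default_rng(0).exponential(size=1000)
    print("mean loss:", per_user_loss.mean())
    print("CVaR@0.1 :", cvar(per_user_loss))         # the tail is much worse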

A Critical Reexamination of Intra-List Distance and Dispersion

no code implementations · 23 May 2023 · Naoto Ohsaka, Riku Togashi

Diversification of recommendation results is a promising approach for coping with the uncertainty associated with users' information needs.
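For reference, the intra-list distance (ILD) reexamined in this paper is the average pairwise distance among the items in one recommendation list. A minimal sketch, assuming cosine distance over item embeddings (the distance choice is an assumption, not specified by the snippet):

    import numpy as np
    from itertools import combinations

    def intra_list_distance(item_vecs):
        """Intra-list distance (ILD): average pairwise cosine distance
        over the items in a single recommendation list."""
        V = np.asarray(item_vecs, dtype=float)
        V = V / np.linalg.norm(V, axis=1, keepdims=True)
        return float(np.mean([1.0 - V[i] @ V[j]
                              for i, j in combinations(range(len(V)), 2)]))

    rec_list = np.random.default_rng(0).normal(size=(5, 16))  # 5 items, 16-dim
    print(intra_list_distance(rec_list))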

Curse of "Low" Dimensionality in Recommender Systems

no code implementations · 23 May 2023 · Naoto Ohsaka, Riku Togashi

Beyond accuracy, there are a variety of aspects to the quality of recommender systems, such as diversity, fairness, and robustness.

Fairness · Recommendation Systems

Computational Complexity of Normalizing Constants for the Product of Determinantal Point Processes

no code implementations · 28 Nov 2021 · Naoto Ohsaka, Tatsuya Matsuoka

(2) $\sum_S\det({\bf A}_{S, S})\det({\bf B}_{S, S})\det({\bf C}_{S, S})$ is NP-hard to approximate within a factor of $2^{O(|I|^{1-\epsilon})}$ or $2^{O(n^{1/\epsilon})}$ for any $\epsilon>0$, where $|I|$ is the input size and $n$ is the order of the input matrix.

Open-Ended Question Answering · Point Processes
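The sum $\sum_S\det({\bf A}_{S, S})\det({\bf B}_{S, S})\det({\bf C}_{S, S})$ in the snippet is the normalizing constant of a product of three DPPs. Brute-force evaluation enumerates all $2^n$ subsets, which is exactly the exponential cost that the paper's hardness results speak to; a minimal sketch (not from the paper):

    import numpy as np
    from itertools import combinations

    def product_dpp_normalizer(kernels):
        """Exact normalizing constant sum_S prod_j det(M_j[S, S]),
        by exhaustive enumeration of all 2^n subsets (exponential time)."""
        n = kernels[0].shape[0]
        total = 0.0
        for r in range(n + 1):
            for S in combinations(range(n), r):
                idx = np.ix_(S, S)
                total += np.prod([np.linalg.det(M[idx]) for M in kernels])
        return total

    rng = np.random.default_rng(0)
    def random_psd(n):
        M = rng.normal(size=(n, n))
        return M @ M.T

    print(product_dpp_normalizer([random_psd(4) for _ in range(3)]))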

Some Inapproximability Results of MAP Inference and Exponentiated Determinantal Point Processes

no code implementations · 2 Sep 2021 · Naoto Ohsaka

As a corollary of the first result, we demonstrate that the normalizing constant for E-DPPs of any (fixed) constant exponent $p \geq \beta^{-1} = 10^{10^{13}}$ is $\textsf{NP}$-hard to approximate within a factor of $2^{\beta pn}$, which is in contrast to the case of $p \leq 1$ admitting a fully polynomial-time randomized approximation scheme.

Point Processes
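For context, an exponentiated DPP (E-DPP) with exponent $p$ assigns each subset $S$ a mass proportional to $\det({\bf L}_{S,S})^p$, so the normalizing constant is $Z_p = \sum_S \det({\bf L}_{S,S})^p$. The brute-force sketch below only illustrates this quantity; the paper's results concern how hard it is to approximate for large constant $p$:

    import numpy as np
    from itertools import combinations

    def edpp_normalizer(L, p):
        """Z_p = sum_S det(L[S, S])**p over all 2^n subsets S, the E-DPP
        normalizing constant, by brute-force (exponential-time) enumeration."""
        n = L.shape[0]
        return sum(
            np.linalg.det(L[np.ix_(S, S)]) ** p
            for r in range(n + 1)
            for S in combinations(range(n), r)
        )

    rng = np.random.default_rng(0)
    M = rng.normal(size=(4, 4))
    L = M @ M.T                  # PSD kernel, so every principal minor is >= 0
    print(edpp_normalizer(L, p=2.0))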

Predictive Optimization with Zero-Shot Domain Adaptation

no code implementations · 15 Jan 2021 · Tomoya Sakai, Naoto Ohsaka

The task is regarded as predictive optimization, but existing predictive optimization methods have not been extended to handle multiple domains.

Domain Adaptation
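"Predictive optimization" here means predicting the outcome of each candidate decision with a learned model and then choosing the decision that optimizes the prediction. A minimal single-domain sketch of that loop (the paper's zero-shot multi-domain extension is not reproduced here):

    import numpy as np

    rng = np.random.default_rng(0)

    # Step 1: learn a predictor of outcome y from decision features x
    # (ordinary least squares on training data).
    X_train = rng.normal(size=(200, 3))
    true_w = np.array([1.0, -2.0, 0.5])
    y_train = X_train @ true_w + rng.normal(scale=0.1, size=200)
    w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

    # Step 2: predictive optimization: score each candidate decision by
    # its predicted outcome and pick the best one.
    candidates = rng.normal(size=(50, 3))
    best = candidates[np.argmax(candidates @ w)]
    print("chosen decision:", best)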

Monotone k-Submodular Function Maximization with Size Constraints

no code implementations · NeurIPS 2015 · Naoto Ohsaka, Yuichi Yoshida

A $k$-submodular function is a generalization of a submodular function, where the input consists of $k$ disjoint subsets, instead of a single subset, of the domain. Many machine learning problems, including influence maximization with $k$ kinds of topics and sensor placement with $k$ kinds of sensors, can be naturally modeled as the problem of maximizing monotone $k$-submodular functions. In this paper, we give constant-factor approximation algorithms for maximizing monotone $k$-submodular functions subject to several size constraints. The running times of our algorithms are almost linear in the domain size. We experimentally demonstrate that our algorithms outperform baseline algorithms in terms of the solution quality.

BIG-bench Machine Learning
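Since the abstract describes greedy, constant-factor algorithms, here is a minimal sketch of a greedy scheme for the total size constraint, using a toy coverage objective (type-dependent target coverage, a simple monotone $k$-submodular function). This illustrates the setting under stated assumptions; it is not the paper's experimental code:

    import numpy as np

    def greedy_k_submodular(f, n, k, budget):
        """Greedy for monotone k-submodular maximization under a total size
        constraint: repeatedly commit the (element, type) pair whose marginal
        gain is largest, until `budget` elements are assigned."""
        assign = {}                              # element -> type in {0..k-1}
        for _ in range(budget):
            gain, e, i = max(
                (f({**assign, e: i}) - f(assign), e, i)
                for e in range(n) if e not in assign
                for i in range(k)
            )
            assign[e] = i
        return assign

    # Toy sensor-placement-style objective: installing element e as type i
    # covers a fixed target set; f counts the covered targets.
    rng = np.random.default_rng(0)
    n, k, n_targets = 8, 2, 20
    cover = {(e, i): set(rng.choice(n_targets, size=5, replace=False))
             for e in range(n) for i in range(k)}

    def f(assign):
        return len(set().union(*(cover[e, i] for e, i in assign.items())))

    print(greedy_k_submodular(f, n, k, budget=3))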
