Deep Determinantal Point Processes

17 Nov 2018 · Mike Gartrell, Elvis Dohmatob, Jon Alberdi

Determinantal point processes (DPPs) have attracted significant attention as an elegant model that captures the balance between quality and diversity within sets. DPPs are parameterized by a positive semi-definite kernel matrix. While DPPs have substantial expressive power, they are fundamentally limited by the parameterization of the kernel matrix and their inability to capture nonlinear interactions between items within sets. We present the deep DPP model as a way to address these limitations, using a deep feed-forward neural network to learn the kernel matrix. In addition to allowing us to capture nonlinear item interactions, the deep DPP also allows easy incorporation of item metadata into DPP learning. Since the learning target is the DPP kernel matrix, the deep DPP lets us use existing DPP algorithms for efficient learning, sampling, and prediction. Through an evaluation on several real-world datasets, we show experimentally that the deep DPP can provide a considerable improvement in the predictive performance of DPPs, while also outperforming strong baseline models in many cases.
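The core idea described in the abstract, producing the DPP kernel from a feed-forward network applied to item metadata, can be sketched in a few lines. Everything below is an illustrative assumption, not the paper's actual architecture: the layer sizes, the random (untrained) weights, and the choice of `L = V V^T` to guarantee a positive semi-definite kernel are all hypothetical. The subset log-likelihood formula `log P(S) = log det(L_S) - log det(L + I)` is the standard L-ensemble DPP likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical item metadata: 6 items, each with 4 features.
X = rng.normal(size=(6, 4))

# Toy two-layer feed-forward network; in the deep DPP these weights
# would be learned by maximizing the DPP log-likelihood.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def embed(features):
    """Map item features to embeddings V via an MLP with a ReLU hidden layer."""
    hidden = np.maximum(features @ W1, 0.0)
    return hidden @ W2

# Kernel L = V V^T is symmetric positive semi-definite by construction,
# so it is always a valid DPP kernel regardless of the network weights.
V = embed(X)
L = V @ V.T

def subset_log_likelihood(L, S):
    """Standard L-ensemble DPP: log P(S) = log det(L_S) - log det(L + I)."""
    _, logdet_S = np.linalg.slogdet(L[np.ix_(S, S)])
    _, logdet_Z = np.linalg.slogdet(L + np.eye(L.shape[0]))
    return logdet_S - logdet_Z

ll = subset_log_likelihood(L, [0, 2, 4])
print(ll)  # a finite negative number (a log-probability)
```

Because the network only produces the kernel matrix, downstream DPP machinery (exact sampling, conditioning, MAP inference) applies to `L` unchanged, which is the "existing DPP algorithms" point made in the abstract.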

