Learning Optimal and Near-Optimal Lexicographic Preference Lists

19 Sep 2019 · Ahmed Moussa, Xudong Liu

We consider learning problems for an intuitive and concise preference model called lexicographic preference lists (LP-lists). Given a set of examples, each a pairwise ordinal preference over objects described by discrete-valued attributes, we want to learn (1) an optimal LP-list that decides the maximum number of these examples, or (2) a near-optimal LP-list that decides as many examples as it can. To this end, we introduce a dynamic programming based algorithm for the first problem and a genetic algorithm for the second. Furthermore, we empirically demonstrate that the sub-optimal models computed by the genetic algorithm closely approximate the optimal models computed by our dynamic programming based algorithm, and that the genetic algorithm outperforms the greedy-heuristic baseline, achieving higher accuracy in predicting new preferences.
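To make the model concrete, below is a minimal sketch of how an LP-list decides a pairwise preference example. It assumes an LP-list is an ordered list of attributes, each paired with a total order over that attribute's discrete values; the representation and the names `decide` and `fitness` are illustrative assumptions, not the paper's implementation.

```python
from typing import Dict, List, Optional, Tuple

# An LP-list: attributes in importance order, each with its values ranked
# from most to least preferred. (Illustrative encoding, not the paper's code.)
LPList = List[Tuple[str, List[str]]]
Obj = Dict[str, str]

def decide(lp: LPList, a: Obj, b: Obj) -> Optional[Obj]:
    """Scan attributes in the LP-list's order; the first attribute on which
    the two objects differ decides the preferred one. If no listed attribute
    separates them, the pair is left undecided (None)."""
    for attr, order in lp:
        if a[attr] != b[attr]:
            return a if order.index(a[attr]) < order.index(b[attr]) else b
    return None

def fitness(lp: LPList, examples: List[Tuple[Obj, Obj]]) -> int:
    """Count the examples (preferred, dispreferred) the LP-list decides
    correctly -- the quantity both learning problems aim to maximize."""
    return sum(1 for pref, disp in examples if decide(lp, pref, disp) is pref)

# Tiny usage example over two discrete attributes.
examples = [
    ({"color": "red", "size": "big"}, {"color": "blue", "size": "big"}),
    ({"color": "blue", "size": "small"}, {"color": "blue", "size": "big"}),
]
lp = [("size", ["small", "big"]), ("color", ["red", "blue"])]
print(fitness(lp, examples))  # -> 2: both examples decided in agreement
```

A count like `fitness` is the natural objective here: the dynamic programming algorithm would maximize it exactly, while a genetic algorithm could use it directly as the fitness of a candidate LP-list.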
