A Two-Stage Neural-Filter Pareto Front Extractor and the need for Benchmarking

29 Sep 2021  ·  Soumyajit Gupta, Gurpreet Singh, Matthew Lease ·

Pareto solutions are optimal trade-offs between multiple competing objectives over the feasible set satisfying imposed constraints. Fixed-point iterative strategies do not always converge, and each run returns at most one solution point. Consequently, retrieving a Pareto front requires multiple runs of a scalarization problem, all of which must converge. Recently proposed Multi-Task Learning (MTL) solvers claim to achieve Pareto solutions by combining Linear Scalarization and domain decomposition. We demonstrate key shortcomings of MTL solvers that limit their usability for real-world applications. These issues include unjustified convexity assumptions on practical problems, incomplete and often wrong inferences on datasets that violate the Pareto definition, and a lack of proper benchmarking and verification. We propose a two-stage Pareto framework, Hybrid Neural Pareto Front (HNPF), that is accurate and handles non-convex functions and constraints. The Stage-1 neural network efficiently extracts the weak Pareto front, using the Fritz-John Conditions (FJC) as the discriminator, with no convexity assumptions on the objectives or constraints. An FJC-guided diffusive manifold is used to bound the error between the true and the Stage-1 extracted weak Pareto front. The Stage-2, low-cost Pareto filter then extracts the strong Pareto subset from this weak front. Numerical experiments demonstrate the accuracy and efficiency of our approach.
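The abstract describes the Stage-2 filter only at the level of extracting the strong (non-dominated) Pareto subset from the Stage-1 weak front. A minimal sketch of such a non-dominated filter, assuming minimization of every objective (the function name and interface are illustrative, not the paper's actual implementation):

```python
import numpy as np

def strong_pareto_filter(points):
    """Return the non-dominated (strong Pareto) subset of a set of
    objective vectors, assuming every objective is minimized.

    points: (n, m) array of n candidate points with m objectives.
    """
    points = np.asarray(points, dtype=float)
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        if not keep[i]:
            continue
        # q dominates p if q <= p in every objective and q < p in at least one
        dominated_by = np.all(points <= p, axis=1) & np.any(points < p, axis=1)
        if dominated_by.any():
            keep[i] = False
    return points[keep]
```

For example, among the points (1, 2), (2, 1), (2, 2), and (3, 3), the filter keeps only (1, 2) and (2, 1): the other two are dominated. This pairwise-domination check costs O(n^2 m), which is consistent with the filter being "low-cost" relative to training a neural network in Stage-1.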
