We introduce COCO, an open source platform for Comparing Continuous Optimizers in a black-box setting.
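For illustration only (this is not COCO's actual API): a minimal sketch of what such a comparison involves, running two toy optimizers on one black-box test function under a fixed evaluation budget. All function and optimizer names below are invented for the example.

```python
import numpy as np

def sphere(x):
    # Classic black-box test function: minimum 0 at the origin.
    return float(np.sum(x ** 2))

def random_search(f, dim, budget, rng):
    # Baseline: sample uniformly in the box, keep the best value seen.
    best = np.inf
    for _ in range(budget):
        best = min(best, f(rng.uniform(-5, 5, dim)))
    return best

def local_step_search(f, dim, budget, rng, sigma=0.5):
    # (1+1)-style local search: accept Gaussian perturbations that improve.
    x = rng.uniform(-5, 5, dim)
    fx = f(x)
    for _ in range(budget - 1):
        y = x + sigma * rng.standard_normal(dim)
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return fx

# Compare the two optimizers over several independent runs.
for name, opt in [("random search", random_search),
                  ("local step search", local_step_search)]:
    vals = [opt(sphere, 10, 1000, np.random.default_rng(s)) for s in range(5)]
    print(name, "median best f:", np.median(vals))
```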
We present a novel, efficient method that generates locally continuous Pareto sets and Pareto fronts, which opens up the possibility of continuous analysis of Pareto optimal solutions in machine learning problems.
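As a hedged illustration of what a continuous Pareto set looks like (not the paper's method): for two convex quadratic objectives, sweeping the scalarization weight traces the exact Pareto set in closed form, since the minimizer of the weighted sum is available analytically.

```python
import numpy as np

# Two convex quadratic objectives with distinct minimizers a and b.
a, b = np.array([0.0, 0.0]), np.array([1.0, 2.0])
f1 = lambda x: np.sum((x - a) ** 2)
f2 = lambda x: np.sum((x - b) ** 2)

# The minimizer of w*f1 + (1-w)*f2 is x* = w*a + (1-w)*b, so sweeping w
# in [0, 1] traces a continuous Pareto set (the segment from a to b)
# and the corresponding Pareto front.
for w in np.linspace(0, 1, 5):
    x_star = w * a + (1 - w) * b
    print(f"w={w:.2f}  Pareto point={x_star}  "
          f"front=({f1(x_star):.3f}, {f2(x_star):.3f})")
```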
Here, we tackle the problem of learning the entire Pareto front, with the capability of selecting a desired operating point on the front after training.
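A minimal sketch of the "select the operating point after training" idea, assuming a simple linear map from preference vectors to solutions (actual methods may use hypernetworks or preference-conditioned networks). The toy objectives are the same quadratics as in the previous sketch.

```python
import numpy as np

# Toy biobjective problem: f1(x) = ||x-a||^2, f2(x) = ||x-b||^2.
a, b = np.array([0.0, 0.0]), np.array([1.0, 2.0])
rng = np.random.default_rng(0)

# One parametric map x(w) = W @ [w, 1-w], trained so that for every
# sampled preference w it minimizes the scalarized loss w*f1 + (1-w)*f2.
W = rng.standard_normal((2, 2))
lr = 0.05
for _ in range(2000):
    w = rng.uniform()
    pref = np.array([w, 1 - w])
    x = W @ pref
    # Gradient of w*||x-a||^2 + (1-w)*||x-b||^2 w.r.t. x, chained to W.
    gx = 2 * w * (x - a) + 2 * (1 - w) * (x - b)
    W -= lr * np.outer(gx, pref)

# After training, any operating point on the front is obtained by
# plugging in the desired preference; x(w) should approach w*a + (1-w)*b.
for w in (0.0, 0.5, 1.0):
    print(w, W @ np.array([w, 1 - w]))
```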
Recently, a novel method was proposed to find a single Pareto-optimal solution with a good trade-off among different tasks by casting multi-task learning as multiobjective optimization.
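A sketch of the core step in such gradient-based methods for the two-task case (an MGDA-style min-norm combination; this closed form is standard, and the proposed method's details may differ):

```python
import numpy as np

def common_descent_direction(g1, g2):
    """Min-norm convex combination of two task gradients; its negation
    decreases both task losses when such a direction exists."""
    diff = g1 - g2
    denom = float(diff @ diff)
    if denom == 0.0:
        alpha = 0.5                       # gradients coincide
    else:
        # Closed-form solution of min_alpha ||alpha*g1 + (1-alpha)*g2||^2.
        alpha = float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    return alpha * g1 + (1 - alpha) * g2

g1 = np.array([1.0, 0.0])                 # gradient of task 1
g2 = np.array([0.0, 1.0])                 # gradient of task 2
d = common_descent_direction(g1, g2)
print(d, d @ g1, d @ g2)                  # positive dot products: -d descends both
```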
Most current state-of-the-art text-independent speaker verification systems use probabilistic linear discriminant analysis (PLDA) as their backend classifier.
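For orientation, a simplified two-covariance PLDA log-likelihood-ratio score (a sketch with toy, hand-set covariances; real backends estimate the between- and within-speaker covariances B and W from data, typically via EM, and apply it to centered speaker embeddings):

```python
import numpy as np
from scipy.stats import multivariate_normal

def plda_llr(x1, x2, B, W):
    """Log-likelihood ratio of 'same speaker' vs. 'different speakers'
    under the two-covariance model x = y + eps, y~N(0,B), eps~N(0,W)."""
    d = len(x1)
    T = B + W
    same = np.block([[T, B], [B, T]])     # x1, x2 share one latent speaker
    diff = np.block([[T, np.zeros((d, d))],
                     [np.zeros((d, d)), T]])
    z = np.concatenate([x1, x2])
    zero = np.zeros(2 * d)
    return (multivariate_normal.logpdf(z, mean=zero, cov=same)
            - multivariate_normal.logpdf(z, mean=zero, cov=diff))

rng = np.random.default_rng(0)
B = np.eye(2) * 2.0                       # toy between-speaker covariance
W = np.eye(2) * 0.5                       # toy within-speaker covariance
spk = rng.multivariate_normal(np.zeros(2), B)
x1 = spk + rng.multivariate_normal(np.zeros(2), W)   # same speaker
x2 = spk + rng.multivariate_normal(np.zeros(2), W)
x3 = rng.multivariate_normal(np.zeros(2), B + W)     # different speaker
print("same-speaker LLR:", plda_llr(x1, x2, B, W))
print("diff-speaker LLR:", plda_llr(x1, x3, B, W))
```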
We consider the problem of multi-objective (MO) black-box optimization using expensive function evaluations, where the goal is to approximate the true Pareto set of solutions while minimizing the number of function evaluations.
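One building block such methods rely on, sketched below: extracting the current nondominated (Pareto) approximation from the points evaluated so far, assuming all objectives are minimized. The surrogate-driven choice of the next expensive evaluation is method-specific and omitted.

```python
import numpy as np

def pareto_mask(F):
    """F: (n_points, n_objectives). True where no other point dominates,
    i.e. no other point is <= in all objectives and < in at least one."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask

F = np.array([[1.0, 4.0],
              [2.0, 2.0],
              [3.0, 3.0],                 # dominated by [2, 2]
              [4.0, 1.0]])
print(F[pareto_mask(F)])
```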
A suitable approximate multiplier is then selected for each computing element from a library of approximate multipliers in such a way that (i) one approximate multiplier serves several layers, and (ii) the overall classification error and energy consumption are minimized.
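A hypothetical sketch of that selection step; the library entries, per-group operation counts, and the additive error model below are all invented for the example (real flows estimate the error by simulating the network with each candidate multiplier).

```python
import itertools

# name: (energy per multiplication, estimated classification-error increase)
library = {
    "mul8_exact": (1.00, 0.000),
    "mul8_apx_a": (0.70, 0.004),
    "mul8_apx_b": (0.45, 0.015),
}
# Layers grouped so one multiplier serves several layers; ops per group.
layer_groups = {"early": 2.0e6, "late": 1.0e6}

# Enumerate assignments of one multiplier per group and score each.
candidates = []
for assign in itertools.product(library, repeat=len(layer_groups)):
    energy = sum(library[m][0] * ops
                 for m, ops in zip(assign, layer_groups.values()))
    error = sum(library[m][1] for m in assign)   # crude additive error model
    candidates.append((error, energy, dict(zip(layer_groups, assign))))

# Keep assignments that are Pareto-optimal in (error, energy).
pareto = [c for c in candidates
          if not any(o[0] <= c[0] and o[1] <= c[1]
                     and (o[0] < c[0] or o[1] < c[1])
                     for o in candidates)]
for err, en, assign in sorted(pareto, key=lambda c: (c[0], c[1])):
    print(f"error~{err:.3f}  energy~{en:.2e}  {assign}")
```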
We suggest a multiobjective perspective on the training of neural networks by treating prediction accuracy and network complexity as two individual objective functions in a biobjective optimization problem.
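A minimal sketch of the biobjective view, using a logistic-regression stand-in for the network and the L1 norm of the weights as the complexity proxy; sweeping the trade-off weight traces points of an approximate accuracy-vs-complexity front.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(float)   # only 2 informative inputs

def train(lam, lr=0.1, steps=500):
    """Minimize prediction loss + lam * L1 complexity by gradient descent."""
    w = np.zeros(10)
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w)))
        grad = X.T @ (p - y) / len(y) + lam * np.sign(w)
        w -= lr * grad
    p = 1 / (1 + np.exp(-(X @ w)))
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    return loss, np.sum(np.abs(w))

# Each trade-off weight yields one (loss, complexity) point of the front.
for lam in (0.0, 0.01, 0.1):
    loss, complexity = train(lam)
    print(f"lam={lam:<5} loss={loss:.3f}  L1 complexity={complexity:.2f}")
```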
We then conduct an in-depth analysis of quality evaluation indicators/methods and of common evaluation scenarios in SBSE, which, together with the identified issues, enables us to codify methodological guidance for selecting and using evaluation methods in different SBSE scenarios.
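As one concrete example of such a quality indicator (the guidance itself concerns when each indicator is appropriate): inverted generational distance (IGD), the mean distance from each point of a reference front to the closest point of the approximation front; lower is better.

```python
import numpy as np

def igd(reference, approximation):
    """Mean distance from each reference-front point to its nearest
    point on the approximation front (both arrays: (n, n_objectives))."""
    dists = np.linalg.norm(reference[:, None, :] - approximation[None, :, :],
                           axis=2)
    return dists.min(axis=1).mean()

reference = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
approx_good = np.array([[0.1, 0.9], [0.6, 0.45], [0.95, 0.1]])
approx_poor = np.array([[0.8, 0.9], [0.9, 0.8]])
print("good front IGD:", igd(reference, approx_good))
print("poor front IGD:", igd(reference, approx_poor))
```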
The goal of lossy data compression is to reduce the storage cost of a data set $X$ while retaining as much information as possible about something ($Y$) that you care about.
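The trade-off can be made concrete with mutual information, e.g. an information-bottleneck-style objective max I(Z;Y) - beta*I(Z;X) for a compressed representation Z of X (a standard formulation, not necessarily this paper's exact objective). Below, mutual information computed from a discrete joint probability table, which is the quantity such objectives are built from.

```python
import numpy as np

def mutual_information(p_joint):
    """I(X;Y) in bits from a joint probability table p_joint[x, y]."""
    px = p_joint.sum(axis=1, keepdims=True)
    py = p_joint.sum(axis=0, keepdims=True)
    nz = p_joint > 0                       # skip zero cells (0 * log 0 = 0)
    return float(np.sum(p_joint[nz] * np.log2((p_joint / (px * py))[nz])))

p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])              # X and Y correlated
print(mutual_information(p_xy))            # ~0.278 bits
```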