no code implementations • 22 Feb 2023 • Pierre-Alexandre Kamienny, Guillaume Lample, Sylvain Lamprier, Marco Virgolin
Symbolic regression (SR) is the problem of learning a symbolic expression from numerical data.
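As a toy illustration of the SR task itself (not of the method in the paper above), one can search a small space of candidate expressions and keep the one that best fits the data. The expression set below is hand-picked for illustration; real SR systems search a combinatorial expression space, e.g., with genetic programming or neural models.

```python
# Toy symbolic regression: score a handful of candidate expressions on
# data generated by a hidden target, and return the best-fitting one.
# This is a didactic sketch of the SR problem, not any published method.

# Data generated by the hidden target y = x**2 + x
xs = [-2.0, -1.0, 0.0, 1.0, 2.0, 3.0]
ys = [x**2 + x for x in xs]

# A tiny, hand-written pool of candidate expressions (assumption: in a
# real SR system these would be generated by a search procedure).
candidates = {
    "x": lambda x: x,
    "x + 1": lambda x: x + 1,
    "2 * x": lambda x: 2 * x,
    "x * x": lambda x: x * x,
    "x * x - x": lambda x: x * x - x,
    "x * x + x": lambda x: x * x + x,
}

def mse(f):
    """Mean squared error of expression f on the data."""
    return sum((f(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

best = min(candidates, key=lambda name: mse(candidates[name]))
print(best)  # -> x * x + x (zero error: it matches the hidden target)
```

The point of the sketch is only that SR outputs a human-readable formula, not just predictions.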
no code implementations • 3 Jul 2022 • Marco Virgolin, Solon P. Pissis
Symbolic regression (SR) is the task of learning a model of data in the form of a mathematical expression.
no code implementations • 26 Apr 2022 • Marco Virgolin, Peter A. N. Bosman
We find that coefficient mutation can substantially help in re-discovering the underlying equation, but only when no noise is added to the target variable.
no code implementations • 5 Apr 2022 • Marco Virgolin, Eric Medvet, Tanja Alderliesten, Peter A. N. Bosman
Interpretability can be critical for the safe and responsible use of machine learning models in high-stakes applications.
1 code implementation • 1 Mar 2022 • Thomas Uriot, Marco Virgolin, Tanja Alderliesten, Peter Bosman
We find that various GP methods can be competitive with state-of-the-art DR algorithms and that they have the potential to produce interpretable DR mappings.
no code implementations • 14 Feb 2022 • Dazhuang Liu, Marco Virgolin, Tanja Alderliesten, Peter A. N. Bosman
Genetic programming (GP) is one of the best approaches today to discover symbolic regression models.
no code implementations • 10 Feb 2022 • Marco Virgolin, Andrea De Lorenzo, Tanja Alderliesten, Peter A. N. Bosman
Our results indicate that adult data can be a meaningful augmentation of pediatric data for recognizing emotional facial expressions in children. This opens up the possibility of other applications of contrastive learning that improve pediatric care by complementing data of children with that of adults.
no code implementations • 7 Feb 2022 • Mattias Wahde, Marco Virgolin
In this chapter, we provide a review of conversational agents (CAs), discussing chatbots, intended for casual conversation with a user, as well as task-oriented agents that generally engage in discussions intended to reach one or several specific goals, often (but not always) within a specific domain.
2 code implementations • 22 Jan 2022 • Marco Virgolin, Saverio Fracaros
Since CEs typically prescribe a sparse form of intervention (i.e., only a subset of the features should be changed), we study the effect of addressing robustness separately for the features that are recommended to be changed and those that are not.
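To make the notion of a sparse counterfactual intervention concrete, here is a minimal hypothetical sketch (the threshold classifier, feature names, and candidate values are all illustrative assumptions, not from the paper): try interventions that change the fewest features first, and return the first one that flips the classifier's decision.

```python
from itertools import combinations, product

# Toy classifier: approve if income - debt >= 50 (purely illustrative).
def classify(x):
    return x["income"] - x["debt"] >= 50

# Hypothetical candidate values per feature for the intervention search.
interventions = {
    "income": [60, 80, 100],
    "debt": [0, 10, 20],
}

def sparse_counterfactual(x):
    # Prefer sparsity: try subsets of features in order of increasing size,
    # so only as few features as necessary are recommended to change.
    for k in range(1, len(interventions) + 1):
        for feats in combinations(interventions, k):
            for values in product(*(interventions[f] for f in feats)):
                cand = dict(x)
                cand.update(zip(feats, values))
                if classify(cand):
                    return cand
    return None

x = {"income": 40, "debt": 30}   # rejected: 40 - 30 < 50
cf = sparse_counterfactual(x)
print(cf)  # -> {'income': 80, 'debt': 30}: a single-feature change suffices
```

The sketch only illustrates sparsity; robustness of the changed vs. unchanged features, which the paper studies, would require additionally perturbing `cf` and checking that the decision still holds.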
no code implementations • 11 Sep 2021 • Arkadiy Dushatskiy, Marco Virgolin, Anton Bouter, Dirk Thierens, Peter A. N. Bosman
When it comes to solving optimization problems with evolutionary algorithms (EAs) in a reliable and scalable manner, detecting and exploiting linkage information, i.e., dependencies between variables, can be key.
no code implementations • 31 Aug 2021 • Mattias Wahde, Marco Virgolin
In this position paper, we present five key principles, namely interpretability, inherent capability to explain, independent data, interactive learning, and inquisitiveness, for the development of conversational AI that, unlike the currently popular black box approaches, is transparent and accountable.
4 code implementations • 29 Jul 2021 • William La Cava, Patryk Orzechowski, Bogdan Burlacu, Fabrício Olivetti de França, Marco Virgolin, Ying Jin, Michael Kommenda, Jason H. Moore
We assess 14 symbolic regression methods and 7 machine learning methods on a set of 252 diverse regression problems.
1 code implementation • 13 Apr 2021 • Marco Virgolin, Andrea De Lorenzo, Francesca Randone, Eric Medvet, Mattias Wahde
The latter is estimated by a neural network that is trained concurrently to the evolution using the feedback of the user, which is collected using uncertainty-based active learning.
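The query-selection step of uncertainty-based active learning can be illustrated with a tiny sketch (the candidate names and probabilities below are hypothetical, and this is not the paper's actual system): among unlabeled candidates, ask the user about the one whose model probability is closest to 0.5, i.e., where the model is least certain.

```python
# Toy uncertainty sampling: given (hypothetical) model probabilities for
# unlabeled candidates, select the most uncertain one to show the user.
candidates = {"expr_a": 0.92, "expr_b": 0.48, "expr_c": 0.15}

def most_uncertain(probs):
    # The closer a probability is to 0.5, the less certain the model is.
    return min(probs, key=lambda k: abs(probs[k] - 0.5))

print(most_uncertain(candidates))  # -> expr_b (|0.48 - 0.5| is smallest)
```

Labeling the most uncertain candidates first tends to give the model the most informative feedback per user query.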
3 code implementations • 13 Sep 2020 • Marco Virgolin
Experimental comparisons on classification and regression tasks taken and reproduced from prior studies show that our algorithm fares very well against state-of-the-art ensemble and non-ensemble GP algorithms.
3 code implementations • 23 Apr 2020 • Marco Virgolin, Andrea De Lorenzo, Eric Medvet, Francesca Randone
We show that it is instead possible to take a meta-learning approach: an ML model of non-trivial Proxies of Human Interpretability (PHIs) can be learned from human feedback, then this model can be incorporated within an ML training process to directly optimize for interpretability.
no code implementations • 9 Sep 2019 • Marco Virgolin, Ziyuan Wang, Tanja Alderliesten, Peter A. N. Bosman
To assess the effects of radiation therapy, treatment plans are typically simulated on phantoms, i.e., virtual surrogates of patient anatomy.
no code implementations • 4 Jul 2019 • Marco Virgolin, Tanja Alderliesten, Peter A. N. Bosman
In this article, we assess to what extent GP still performs favorably at feature construction when constructing features that are (1) of small-enough number, to enable visualization of the behavior of the ML model; (2) of small-enough size, to enable interpretability of the features themselves; (3) of sufficient informative power, to retain or even improve the performance of the ML algorithm.
1 code implementation • 3 Apr 2019 • Marco Virgolin, Tanja Alderliesten, Cees Witteveen, Peter A. N. Bosman
We show that the non-uniformity in the distribution of the genotype in GP populations negatively biases LL, and propose a method to correct for this.