Search Results for author: Marco Virgolin

Found 18 papers, 7 papers with code

Symbolic Regression is NP-hard

no code implementations · 3 Jul 2022 · Marco Virgolin, Solon P. Pissis

Symbolic regression (SR) is the task of learning a model of data in the form of a mathematical expression.
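As a toy sketch of that definition (illustrative only, not the paper's method or code), symbolic regression searches a space of candidate mathematical expressions for one that fits the data; here the search is reduced to picking the best of three hand-written candidates by mean squared error:

```python
import math

# Toy data generated by the (unknown-to-the-learner) target 2*x + sin(x).
data = [(x, 2.0 * x + math.sin(x)) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]

# Candidate expressions, represented as (label, function) pairs.
candidates = [
    ("2*x",          lambda x: 2.0 * x),
    ("x**2",         lambda x: x ** 2),
    ("2*x + sin(x)", lambda x: 2.0 * x + math.sin(x)),
]

def mse(f):
    """Mean squared error of a candidate expression on the data."""
    return sum((f(x) - y) ** 2 for x, y in data) / len(data)

best = min(candidates, key=lambda c: mse(c[1]))
print(best[0])  # the candidate matching the generator has zero error
```

A real SR system would search a combinatorial space of expression trees (which is what makes the problem hard) rather than a fixed candidate list.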

regression · Symbolic Regression

Coefficient Mutation in the Gene-pool Optimal Mixing Evolutionary Algorithm for Symbolic Regression

no code implementations · 26 Apr 2022 · Marco Virgolin, Peter A. N. Bosman

We find that coefficient mutation can help re-discovering the underlying equation by a substantial amount, but only when no noise is added to the target variable.

regression · Symbolic Regression

On genetic programming representations and fitness functions for interpretable dimensionality reduction

1 code implementation · 1 Mar 2022 · Thomas Uriot, Marco Virgolin, Tanja Alderliesten, Peter Bosman

We find that various GP methods can be competitive with state-of-the-art DR algorithms and that they have the potential to produce interpretable DR mappings.

Dimensionality Reduction

Adults as Augmentations for Children in Facial Emotion Recognition with Contrastive Learning

no code implementations · 10 Feb 2022 · Marco Virgolin, Andrea De Lorenzo, Tanja Alderliesten, Peter A. N. Bosman

Our results indicate that adult data can be a meaningful augmentation of pediatric data for recognizing emotional facial expressions in children. This opens up the possibility of other applications of contrastive learning that improve pediatric care by complementing data of children with that of adults.

Contrastive Learning · Data Augmentation · +1

Conversational Agents: Theory and Applications

no code implementations · 7 Feb 2022 · Mattias Wahde, Marco Virgolin

In this chapter, we provide a review of conversational agents (CAs), discussing both chatbots, which are intended for casual conversation with a user, and task-oriented agents, which generally engage in discussions intended to reach one or several specific goals, often (but not always) within a specific domain.

On the Robustness of Sparse Counterfactual Explanations to Adverse Perturbations

2 code implementations · 22 Jan 2022 · Marco Virgolin, Saverio Fracaros

Since CEs typically prescribe a sparse form of intervention (i.e., only a subset of the features should be changed), we study the effect of addressing robustness separately for the features that are recommended to be changed and those that are not.
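To make the sparsity notion concrete (a hypothetical example with made-up feature names, not the paper's code), a counterfactual explanation proposes a new instance that differs from the original only on a small subset of features, and robustness can then be examined separately for the changed and unchanged subsets:

```python
# Original instance x and its counterfactual explanation ce:
# the CE asks to change only a sparse subset of the features.
x  = {"age": 40, "income": 30_000, "debt": 12_000, "tenure": 5}
ce = {"age": 40, "income": 36_000, "debt": 12_000, "tenure": 5}

changed   = [k for k in x if ce[k] != x[k]]   # features the CE recommends changing
unchanged = [k for k in x if ce[k] == x[k]]   # features the CE keeps as they are

print(changed)    # only "income" is intervened upon
print(unchanged)  # the rest must merely stay robust to perturbations
```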

counterfactual · Counterfactual Explanation

Parameterless Gene-pool Optimal Mixing Evolutionary Algorithms

no code implementations · 11 Sep 2021 · Arkadiy Dushatskiy, Marco Virgolin, Anton Bouter, Dirk Thierens, Peter A. N. Bosman

When it comes to solving optimization problems with evolutionary algorithms (EAs) in a reliable and scalable manner, detecting and exploiting linkage information, i.e., dependencies between variables, can be key.
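As a generic illustration of linkage detection (not the algorithm of this paper), one common approach is to measure pairwise dependency between variables across a population of solutions, for instance with mutual information:

```python
import math
from collections import Counter

# A small population of binary solutions (strings of variable values).
population = ["0000", "0011", "1100", "1111", "0011", "1100"]

def mutual_information(i: int, j: int) -> float:
    """Empirical mutual information between variables i and j over the population."""
    n = len(population)
    pi = Counter(s[i] for s in population)
    pj = Counter(s[j] for s in population)
    pij = Counter((s[i], s[j]) for s in population)
    mi = 0.0
    for (a, b), c in pij.items():
        p_ab = c / n
        mi += p_ab * math.log(p_ab * n * n / (pi[a] * pj[b]))
    return mi

# Variables 2 and 3 always co-vary in this population; variables 1 and 2 do not,
# so the former pair exhibits stronger linkage.
print(mutual_information(2, 3) > mutual_information(1, 2))  # True
```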

Evolutionary Algorithms · Management

The five Is: Key principles for interpretable and safe conversational AI

no code implementations · 31 Aug 2021 · Mattias Wahde, Marco Virgolin

In this position paper, we present five key principles, namely interpretability, inherent capability to explain, independent data, interactive learning, and inquisitiveness, for the development of conversational AI that, unlike the currently popular black box approaches, is transparent and accountable.

Position

Model Learning with Personalized Interpretability Estimation (ML-PIE)

1 code implementation · 13 Apr 2021 · Marco Virgolin, Andrea De Lorenzo, Francesca Randone, Eric Medvet, Mattias Wahde

The latter is estimated by a neural network that is trained concurrently to the evolution using the feedback of the user, which is collected using uncertainty-based active learning.

Active Learning

Genetic Programming is Naturally Suited to Evolve Bagging Ensembles

3 code implementations · 13 Sep 2020 · Marco Virgolin

Experimental comparisons on classification and regression tasks taken and reproduced from prior studies show that our algorithm fares very well against state-of-the-art ensemble and non-ensemble GP algorithms.

Learning a Formula of Interpretability to Learn Interpretable Formulas

3 code implementations · 23 Apr 2020 · Marco Virgolin, Andrea De Lorenzo, Eric Medvet, Francesca Randone

We show that it is instead possible to take a meta-learning approach: an ML model of non-trivial Proxies of Human Interpretability (PHIs) can be learned from human feedback, then this model can be incorporated within an ML training process to directly optimize for interpretability.
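A generic sketch of that idea (the proxy weights and feature counts below are made up, not the paper's learned model): a learned Proxy of Human Interpretability scores a candidate formula, and the training objective trades prediction error against predicted uninterpretability:

```python
# Pretend-learned proxy of human interpretability: smaller/simpler
# formulas score higher in [0, 1]. Coefficients are illustrative only.
def proxy_interpretability(n_ops: int, n_nonarith: int) -> float:
    return max(0.0, 1.0 - 0.05 * n_ops - 0.15 * n_nonarith)

def objective(error: float, n_ops: int, n_nonarith: int, lam: float = 0.5) -> float:
    # Lower is better: combine error with predicted uninterpretability.
    return error + lam * (1.0 - proxy_interpretability(n_ops, n_nonarith))

# A slightly less accurate but much simpler formula can win the trade-off.
complex_f = objective(error=0.10, n_ops=12, n_nonarith=3)
simple_f  = objective(error=0.15, n_ops=3,  n_nonarith=0)
print(simple_f < complex_f)  # True
```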

Meta-Learning · regression · +1

Machine learning for automatic construction of pseudo-realistic pediatric abdominal phantoms

no code implementations · 9 Sep 2019 · Marco Virgolin, Ziyuan Wang, Tanja Alderliesten, Peter A. N. Bosman

To assess the effects of radiation therapy, treatment plans are typically simulated on phantoms, i.e., virtual surrogates of patient anatomy.

Anatomy · BIG-bench Machine Learning · +1

On Explaining Machine Learning Models by Evolving Crucial and Compact Features

no code implementations · 4 Jul 2019 · Marco Virgolin, Tanja Alderliesten, Peter A. N. Bosman

In this article, we assess to what extent GP still performs favorably at feature construction when constructing features that are (1) of small-enough number, to enable visualization of the behavior of the ML model; (2) of small-enough size, to enable interpretability of the features themselves; and (3) of sufficient informative power, to retain or even improve the performance of the ML algorithm.

BIG-bench Machine Learning

Improving Model-based Genetic Programming for Symbolic Regression of Small Expressions

1 code implementation · 3 Apr 2019 · Marco Virgolin, Tanja Alderliesten, Cees Witteveen, Peter A. N. Bosman

We show that the non-uniformity in the distribution of the genotype in GP populations negatively biases LL, and propose a method to correct for this.

regression · Symbolic Regression
