Search Results for author: Francesco Alesiani

Found 18 papers, 7 papers with code

PDEBENCH: An Extensive Benchmark for Scientific Machine Learning

2 code implementations • 13 Oct 2022 • Makoto Takamoto, Timothy Praditia, Raphael Leiteritz, Dan MacKinlay, Francesco Alesiani, Dirk Pflüger, Mathias Niepert

With these metrics, we identify tasks that are challenging for recent ML methods and propose them as future challenges for the community.

Learning Neural PDE Solvers with Parameter-Guided Channel Attention

2 code implementations • 27 Apr 2023 • Makoto Takamoto, Francesco Alesiani, Mathias Niepert

The experiments also show several advantages of CAPE, such as its increased ability to generalize to unseen PDE parameters without a large increase in inference time and parameter count.

PDE Surrogate Modeling • Weather Forecasting

Principle of Relevant Information for Graph Sparsification

1 code implementation • 31 May 2022 • Shujian Yu, Francesco Alesiani, Wenzhe Yin, Robert Jenssen, Jose C. Principe

Graph sparsification aims to reduce the number of edges of a graph while maintaining its structural properties.
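As a toy illustration of the task (a generic sketch, not the Principle of Relevant Information method from the paper), one can reduce a graph to a spanning tree: connectivity, a basic structural property, is preserved while the edge count drops to at most n - 1. The function name and edge-list representation below are illustrative choices.

```python
def sparsify_to_spanning_tree(num_nodes, edges):
    """Keep a minimal subset of edges that preserves connectivity (union-find)."""
    parent = list(range(num_nodes))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path compression
            u = parent[u]
        return u

    kept = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:            # edge joins two components: keep it
            parent[ru] = rv
            kept.append((u, v))
    return kept                 # discarded edges were redundant for connectivity
```

Real sparsification methods, including the information-theoretic one above, instead score edges by how much structural information they carry and keep the most relevant ones.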

Multi-Task Learning

Gated Information Bottleneck for Generalization in Sequential Environments

1 code implementation • 12 Oct 2021 • Francesco Alesiani, Shujian Yu, Xi Yu

By learning minimum sufficient representations from training data, the information bottleneck (IB) approach has demonstrated its effectiveness in improving generalization across different AI applications.
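For context, the standard IB objective (the classical formulation, not the gated variant proposed in this paper) learns a representation $T$ of input $X$ that is maximally compressive while staying predictive of target $Y$:

```latex
\min_{p(t \mid x)} \; I(X; T) - \beta \, I(T; Y)
```

Here $I(\cdot;\cdot)$ is mutual information and $\beta > 0$ trades off compression against prediction.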

Adversarial Robustness • Out of Distribution (OOD) Detection

Towards Interpretable Multi-Task Learning Using Bilevel Programming

no code implementations • 11 Sep 2020 • Francesco Alesiani, Shujian Yu, Ammar Shaker, Wenzhe Yin

Interpretable Multi-Task Learning can be expressed as learning a sparse graph of the task relationship based on the prediction performance of the learned models.

Multi-Task Learning

Learning an Interpretable Graph Structure in Multi-Task Learning

no code implementations • 11 Sep 2020 • Shujian Yu, Francesco Alesiani, Ammar Shaker, Wenzhe Yin

We present a novel methodology to jointly perform multi-task learning and infer intrinsic relationship among tasks by an interpretable and sparse graph.

Multi-Task Learning

Bilevel Continual Learning

no code implementations • 2 Nov 2020 • Ammar Shaker, Francesco Alesiani, Shujian Yu, Wenzhe Yin

This paper presents Bilevel Continual Learning (BiCL), a general framework for continual learning that fuses bilevel optimization and recent advances in meta-learning for deep neural networks.

Bilevel Optimization • Continual Learning

Modular-Relatedness for Continual Learning

no code implementations • 2 Nov 2020 • Ammar Shaker, Shujian Yu, Francesco Alesiani

In this paper, we propose a continual learning (CL) technique that is beneficial to sequential task learners by improving their retained accuracy and reducing catastrophic forgetting.

Continual Learning

BiGrad: Differentiating through Bilevel Optimization Programming

no code implementations • AAAI Workshop AdvML 2022 • Francesco Alesiani

We describe a class of gradient estimators for the combinatorial case which reduces the requirements in terms of computational complexity; for the continuous-variable case, the gradient computation takes advantage of the push-back approach (i.e., the vector-Jacobian product) for an efficient implementation.
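The vector-Jacobian product at the heart of the push-back approach can be illustrated with a toy linear map (a generic reverse-mode sketch in pure Python, not the BiGrad implementation): for y = Wx, the Jacobian is W itself, and the VJP with cotangent vector v returns v^T J without ever materializing J separately.

```python
def matvec(W, x):
    """Forward map: y_i = sum_j W[i][j] * x[j]."""
    return [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(W))]

def matvec_vjp(W, v):
    """Vector-Jacobian product v^T J for y = W x; here J = W, so this is v^T W."""
    n = len(W[0])
    return [sum(v[i] * W[i][j] for i in range(len(W))) for j in range(n)]
```

Reverse-mode autodiff frameworks compose such VJPs through a computation graph, which is what makes differentiating through the inner problem of a bilevel program tractable.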

BIG-bench Machine Learning • Bilevel Optimization

Human-Centric Research for NLP: Towards a Definition and Guiding Questions

no code implementations • 10 Jul 2022 • Bhushan Kotnis, Kiril Gashteovski, Julia Gastinger, Giuseppe Serra, Francesco Alesiani, Timo Sztyler, Ammar Shaker, Na Gong, Carolin Lawrence, Zhao Xu

With Human-Centric Research (HCR) we can steer research activities so that the research outcome is beneficial for human stakeholders, such as end users.

Implicit Bilevel Optimization: Differentiating through Bilevel Optimization Programming

no code implementations • 28 Feb 2023 • Francesco Alesiani

Integrating bilevel mathematical programming within deep learning is thus an essential objective for the Machine Learning community.

Bilevel Optimization • Privacy Preserving

Self-Tuning Hamiltonian Monte Carlo for Accelerated Sampling

no code implementations • 24 Sep 2023 • Henrik Christiansen, Federico Errica, Francesco Alesiani

In the case of alanine dipeptide, by tuning the only free parameter of our loss definition we find a good correspondence between it and the autocorrelation times, resulting in a $>100$-fold speed-up in the optimization of simulation parameters compared to a grid search.

Continual Invariant Risk Minimization

no code implementations • 21 Oct 2023 • Francesco Alesiani, Shujian Yu, Mathias Niepert

Invariant risk minimization (IRM) is a recent proposal for discovering environment-invariant representations.

Continual Learning

Uncertainty-biased molecular dynamics for learning uniformly accurate interatomic potentials

no code implementations • 3 Dec 2023 • Viktor Zaverkin, David Holzmüller, Henrik Christiansen, Federico Errica, Francesco Alesiani, Makoto Takamoto, Mathias Niepert, Johannes Kästner

Existing biased and unbiased MD simulations, however, are prone to missing either rare events or extrapolative regions -- areas of configurational space where unreliable predictions are made.

Active Learning
