no code implementations • 14 Nov 2018 • Xiao He, Francesco Alesiani, Ammar Shaker
Scaling MTL methods to problems with a very large number of tasks remains a significant challenge.
1 code implementation • 5 May 2020 • Shujian Yu, Ammar Shaker, Francesco Alesiani, Jose C. Principe
We propose a simple yet powerful test statistic to quantify the discrepancy between two conditional distributions.
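For intuition, here is a minimal sketch of one way to compare two conditional distributions p(y|x) with kernels, via regularized conditional mean embeddings. It illustrates the general idea only and is not the paper's proposed statistic; the function names and the `gamma`/`lam` hyperparameters are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian kernel on pairwise squared distances.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def conditional_discrepancy(x1, y1, x2, y2, gamma=1.0, lam=1e-3):
    """Average squared RKHS distance between two estimated conditional
    mean embeddings of Y given X (a generic kernel construction, not the
    exact test statistic proposed in the paper)."""
    n1, n2 = len(x1), len(x2)
    # Regularized weights mapping kernel evaluations at x to embeddings of Y|X=x.
    W1 = np.linalg.solve(rbf_kernel(x1, x1, gamma) + lam * np.eye(n1), np.eye(n1))
    W2 = np.linalg.solve(rbf_kernel(x2, x2, gamma) + lam * np.eye(n2), np.eye(n2))
    # Evaluate both conditional embeddings on a common grid of inputs.
    xs = np.vstack([x1, x2])
    A1 = rbf_kernel(xs, x1, gamma) @ W1      # coefficients over sample 1
    A2 = rbf_kernel(xs, x2, gamma) @ W2      # coefficients over sample 2
    G11, G22 = rbf_kernel(y1, y1, gamma), rbf_kernel(y2, y2, gamma)
    G12 = rbf_kernel(y1, y2, gamma)
    # ||mu1(x) - mu2(x)||^2 averaged over the grid points.
    sq = (np.einsum('ij,jk,ik->i', A1, G11, A1)
          + np.einsum('ij,jk,ik->i', A2, G22, A2)
          - 2 * np.einsum('ij,jk,ik->i', A1, G12, A2))
    return sq.mean()
```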
no code implementations • 11 Sep 2020 • Francesco Alesiani, Shujian Yu, Ammar Shaker, Wenzhe Yin
Interpretable Multi-Task Learning can be expressed as learning a sparse graph of the task relationships based on the prediction performance of the learned models.
no code implementations • 11 Sep 2020 • Shujian Yu, Francesco Alesiani, Ammar Shaker, Wenzhe Yin
We present a novel methodology to jointly perform multi-task learning and infer the intrinsic relationships among tasks through an interpretable, sparse graph.
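A hedged sketch of the general pattern (not the paper's exact objective): per-task linear models are fit jointly with a task-relation graph, an L1 term keeps the graph sparse and hence interpretable, and strongly related tasks are pulled toward similar parameters. The names `Xs`/`ys` are hypothetical lists of per-task data tensors.

```python
import torch

def mtl_sparse_graph(Xs, ys, epochs=500, lr=1e-2, rho=0.1, beta=0.01):
    T, d = len(Xs), Xs[0].shape[1]
    W = torch.zeros(T, d, requires_grad=True)    # per-task linear models
    A = torch.zeros(T, T, requires_grad=True)    # task-relation graph
    opt = torch.optim.Adam([W, A], lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        fit = sum(((X @ W[t] - y) ** 2).mean()
                  for t, (X, y) in enumerate(zip(Xs, ys)))
        diffs = ((W[:, None, :] - W[None, :, :]) ** 2).sum(-1)   # (T, T)
        relation = (A.abs() * diffs).sum()       # related tasks share parameters
        sparsity = A.abs().sum()                 # sparse, readable graph
        (fit + rho * relation + beta * sparsity).backward()
        opt.step()
    return W.detach(), A.detach()
```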
no code implementations • 2 Nov 2020 • Ammar Shaker, Francesco Alesiani, Shujian Yu, Wenzhe Yin
This paper presents Bilevel Continual Learning (BiCL), a general framework for continual learning that fuses bilevel optimization and recent advances in meta-learning for deep neural networks.
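A sketch of the generic bilevel/meta-learning update such a framework builds on (not BiCL itself): the inner problem adapts "fast" weights to the current task, and the outer objective is differentiated through that unrolled inner step. Here `inner_loss` and `outer_loss` are hypothetical closures mapping a parameter list to a scalar loss.

```python
import torch

def bilevel_step(params, inner_loss, outer_loss, inner_lr=0.1):
    # Inner problem: one gradient step on the current task's loss,
    # keeping the graph so the outer gradient can flow through it.
    grads = torch.autograd.grad(inner_loss(params), params, create_graph=True)
    fast = [p - inner_lr * g for p, g in zip(params, grads)]
    # Outer problem: backprop through this value reaches `params`.
    return outer_loss(fast)
```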
no code implementations • 2 Nov 2020 • Ammar Shaker, Shujian Yu, Francesco Alesiani
In this paper, we propose a continual learning (CL) technique that is beneficial to sequential task learners by improving their retained accuracy and reducing catastrophic forgetting.
1 code implementation • 25 Jan 2021 • Shujian Yu, Francesco Alesiani, Xi Yu, Robert Jenssen, Jose C. Principe
Measuring the dependence of data plays a central role in statistics and machine learning.
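As background, the classical kernel route to dependence measurement is the empirical HSIC below; it is shown as a generic reference point, not the matrix-based estimator the paper studies.

```python
import numpy as np

def hsic(X, Y, gamma=1.0):
    """Biased empirical HSIC with Gaussian kernels: zero iff (asymptotically)
    X and Y are independent, for characteristic kernels."""
    n = len(X)
    def K(A):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(A**2, 1)[None, :] - 2 * A @ A.T
        return np.exp(-gamma * d2)
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    return np.trace(K(X) @ H @ K(Y) @ H) / (n - 1) ** 2
```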
1 code implementation • 12 Oct 2021 • Francesco Alesiani, Shujian Yu, Xi Yu
By learning minimum sufficient representations from training data, the information bottleneck (IB) approach has demonstrated its effectiveness in improving generalization across different AI applications.
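A minimal variational IB training objective in the spirit of Alemi et al.'s VIB, shown as a generic illustration (the paper's exact bound may differ). The `encoder` and `classifier` modules are hypothetical; the encoder returns the mean and log-variance of a stochastic code z.

```python
import torch
import torch.nn.functional as F

def ib_loss(encoder, classifier, x, y, beta=1e-3):
    mu, logvar = encoder(x)
    # Reparameterized stochastic representation z ~ N(mu, sigma^2).
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    # KL(N(mu, sigma^2) || N(0, I)) upper-bounds the compression term I(X; Z).
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=1).mean()
    return F.cross_entropy(classifier(z), y) + beta * kl
```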
no code implementations • AAAI Workshop AdvML 2022 • Francesco Alesiani
We describe a class of gradient estimators for the combinatorial case that reduces the computational complexity requirements; for the continuous-variable case, the gradient computation exploits the push-back approach (i.e., vector-Jacobian products) for an efficient implementation.
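The push-back mechanism referred to here is the standard reverse-mode primitive: rather than materializing the full Jacobian J of f at x, autodiff computes v^T J in a single backward pass. A minimal PyTorch illustration (the toy function `f` is ours):

```python
import torch

def vjp(f, x, v):
    x = x.detach().requires_grad_(True)
    y = f(x)
    (grad,) = torch.autograd.grad(y, x, grad_outputs=v)
    return grad

f = lambda x: torch.stack([x[0] * x[1], x[0] ** 2])
x = torch.tensor([2.0, 3.0])
v = torch.tensor([1.0, 1.0])
print(vjp(f, x, v))   # v^T J = [7., 2.], without ever forming J explicitly
```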
1 code implementation • 31 May 2022 • Shujian Yu, Francesco Alesiani, Wenzhe Yin, Robert Jenssen, Jose C. Principe
Graph sparsification aims to reduce the number of edges of a graph while maintaining its structural properties.
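A classical baseline for this problem (not the paper's learned approach): spectral sparsification that keeps edges sampled in proportion to their effective resistance, which tends to preserve the graph's structural properties. This simplified version samples a fixed fraction of edges without replacement.

```python
import numpy as np
import networkx as nx

def resistance_sparsify(G, keep_ratio=0.5, seed=0):
    nodes = list(G)
    pos = {u: i for i, u in enumerate(nodes)}
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
    Lp = np.linalg.pinv(L)                       # Laplacian pseudoinverse
    edges = list(G.edges())
    # Effective resistance of edge (u, v): Lp_uu + Lp_vv - 2 Lp_uv.
    r = np.array([Lp[pos[u], pos[u]] + Lp[pos[v], pos[v]] - 2 * Lp[pos[u], pos[v]]
                  for u, v in edges])
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(edges), size=int(keep_ratio * len(edges)),
                     replace=False, p=r / r.sum())
    H = nx.Graph()
    H.add_nodes_from(nodes)
    H.add_edges_from(edges[i] for i in idx)
    return H
```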
no code implementations • 10 Jul 2022 • Bhushan Kotnis, Kiril Gashteovski, Julia Gastinger, Giuseppe Serra, Francesco Alesiani, Timo Sztyler, Ammar Shaker, Na Gong, Carolin Lawrence, Zhao Xu
With Human-Centric Research (HCR), we can steer research activities so that their outcomes benefit human stakeholders, such as end users.
2 code implementations • 13 Oct 2022 • Makoto Takamoto, Timothy Praditia, Raphael Leiteritz, Dan MacKinlay, Francesco Alesiani, Dirk Pflüger, Mathias Niepert
With these metrics, we identify tasks that are challenging for recent ML methods and propose them as future challenges for the community.
no code implementations • 28 Feb 2023 • Francesco Alesiani
Integrating bilevel mathematical programming within deep learning is thus an essential objective for the Machine Learning community.
2 code implementations • 27 Apr 2023 • Makoto Takamoto, Francesco Alesiani, Mathias Niepert
The experiments also show several advantages of CAPE, such as its increased ability to generalize to unseen PDE parameters without large increases in inference time and parameter count.
no code implementations • 24 Sep 2023 • Henrik Christiansen, Federico Errica, Francesco Alesiani
In the case of alanine dipeptide, by tuning the only free parameter of our loss definition, we find a good correspondence between it and the autocorrelation times, resulting in a $>100$-fold speed-up in the optimization of simulation parameters compared to a grid search.
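For reference, the autocorrelation times mentioned above can be estimated with the standard integrated-autocorrelation estimator sketched below; this is a generic estimator, not the paper's loss, and the first-negative-value cutoff is one common heuristic among several.

```python
import numpy as np

def integrated_act(x, window=None):
    """Integrated autocorrelation time of a scalar time series x."""
    x = np.asarray(x, float) - np.mean(x)
    acf = np.correlate(x, x, mode='full')[len(x) - 1:]   # lags 0..n-1
    acf /= acf[0]                                        # normalize to acf[0] = 1
    # Truncate at the first negative value unless a window is given.
    window = window or next((i for i, a in enumerate(acf) if a < 0), len(acf))
    return 1 + 2 * np.sum(acf[1:window])
```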
no code implementations • 21 Oct 2023 • Francesco Alesiani, Shujian Yu, Mathias Niepert
Invariant risk minimization (IRM) is a recent proposal for discovering environment-invariant representations.
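The standard IRMv1 formulation (Arjovsky et al.) that IRM-based methods start from penalizes, for each environment, the gradient of the risk with respect to a fixed dummy classifier scale w = 1.0; the penalty vanishes when one classifier is simultaneously optimal in every environment. A minimal binary-classification version:

```python
import torch
import torch.nn.functional as F

def irmv1_penalty(logits, y):
    # Dummy scalar classifier scale; only its gradient is used.
    w = torch.tensor(1.0, requires_grad=True)
    risk = F.binary_cross_entropy_with_logits(logits * w, y.float())
    (grad_w,) = torch.autograd.grad(risk, w, create_graph=True)
    return grad_w.pow(2)

# Full objective: sum over environments e of risk_e + lam * irmv1_penalty_e.
```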
no code implementations • 3 Dec 2023 • Viktor Zaverkin, David Holzmüller, Henrik Christiansen, Federico Errica, Francesco Alesiani, Makoto Takamoto, Mathias Niepert, Johannes Kästner
Existing biased and unbiased MD simulations, however, are prone to miss either rare events or extrapolative regions, i.e., areas of the configurational space where unreliable predictions are made.
1 code implementation • 27 Dec 2023 • Federico Errica, Henrik Christiansen, Viktor Zaverkin, Takashi Maruyama, Mathias Niepert, Francesco Alesiani
Long-range interactions are essential for the correct description of complex systems in many scientific fields.