no code implementations • 28 Feb 2023 • Francesco Alesiani
Integrating bilevel mathematical programming within deep learning is thus an essential objective for the Machine Learning community.
1 code implementation • 13 Oct 2022 • Makoto Takamoto, Timothy Praditia, Raphael Leiteritz, Dan MacKinlay, Francesco Alesiani, Dirk Pflüger, Mathias Niepert
With these metrics, we identify tasks that are challenging for recent ML methods and propose them as future challenges for the community.
no code implementations • 10 Jul 2022 • Bhushan Kotnis, Kiril Gashteovski, Julia Gastinger, Giuseppe Serra, Francesco Alesiani, Timo Sztyler, Ammar Shaker, Na Gong, Carolin Lawrence, Zhao Xu
With Human-Centric Research (HCR) we can steer research activities so that the research outcome is beneficial for human stakeholders, such as end users.
1 code implementation • 31 May 2022 • Shujian Yu, Francesco Alesiani, Wenzhe Yin, Robert Jenssen, Jose C. Principe
Graph sparsification aims to reduce the number of edges of a graph while maintaining its structural properties.
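As a minimal illustration of the idea (not the method of this paper), the sketch below prunes a weighted graph down to a spanning forest plus its heaviest remaining edges, so connectivity survives while the edge count shrinks; `sparsify`, its arguments, and the union-find helper are illustrative names, not from the paper.

```python
def sparsify(n, edges, keep):
    """Keep a spanning forest (preserving connectivity) plus the `keep`
    heaviest remaining edges. edges: list of (u, v, weight) tuples."""
    parent = list(range(n))

    def find(a):
        # union-find root lookup with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    kept, rest = [], []
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            kept.append((u, v, w))   # needed for connectivity
        else:
            rest.append((u, v, w))   # redundant edge, candidate for pruning
    return kept + rest[:keep]
```

With `keep=0` this degenerates to a maximum-weight spanning forest; larger `keep` trades size for structural fidelity.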
no code implementations • AAAI Workshop AdvML 2022 • Francesco Alesiani
We describe a class of gradient estimators for the combinatorial case that reduces the computational complexity requirements; for the continuous-variable case, the gradient computation takes advantage of the push-back approach (i.e., the vector-Jacobian product) for an efficient implementation.
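To make the vector-Jacobian product concrete, here is a toy sketch (the function and names are illustrative, not from the paper): for a small function whose Jacobian is known in closed form, v^T J can be evaluated directly without ever materializing J, which is the operation reverse-mode autodiff frameworks perform.

```python
def f(x):
    # toy vector function f: R^2 -> R^2
    x0, x1 = x
    return [x0 * x1, x0 + x1]

def vjp(x, v):
    # vector-Jacobian product v^T J evaluated without forming J.
    # Here J = [[x1, x0], [1, 1]], so v^T J = [v0*x1 + v1, v0*x0 + v1].
    x0, x1 = x
    return [v[0] * x1 + v[1], v[0] * x0 + v[1]]

print(vjp([2.0, 3.0], [1.0, 0.0]))  # first row of J: [3.0, 2.0]
```

Choosing `v` as a basis vector extracts one row of the Jacobian at the cost of a single backward pass, which is why this primitive scales to high-dimensional inputs.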
1 code implementation • 12 Oct 2021 • Francesco Alesiani, Shujian Yu, Xi Yu
By learning minimum sufficient representations from training data, the information bottleneck (IB) approach has demonstrated its effectiveness in improving generalization across different AI applications.
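The IB objective trades off compression I(X;Z) against relevance I(Z;Y). As a sketch of the quantity both terms share, the snippet below estimates mutual information for discrete samples from empirical joint and marginal counts (illustrative code, not the paper's estimator):

```python
from math import log2
from collections import Counter

def mutual_information(pairs):
    # I(X;Z) from an empirical joint distribution of (x, z) samples,
    # in bits: sum_{x,z} p(x,z) * log2(p(x,z) / (p(x) p(z)))
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    pz = Counter(z for _, z in pairs)
    mi = 0.0
    for (x, z), c in joint.items():
        pxz = c / n
        mi += pxz * log2(pxz * n * n / (px[x] * pz[z]))
    return mi

print(mutual_information([(0, 0), (1, 1)]))  # fully dependent: 1.0 bit
```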
Tasks: Adversarial Robustness, Out of Distribution (OOD) Detection, +1
1 code implementation • 25 Jan 2021 • Shujian Yu, Francesco Alesiani, Xi Yu, Robert Jenssen, Jose C. Principe
Measuring the dependence of data plays a central role in statistics and machine learning.
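Classical correlation can miss nonlinear dependence; measures such as distance covariance do not. As a hedged illustration of the problem setting (this is the standard sample distance covariance, not the estimator proposed in the paper):

```python
def dcov(x, y):
    # sample distance covariance (V-statistic): zero iff the
    # empirical double-centered distance matrices are uncorrelated
    n = len(x)

    def centered(v):
        d = [[abs(a - b) for b in v] for a in v]
        row = [sum(r) / n for r in d]
        tot = sum(row) / n
        return [[d[i][j] - row[i] - row[j] + tot for j in range(n)]
                for i in range(n)]

    A, B = centered(x), centered(y)
    return (sum(A[i][j] * B[i][j]
                for i in range(n) for j in range(n)) / n ** 2) ** 0.5

x = [-2.0, -1.0, 0.0, 1.0, 2.0]
print(dcov(x, [v * v for v in x]))  # nonlinear dependence: clearly positive
```

For `y = x**2` with symmetric `x`, Pearson correlation is zero while `dcov` is not, which is why dependence measures beyond correlation matter.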
no code implementations • 1 Jan 2021 • Francesco Alesiani, Shujian Yu, Mathias Niepert
Empirical risk minimization can lead to poor generalization behaviour on unseen environments if the learned model does not capture invariant feature representations.
no code implementations • 2 Nov 2020 • Ammar Shaker, Shujian Yu, Francesco Alesiani
In this paper, we propose a continual learning (CL) technique that is beneficial to sequential task learners by improving their retained accuracy and reducing catastrophic forgetting.
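One widely used way to reduce catastrophic forgetting (shown here as a generic illustration, not necessarily this paper's technique) is an EWC-style quadratic penalty that anchors parameters to the previous task's optimum, weighted by a per-parameter importance estimate; all names below are hypothetical.

```python
def ewc_penalty(theta, theta_star, fisher, lam):
    # quadratic penalty lam * sum_i F_i * (theta_i - theta*_i)^2
    # anchoring current parameters to the previous-task optimum theta*,
    # with diagonal Fisher values F_i as importance weights
    return lam * sum(f * (t - ts) ** 2
                     for t, ts, f in zip(theta, theta_star, fisher))
```

Added to the new task's loss, this term lets unimportant parameters (small `F_i`) move freely while important ones stay close to their old values.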
no code implementations • 2 Nov 2020 • Ammar Shaker, Francesco Alesiani, Shujian Yu, Wenzhe Yin
This paper presents Bilevel Continual Learning (BiCL), a general framework for continual learning that fuses bilevel optimization and recent advances in meta-learning for deep neural networks.
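The bilevel structure can be sketched on a scalar toy problem (all values and names illustrative; this is a MAML-style one-step unrolling, not BiCL itself): the inner problem takes a gradient step on its own loss, and the outer problem differentiates through that step to update its variable.

```python
def bilevel_step(lam, w0, target, eta_in=0.1, eta_out=1.0):
    # inner problem: one gradient step on the inner loss (w - lam)^2
    w1 = w0 - eta_in * 2.0 * (w0 - lam)
    # outer loss on the adapted weight: (w1 - target)^2
    # hypergradient: dL/dlam = 2*(w1 - target) * dw1/dlam, dw1/dlam = 2*eta_in
    grad_lam = 2.0 * (w1 - target) * (2.0 * eta_in)
    return lam - eta_out * grad_lam, w1
```

Iterating this outer update drives the adapted inner weight `w1` toward the outer `target`, even though the outer variable `lam` only influences it indirectly through the inner optimization step.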
no code implementations • 11 Sep 2020 • Shujian Yu, Francesco Alesiani, Ammar Shaker, Wenzhe Yin
We present a novel methodology to jointly perform multi-task learning and infer the intrinsic relationships among tasks via an interpretable and sparse graph.
no code implementations • 11 Sep 2020 • Francesco Alesiani, Shujian Yu, Ammar Shaker, Wenzhe Yin
Interpretable Multi-Task Learning can be expressed as learning a sparse graph of the task relationship based on the prediction performance of the learned models.
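Sparsity in a task-relationship graph is commonly induced with an L1 penalty, whose proximal operator is soft-thresholding; the sketch below is a generic illustration of that operator (not the paper's algorithm), applied entrywise to a task-relationship weight matrix.

```python
def soft_threshold(w, lam):
    # proximal operator of lam * ||.||_1: shrinks every entry toward
    # zero by lam and sets entries with |v| <= lam exactly to zero,
    # which is what makes the learned graph sparse
    return [[max(abs(v) - lam, 0.0) * (1.0 if v > 0 else -1.0) for v in row]
            for row in w]
```

Entries surviving the threshold define the edges of the task graph; the threshold `lam` directly controls how sparse (and how interpretable) the graph is.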
1 code implementation • 5 May 2020 • Shujian Yu, Ammar Shaker, Francesco Alesiani, Jose C. Principe
We propose a simple yet powerful test statistic to quantify the discrepancy between two conditional distributions.
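As a related (but hedged: marginal, not conditional, and not the paper's statistic) example of a kernel two-sample discrepancy, the biased squared maximum mean discrepancy with an RBF kernel is near zero when the two samples come from the same distribution and positive otherwise.

```python
from math import exp

def mmd2(xs, ys, gamma=1.0):
    # biased squared MMD with an RBF kernel k(a,b) = exp(-gamma*(a-b)^2):
    # E[k(x,x')] + E[k(y,y')] - 2*E[k(x,y)] over the empirical samples
    k = lambda a, b: exp(-gamma * (a - b) ** 2)

    def avg(u, v):
        return sum(k(a, b) for a in u for b in v) / (len(u) * len(v))

    return avg(xs, xs) + avg(ys, ys) - 2.0 * avg(xs, ys)
```

A permutation test on this statistic gives a nonparametric two-sample test; conditional variants, as in the paper, additionally condition both distributions on shared covariates.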
no code implementations • 14 Nov 2018 • Xiao He, Francesco Alesiani, Ammar Shaker
Scaling up MTL methods to problems with a tremendous number of tasks remains a major challenge.