1 code implementation • 1 Dec 2022 • Ammar Shaker, Carolin Lawrence
With the rise of machine learning, survival analysis can be framed as learning a function that maps the patients under study to their survival times.
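As a hedged illustration of this framing (using the open-source lifelines library, not the method from this paper), the sketch below fits a Cox proportional hazards model to invented toy patient data and reads off the learned mapping from covariates to predicted survival times; all column names and values are made up for the example.

```python
# A minimal sketch, assuming the lifelines library: survival analysis as
# learning a function from patient covariates to survival times via a
# Cox proportional hazards model. All data below is invented toy data.
import pandas as pd
from lifelines import CoxPHFitter

# event=0 marks a censored patient: the true survival time is only
# known to exceed the recorded 'time'.
df = pd.DataFrame({
    "age":   [52, 60, 47, 71, 58],
    "stage": [1, 3, 2, 2, 1],
    "time":  [24.0, 6.5, 30.0, 12.0, 18.0],
    "event": [1, 1, 0, 1, 0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
# The learned function: patient covariates -> predicted median survival time.
print(cph.predict_median(df[["age", "stage"]]))
```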
no code implementations • 10 Jul 2022 • Bhushan Kotnis, Kiril Gashteovski, Julia Gastinger, Giuseppe Serra, Francesco Alesiani, Timo Sztyler, Ammar Shaker, Na Gong, Carolin Lawrence, Zhao Xu
With Human-Centric Research (HCR) we can steer research activities so that the research outcome is beneficial for human stakeholders, such as end users.
no code implementations • 25 May 2022 • Sascha Saralajew, Ammar Shaker, Zhao Xu, Kiril Gashteovski, Bhushan Kotnis, Wiem Ben Rim, Jürgen Quittek, Carolin Lawrence
Inspired by the Turing test, we introduce a human-centric assessment framework in which a leading domain expert accepts or rejects the solutions produced by an AI system and by another domain expert.
no code implementations • ACL 2022 • Bhushan Kotnis, Kiril Gashteovski, Daniel Oñoro Rubio, Vanesa Rodriguez-Tembras, Ammar Shaker, Makoto Takamoto, Mathias Niepert, Carolin Lawrence
In contrast, we explore the hypothesis that it may be beneficial to extract triple slots iteratively: first extract the easy slots, then the difficult ones conditioned on the easy slots, thereby achieving a better overall extraction.
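A schematic sketch of such an easy-to-hard pipeline is shown below; the extractor callables are toy rule-based stand-ins for learned models, not the paper's architecture, and the slot ordering is assumed for illustration.

```python
# A schematic sketch of iterative triple extraction (not the paper's
# model): extract the "easy" slot first, then condition the extraction
# of harder slots on it. The extractors here are placeholder callables.
from typing import Callable, Dict

def iterative_extract(sentence: str,
                      extract_relation: Callable[[str], str],
                      extract_subject: Callable[[str, str], str],
                      extract_object: Callable[[str, str, str], str]) -> Dict[str, str]:
    """Fill slots in easy-to-hard order, each step seeing earlier slots."""
    rel = extract_relation(sentence)            # easiest slot first
    subj = extract_subject(sentence, rel)       # conditioned on the relation
    obj = extract_object(sentence, rel, subj)   # conditioned on both
    return {"subject": subj, "relation": rel, "object": obj}

# Toy rule-based stand-ins for learned extractors.
triple = iterative_extract(
    "Marie Curie discovered polonium",
    extract_relation=lambda s: "discovered",
    extract_subject=lambda s, r: s.split(r)[0].strip(),
    extract_object=lambda s, r, subj: s.split(r)[1].strip(),
)
print(triple)  # {'subject': 'Marie Curie', 'relation': 'discovered', 'object': 'polonium'}
```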
no code implementations • 7 Aug 2021 • Ammar Shaker, Shujian Yu, Daniel Oñoro-Rubio
Feature similarity includes both the invariance of marginal distributions and the closeness of conditional distributions given the desired response $y$ (e.g., class labels).
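To make the two notions concrete, here is a minimal sketch (not the paper's estimator) that measures both with a kernel MMD on synthetic data: the gap between the marginal feature distributions, and the average gap between the class-conditional distributions given $y$.

```python
# A minimal sketch (not the paper's estimator) of the two notions of
# feature similarity: MMD between the marginal feature distributions,
# and MMD between class-conditional feature distributions given y.
import numpy as np

def rbf_mmd2(X, Z, gamma=1.0):
    """Biased squared MMD with an RBF kernel."""
    def k(A, B):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return k(X, X).mean() + k(Z, Z).mean() - 2 * k(X, Z).mean()

rng = np.random.default_rng(0)
Xs, ys = rng.normal(0.0, 1, (100, 5)), rng.integers(0, 2, 100)  # source
Xt, yt = rng.normal(0.2, 1, (100, 5)), rng.integers(0, 2, 100)  # target

marginal_gap = rbf_mmd2(Xs, Xt)  # invariance of marginal distributions
conditional_gap = np.mean([      # closeness of p(x | y) per class
    rbf_mmd2(Xs[ys == c], Xt[yt == c]) for c in (0, 1)
])
print(marginal_gap, conditional_gap)
```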
no code implementations • 2 Nov 2020 • Ammar Shaker, Shujian Yu, Francesco Alesiani
In this paper, we propose a continual learning (CL) technique that is beneficial to sequential task learners by improving their retained accuracy and reducing catastrophic forgetting.
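The excerpt does not spell out the mechanism, so purely as illustration the sketch below shows one common ingredient of such techniques: an EWC-style quadratic penalty that discourages drift of parameters important for earlier tasks. It is not necessarily the paper's method, and the Fisher weights here are placeholders.

```python
# Illustration only (an EWC-style penalty, not necessarily the paper's
# technique): penalize drift of parameters that were important for
# earlier tasks, weighted by a (placeholder) Fisher information.
import torch

def ewc_penalty(model, old_params, fisher, lam=10.0):
    """0.5 * lam * sum_i F_i * (theta_i - theta*_i)^2 over all parameters."""
    loss = 0.0
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * loss

model = torch.nn.Linear(4, 2)
old = {n: p.detach().clone() for n, p in model.named_parameters()}
fish = {n: torch.ones_like(p) for n, p in model.named_parameters()}  # placeholder Fisher
print(ewc_penalty(model, old, fish))  # zero before any parameter drift
# During task t+1: total_loss = task_loss + ewc_penalty(model, old, fish)
```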
no code implementations • 2 Nov 2020 • Ammar Shaker, Francesco Alesiani, Shujian Yu, Wenzhe Yin
This paper presents Bilevel Continual Learning (BiCL), a general framework for continual learning that fuses bilevel optimization and recent advances in meta-learning for deep neural networks.
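A toy sketch of the general bilevel pattern follows (a sketch of the pattern, not BiCL itself): an inner step adapts shared parameters to the current task, and an outer step updates them so that both the current task and remembered data are served. The quadratic "tasks" stand in for network losses.

```python
# A schematic bilevel loop (not BiCL itself): the inner problem fits the
# current task starting from shared parameters; the outer problem updates
# the shared parameters against current + remembered data.
import torch

def bilevel_step(shared, current_loss, memory_loss, inner_lr=0.1, outer_lr=0.01):
    # Inner problem: one adaptation step on the current task.
    inner_grad = torch.autograd.grad(current_loss(shared), shared, create_graph=True)[0]
    adapted = shared - inner_lr * inner_grad
    # Outer problem: evaluate adapted parameters on current + remembered data.
    outer = current_loss(adapted) + memory_loss(adapted)
    outer_grad = torch.autograd.grad(outer, shared)[0]
    return (shared - outer_lr * outer_grad).detach().requires_grad_()

# Toy quadratic "tasks" standing in for network losses.
shared = torch.zeros(3, requires_grad=True)
cur = lambda w: ((w - torch.tensor([1.0, 0.0, 0.0])) ** 2).sum()
mem = lambda w: ((w - torch.tensor([0.0, 1.0, 0.0])) ** 2).sum()
for _ in range(50):
    shared = bilevel_step(shared, cur, mem)
print(shared)  # settles between the two task optima
```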
no code implementations • 11 Sep 2020 • Shujian Yu, Francesco Alesiani, Ammar Shaker, Wenzhe Yin
We present a novel methodology to jointly perform multi-task learning and infer the intrinsic relationships among tasks via an interpretable and sparse graph.
no code implementations • 11 Sep 2020 • Francesco Alesiani, Shujian Yu, Ammar Shaker, Wenzhe Yin
Interpretable Multi-Task Learning can be expressed as learning a sparse graph of the task relationship based on the prediction performance of the learned models.
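As a simplified illustration of the idea behind the two entries above (not either paper's algorithm), the sketch below derives a sparse task-relation graph from per-task parameter vectors by thresholding a similarity kernel; the task parameters are synthetic.

```python
# A simplified sketch (not the papers' algorithm): read a sparse,
# interpretable task graph off per-task parameter vectors, where edge
# weights A[i, j] indicate that tasks i and j learned similar models.
import numpy as np

def sparse_task_graph(W, tau=0.5):
    """Edges between tasks with close parameters; sparse by thresholding."""
    D = ((W[:, None, :] - W[None, :, :]) ** 2).sum(-1)
    A = np.exp(-D)
    A[A < tau] = 0.0        # sparsify: drop weak relations
    np.fill_diagonal(A, 0.0)
    return A

rng = np.random.default_rng(1)
# Hypothetical tasks: 1 and 2 related, 3 unrelated.
W = np.vstack([rng.normal(0, .1, 4) + [1, 1, 0, 0],
               rng.normal(0, .1, 4) + [1, 1, 0, 0],
               rng.normal(0, .1, 4) + [0, 0, 5, 5]])
print(sparse_task_graph(W))  # nonzero entry only between tasks 1 and 2
```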
1 code implementation • 5 May 2020 • Shujian Yu, Ammar Shaker, Francesco Alesiani, Jose C. Principe
We propose a simple yet powerful test statistic to quantify the discrepancy between two conditional distributions.
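The sketch below conveys the general idea in a hedged form (it is not the paper's statistic): two conditional distributions are compared through the squared discrepancy of kernel-regression estimates of E[y|x] on a shared grid.

```python
# A hedged sketch of the general idea (not the paper's statistic):
# compare two conditional distributions p(y|x) and q(y|x) by the
# discrepancy of kernel-regression estimates of E[y|x] on a common grid.
import numpy as np

def cond_mean(x, y, grid, bw=0.3):
    """Nadaraya-Watson estimate of E[y|x] at the grid points."""
    w = np.exp(-((grid[:, None] - x[None, :]) ** 2) / (2 * bw ** 2))
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(0)
x1 = rng.uniform(-2, 2, 500); y1 = np.sin(x1) + rng.normal(0, .1, 500)
x2 = rng.uniform(-2, 2, 500); y2 = np.sin(x2) + 0.5 + rng.normal(0, .1, 500)

grid = np.linspace(-1.5, 1.5, 50)
stat = np.mean((cond_mean(x1, y1, grid) - cond_mean(x2, y2, grid)) ** 2)
print(stat)  # large when the conditionals differ (here, by a +0.5 shift)
```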
1 code implementation • 10 Nov 2019 • Ammar Shaker, Eyke Hüllermeier
The problem of adaptive learning from evolving and possibly non-stationary data streams has attracted considerable interest in machine learning in the recent past, and has also stimulated research in related fields such as computational intelligence and fuzzy systems.
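As a minimal illustration of one standard adaptation strategy discussed in this literature (illustrative only, not a specific method from the article), the sketch below keeps a sliding window of recent examples and periodically refits, so the model tracks concept drift.

```python
# A minimal sketch of sliding-window adaptation on an evolving stream
# (illustrative, not a method from the article): old, possibly stale
# examples are forgotten, so the model follows the drifting concept.
from collections import deque
import numpy as np
from sklearn.linear_model import LogisticRegression

window = deque(maxlen=200)   # bounded memory of recent examples
model = LogisticRegression()

rng = np.random.default_rng(0)
for t in range(1000):
    x = rng.normal(size=2)
    boundary = 0.0 if t < 500 else 1.0   # concept drift at t = 500
    y = int(x[0] > boundary)
    window.append((x, y))
    if t % 50 == 49:                     # periodic refit on the window
        X, Y = map(np.array, zip(*window))
        if len(set(Y)) > 1:              # need both classes to fit
            model.fit(X, Y)
```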
no code implementations • 14 Nov 2018 • Xiao He, Francesco Alesiani, Ammar Shaker
Scaling up MTL methods to problems with a tremendous number of tasks remains a major challenge.
no code implementations • 17 Apr 2018 • Jihed Khiari, Luis Moreira-Matias, Ammar Shaker, Bernard Zenko, Saso Dzeroski
The proposed method and meta-features are designed so that they enable good predictive performance even in subregions of the input space that are not adequately represented in the available training data.
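A schematic sketch of per-instance model selection with landmark-style meta-features follows (illustrative, not the paper's method): a meta-learner decides which base regressor to trust for each query point; all models, features, and data here are toy choices.

```python
# A schematic sketch (not the paper's method): per-instance selection
# among base regressors, driven by landmark-style meta-features (the
# base models' own predictions appended to the raw inputs).
import numpy as np
from sklearn.tree import DecisionTreeRegressor, DecisionTreeClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (400, 1))
y = np.where(X[:, 0] < 0, 2 * X[:, 0], np.sin(3 * X[:, 0])) + rng.normal(0, .1, 400)

bases = [LinearRegression().fit(X, y), DecisionTreeRegressor(max_depth=4).fit(X, y)]
errors = np.stack([np.abs(b.predict(X) - y) for b in bases], axis=1)
meta_X = np.hstack([X] + [b.predict(X)[:, None] for b in bases])  # landmark meta-features
meta = DecisionTreeClassifier(max_depth=4).fit(meta_X, errors.argmin(axis=1))

def predict(Xq):
    """Route each query to the base model the meta-learner prefers."""
    preds = np.stack([b.predict(Xq) for b in bases], axis=1)
    choice = meta.predict(np.hstack([Xq, preds]))
    return preds[np.arange(len(Xq)), choice]

print(predict(np.array([[-1.0], [2.0]])))
```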