Search Results for author: Manolis Koubarakis

Found 6 papers, 3 papers with code

Efficient Learning of Multiple NLP Tasks via Collective Weight Factorization on BERT

no code implementations · Findings (NAACL) 2022 · Christos Papadopoulos, Yannis Panagakis, Manolis Koubarakis, Mihalis Nicolaou

We test our proposed method by finetuning on multiple natural language understanding tasks, employing BERT-Large as an instantiation of the Transformer and GLUE as the evaluation benchmark.
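The title's "collective weight factorization" suggests sharing factors across tasks rather than finetuning a full weight matrix per task. The sketch below is a generic low-rank multi-task factorization, not the paper's actual method; all shapes and names are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch (not the paper's implementation): each task's weight
# matrix is built from factors shared across all tasks plus a small
# task-specific vector, so tasks collectively reuse most parameters.
d, rank, num_tasks = 8, 2, 3

rng = np.random.default_rng(0)
U = rng.standard_normal((d, rank))          # left factor, shared by all tasks
V = rng.standard_normal((rank, d))          # right factor, shared by all tasks
s = rng.standard_normal((num_tasks, rank))  # one small scaling vector per task

def task_weight(t):
    """Weight for task t assembled from shared factors: W_t = U @ diag(s_t) @ V."""
    return U @ np.diag(s[t]) @ V

# Parameter count vs. keeping an independent d x d matrix per task.
shared_params = U.size + V.size + s.size
independent_params = num_tasks * d * d
print(shared_params, independent_params)  # 38 192
```

The point of such a factorization is that adding a task costs only one extra `rank`-sized vector, so the parameter count grows far more slowly than with independent per-task matrices.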

Natural Language Understanding

Reasoning over Description Logic-based Contexts with Transformers

no code implementations · 15 Nov 2023 · Angelos Poulis, Eleni Tsalapati, Manolis Koubarakis

One way the current state of the art measures the reasoning ability of transformer-based models is by evaluating their accuracy on downstream tasks, such as logical question answering or proof generation, over synthetic contexts expressed in natural language.
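The evaluation setup described above, question answering over synthetic natural-language contexts, can be illustrated with a toy example. The context, question, and solver below are invented for illustration and are not drawn from the paper's benchmark:

```python
# Toy illustration (hypothetical, not from the paper's dataset): rules and a
# fact are verbalized as a natural-language context, and a model under
# evaluation must answer a yes/no question by chaining the rules.
context = (
    "Everyone who studies logic is a student. "
    "Every student is a person. "
    "Anna studies logic."
)
question = "Is Anna a person?"

# A trivial symbolic solver standing in for the transformer being evaluated:
# forward-chain the rules over the known fact.
facts = {("Anna", "studies_logic")}
rules = [("studies_logic", "student"), ("student", "person")]

for premise, conclusion in rules:  # rules listed in derivation order
    for entity, prop in list(facts):
        if prop == premise:
            facts.add((entity, conclusion))

answer = "yes" if ("Anna", "person") in facts else "no"
print(answer)  # yes
```

Benchmarks of this kind score a model by comparing its answer against the one a symbolic reasoner derives from the same context.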

Question Answering

A Review of the Role of Causality in Developing Trustworthy AI Systems

1 code implementation · 14 Feb 2023 · Niloy Ganguly, Dren Fazlija, Maryam Badar, Marco Fisichella, Sandipan Sikdar, Johanna Schrader, Jonas Wallat, Koustav Rudra, Manolis Koubarakis, Gourab K. Patro, Wadhah Zai El Amri, Wolfgang Nejdl

This review aims to provide the reader with an overview of causal methods that have been developed to improve the trustworthiness of AI models.
