Search Results for author: Manolis Koubarakis

Found 6 papers, 4 papers with code

Efficient Learning of Multiple NLP Tasks via Collective Weight Factorization on BERT

no code implementations · Findings (NAACL) 2022 · Christos Papadopoulos, Yannis Panagakis, Manolis Koubarakis, Mihalis Nicolaou

We test our proposed method by fine-tuning on multiple natural language understanding tasks, employing BERT-Large as an instantiation of the Transformer and GLUE as the evaluation benchmark.

Natural Language Understanding

Transformers in the Service of Description Logic-based Contexts

1 code implementation · 15 Nov 2023 · Angelos Poulis, Eleni Tsalapati, Manolis Koubarakis

In this way, we systematically investigate the reasoning ability of a supervised fine-tuned DeBERTa-based model and of two large language models (GPT-3.5, GPT-4) with few-shot prompting.

GPT-3.5, GPT-4

A Review of the Role of Causality in Developing Trustworthy AI Systems

1 code implementation · 14 Feb 2023 · Niloy Ganguly, Dren Fazlija, Maryam Badar, Marco Fisichella, Sandipan Sikdar, Johanna Schrader, Jonas Wallat, Koustav Rudra, Manolis Koubarakis, Gourab K. Patro, Wadhah Zai El Amri, Wolfgang Nejdl

This review aims to provide the reader with an overview of causal methods that have been developed to improve the trustworthiness of AI models.