Search Results for author: Manolis Koubarakis

Found 10 papers, 5 papers with code

Efficient Learning of Multiple NLP Tasks via Collective Weight Factorization on BERT

no code implementations • Findings (NAACL) 2022 • Christos Papadopoulos, Yannis Panagakis, Manolis Koubarakis, Mihalis Nicolaou

We test our proposed method on finetuning multiple natural language understanding tasks by employing BERT-Large as an instantiation of the Transformer and GLUE as the evaluation benchmark.

Natural Language Understanding
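
For the entry above, a minimal sketch of one way a weight matrix can be factorized and shared across tasks: tasks share the factors U and V and keep only a small per-task scaling vector. This is an illustrative low-rank construction, not the paper's exact factorization or training recipe; the class name, rank, and shapes are assumptions.

```python
import torch
import torch.nn as nn

class FactorizedTaskLinear(nn.Module):
    """Illustrative multi-task linear layer: all tasks share the factors U and V
    and differ only in a per-task diagonal scaling (one small vector per task)."""

    def __init__(self, in_features, out_features, num_tasks, rank=64):
        super().__init__()
        self.U = nn.Parameter(torch.randn(out_features, rank) * 0.02)  # shared
        self.V = nn.Parameter(torch.randn(rank, in_features) * 0.02)   # shared
        self.task_scales = nn.Parameter(torch.ones(num_tasks, rank))   # per task
        self.bias = nn.Parameter(torch.zeros(num_tasks, out_features))

    def forward(self, x, task_id):
        # Reconstruct the task-specific weight W_t = U diag(s_t) V on the fly.
        weight = (self.U * self.task_scales[task_id]) @ self.V
        return x @ weight.T + self.bias[task_id]

# Example: BERT-Large-sized hidden states routed through task 2 of 8.
layer = FactorizedTaskLinear(in_features=1024, out_features=1024, num_tasks=8)
hidden = torch.randn(4, 128, 1024)   # (batch, sequence, hidden)
out = layer(hidden, task_id=2)
print(out.shape)  # torch.Size([4, 128, 1024])
```

The design choice the sketch illustrates is that the per-task parameter count grows only with the rank, not with the full weight matrix, which is what makes sharing one backbone across many GLUE tasks cheap.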

Transformer-based Language Models for Reasoning in the Description Logic ALCQ

no code implementations • 12 Oct 2024 • Angelos Poulis, Eleni Tsalapati, Manolis Koubarakis

In this way, we systematically investigate the logical reasoning capabilities of a supervised fine-tuned DeBERTa-based model and two large language models (GPT-3.5, GPT-4) with few-shot prompting.

Logical Reasoning
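
For the entry above, a minimal sketch of few-shot prompting an LLM on an entailment question over a description-logic-style context, using the OpenAI Python client (v1+). The prompt format, the toy examples, and the answer labels are assumptions for illustration; the paper's benchmark and evaluation protocol are not reproduced here.

```python
# Illustrative few-shot prompt for entailment over a description-logic-style context.
from openai import OpenAI  # assumes the openai>=1.0 Python client and OPENAI_API_KEY set

FEW_SHOT = """\
Context: Every professor teaches at least two courses. Anna is a professor.
Question: Does Anna teach at least two courses? Answer: True

Context: Every bird has exactly two wings. Rex is a dog.
Question: Does Rex have exactly two wings? Answer: Unknown
"""

def ask(context: str, question: str, model: str = "gpt-4") -> str:
    client = OpenAI()
    prompt = f"{FEW_SHOT}\nContext: {context}\nQuestion: {question} Answer:"
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,  # deterministic decoding for evaluation-style runs
    )
    return resp.choices[0].message.content.strip()

print(ask("Every department has at most one head. CS is a department.",
          "Does CS have at most one head?"))
```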

The Large Language Model GreekLegalRoBERTa

no code implementations • 10 Oct 2024 • Vasileios Saketos, Despina-Athanasia Pantazi, Manolis Koubarakis

We develop four versions of GreekLegalRoBERTa, which are four large language models trained on Greek legal and nonlegal text.

Language Modeling • Language Modelling +5
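
For the entry above, a minimal sketch of how a RoBERTa-style Greek model could be queried for masked-token prediction with Hugging Face transformers. The checkpoint identifier below is hypothetical; the released model IDs for GreekLegalRoBERTa may differ.

```python
from transformers import pipeline

# Hypothetical checkpoint name, used only for illustration.
fill = pipeline("fill-mask", model="AI-team-UoA/GreekLegalRoBERTa_v1")

# Greek legal-style sentence with the RoBERTa mask token ("The court issued its ___.").
for pred in fill("Το δικαστήριο εξέδωσε την <mask> του.", top_k=3):
    print(pred["token_str"], round(pred["score"], 3))
```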

Transformers in the Service of Description Logic-based Contexts

1 code implementation • 15 Nov 2023 • Angelos Poulis, Eleni Tsalapati, Manolis Koubarakis

In this way, we systematically investigate the reasoning ability of a supervised fine-tuned DeBERTa-based model and of two large language models (GPT-3.5, GPT-4) with few-shot prompting.

Question Answering
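
For the entry above, a minimal sketch of the supervised side of the comparison: fine-tuning a DeBERTa model as a (context, question) classifier. The base checkpoint, toy example, labels, and hyperparameters are placeholders, not the paper's dataset or configuration.

```python
# Illustrative fine-tuning setup for (context, question) -> label classification.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

tok = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-base", num_labels=2)

# Toy example standing in for synthetically generated description-logic contexts.
data = Dataset.from_dict({
    "context": ["Every professor teaches at least two courses. Anna is a professor."],
    "question": ["Anna teaches at least two courses."],
    "label": [1],
})

def encode(batch):
    # Encode context and question as a sentence pair.
    return tok(batch["context"], batch["question"],
               truncation=True, padding="max_length", max_length=256)

data = data.map(encode, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=8,
                           num_train_epochs=1),
    train_dataset=data,
)
trainer.train()
```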

A Review of the Role of Causality in Developing Trustworthy AI Systems

1 code implementation • 14 Feb 2023 • Niloy Ganguly, Dren Fazlija, Maryam Badar, Marco Fisichella, Sandipan Sikdar, Johanna Schrader, Jonas Wallat, Koustav Rudra, Manolis Koubarakis, Gourab K. Patro, Wadhah Zai El Amri, Wolfgang Nejdl

This review aims to provide the reader with an overview of causal methods that have been developed to improve the trustworthiness of AI models.
