1 code implementation • 22 Jan 2024 • Jonas Wallat, Adam Jatowt, Avishek Anand
In this study, we aim to investigate the underlying limitations of general-purpose LLMs when deployed for tasks that require temporal understanding.
1 code implementation • 29 Jul 2023 • Soumyadeep Roy, Jonas Wallat, Sowmya S Sundaram, Wolfgang Nejdl, Niloy Ganguly
Large-scale language models such as DNABert and LOGO aim to learn optimal gene representations and are trained on the entire Human Reference Genome.
1 code implementation • 12 Jun 2023 • Jonas Wallat, Tianyi Zhang, Avishek Anand
To foster reproducibility, the code and the data used in this paper are openly available.
1 code implementation • 14 Feb 2023 • Niloy Ganguly, Dren Fazlija, Maryam Badar, Marco Fisichella, Sandipan Sikdar, Johanna Schrader, Jonas Wallat, Koustav Rudra, Manolis Koubarakis, Gourab K. Patro, Wadhah Zai El Amri, Wolfgang Nejdl
This review aims to provide the reader with an overview of causal methods that have been developed to improve the trustworthiness of AI models.
no code implementations • 4 Nov 2022 • Avishek Anand, Lijun Lyu, Maximilian Idahl, Yumeng Wang, Jonas Wallat, Zijian Zhang
Explainable information retrieval is an emerging research area that aims to make information retrieval systems transparent and trustworthy.
1 code implementation • EMNLP (BlackboxNLP) 2020 • Jonas Wallat, Jaspreet Singh, Avishek Anand
We found that ranking models forget the least and retain more knowledge in their final layer compared to models fine-tuned on masked language modeling and question answering.
1 code implementation • 19 Oct 2020 • Jonas Wallat, Jaspreet Singh, Avishek Anand
We found that ranking models forget the least and retain more knowledge in their final layer.