1 code implementation • 31 Oct 2024 • Haritz Puerto, Martin Gubri, Sangdoo Yun, Seong Joon Oh
Membership inference attacks (MIAs) attempt to determine whether a given data sample was part of a model's training set.
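A common baseline for membership inference is a loss-threshold attack: since models tend to fit their training data, a sample with unusually low loss is guessed to be a member. The sketch below is illustrative only, with invented losses and a hypothetical threshold; it is not the method of the paper above.

```python
# Minimal loss-threshold membership inference sketch.
# Assumption: we can query the model's per-sample loss.
# All values below are toy numbers for illustration.

def infer_membership(loss: float, threshold: float) -> bool:
    """Guess 'member' when the model's loss on the sample is below threshold."""
    return loss < threshold

# Toy per-sample losses: members typically have lower loss than non-members.
member_losses = [0.05, 0.12, 0.08]
nonmember_losses = [0.90, 1.40, 0.75]
threshold = 0.5  # in practice calibrated on held-out data

guesses = [infer_membership(l, threshold) for l in member_losses + nonmember_losses]
# → [True, True, True, False, False, False]
```

In practice the threshold is calibrated on data known not to be in the training set, and stronger attacks use calibrated or reference-model scores rather than raw loss.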
1 code implementation • 3 Jul 2024 • Haritz Puerto, Tilek Chubakov, Xiaodan Zhu, Harish Tayyar Madabushi, Iryna Gurevych
In fact, it has been found that instruction tuning on these intermediate reasoning steps improves model performance.
1 code implementation • 18 Jan 2024 • Haritz Puerto, Martin Tutek, Somak Aditya, Xiaodan Zhu, Iryna Gurevych
Reasoning is a fundamental component of language understanding.
no code implementations • 29 Jun 2023 • Ji-Ung Lee, Haritz Puerto, Betty van Aken, Yuki Arase, Jessica Zosa Forde, Leon Derczynski, Andreas Rücklé, Iryna Gurevych, Roy Schwartz, Emma Strubell, Jesse Dodge
Many recent improvements in NLP stem from the development and use of large pre-trained language models (PLMs) with billions of parameters.
1 code implementation • 31 May 2023 • Haishuo Fang, Haritz Puerto, Iryna Gurevych
To evaluate the effectiveness of UKP-SQuARE in teaching scenarios, we adopted it in a postgraduate NLP course and surveyed the students after the course.
1 code implementation • 31 Mar 2023 • Haritz Puerto, Tim Baumgärtner, Rachneet Sachdeva, Haishuo Fang, Hao Zhang, Sewin Tariverdian, Kexin Wang, Iryna Gurevych
To ease research in multi-agent models, we extend UKP-SQuARE, an online platform for QA research, to support three families of multi-agent systems: i) agent selection, ii) early-fusion of agents, and iii) late-fusion of agents.
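Of the three families above, late fusion is the simplest to illustrate: each agent answers independently, and their confidence scores are combined per candidate answer. The sketch below is a generic illustration under that reading; agent outputs and scores are invented, and this is not UKP-SQuARE's actual API.

```python
# Hedged sketch of late fusion over QA agents: sum each candidate
# answer's confidence across agents and return the top-scoring one.
# Inputs are invented for illustration.
from collections import defaultdict

def late_fusion(agent_outputs):
    """agent_outputs: list of (answer, confidence) pairs, one per agent."""
    scores = defaultdict(float)
    for answer, conf in agent_outputs:
        scores[answer] += conf
    return max(scores, key=scores.get)

outputs = [("Paris", 0.9), ("Lyon", 0.4), ("Paris", 0.7)]
# → "Paris"
```

Agent selection would instead route the question to a single agent up front, and early fusion would combine the agents' internal representations before answering.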
1 code implementation • 19 Aug 2022 • Rachneet Sachdeva, Haritz Puerto, Tim Baumgärtner, Sewin Tariverdian, Hao Zhang, Kexin Wang, Hossain Shaikh Saadi, Leonardo F. R. Ribeiro, Iryna Gurevych
In this paper, we introduce SQuARE v2, the new version of SQuARE, to provide an explainability infrastructure for comparing models based on methods such as saliency maps and graph-based explanations.
1 code implementation • ACL 2022 • Tim Baumgärtner, Kexin Wang, Rachneet Sachdeva, Max Eichler, Gregor Geigle, Clifton Poth, Hannah Sterz, Haritz Puerto, Leonardo F. R. Ribeiro, Jonas Pfeiffer, Nils Reimers, Gözde Gül Şahin, Iryna Gurevych
Recent advances in NLP and information retrieval have given rise to a diverse set of question answering tasks that are of different formats (e.g., extractive, abstractive), require different model architectures (e.g., generative, discriminative), and setups (e.g., with or without retrieval).
1 code implementation • 3 Dec 2021 • Haritz Puerto, Gözde Gül Şahin, Iryna Gurevych
The recent explosion of question answering (QA) datasets and models has increased interest in the generalization of models across multiple domains and formats, either by training on multiple datasets or by combining multiple models.