no code implementations • NAACL (GeBNLP) 2022 • Emeralda Sesari, Max Hort, Federica Sarro
Pre-trained word embedding models are easily distributed and applied, as they relieve users of the effort of training such models themselves.
no code implementations • 15 Jan 2024 • Fernando Vallecillos Ruiz, Anastasiia Grishina, Max Hort, Leon Moonen
We investigate whether this correction capability of Large Language Models (LLMs) extends to Automatic Program Repair (APR).
no code implementations • 5 Jul 2023 • Max Hort, Anastasiia Grishina, Leon Moonen
Large language models trained on source code can support a variety of software development tasks, such as code recommendation and program repair.
1 code implementation • 8 May 2023 • Anastasiia Grishina, Max Hort, Leon Moonen
These findings show that early layers can be used to obtain better results using the same resources, as well as to reduce resource usage during fine-tuning and inference.
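To make the idea concrete, below is a minimal sketch of using only the early layers of a pre-trained code encoder, assuming a CodeBERT-style model loaded via HuggingFace transformers; the model name, number of kept layers, and pooling choice are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch: truncate a pre-trained code encoder to its early layers
# before fine-tuning/inference. Model name and layer count are assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "microsoft/codebert-base"  # assumed CodeBERT-style encoder
KEEP_LAYERS = 4                          # keep only the first 4 transformer layers

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)

# Drop the later encoder layers so fine-tuning and inference
# only pay for the early ones.
model.encoder.layer = model.encoder.layer[:KEEP_LAYERS]
model.config.num_hidden_layers = KEEP_LAYERS

inputs = tokenizer("def add(a, b): return a + b", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pooled [CLS]-style representation from the truncated encoder,
# on which a small task-specific classification head could be trained.
cls_embedding = outputs.last_hidden_state[:, 0, :]
print(cls_embedding.shape)  # (1, hidden_size)
```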
no code implementations • 17 Sep 2022 • Minghua Ma, Zhao Tian, Max Hort, Federica Sarro, Hongyu Zhang, Qingwei Lin, Dongmei Zhang
In this paper, we propose an approach for selecting the initial seeds used to generate individual discriminatory instances (IDIs) for fairness testing.
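For context, below is a minimal sketch of the individual-discrimination check that fairness testing typically applies to generated inputs, assuming a binary protected attribute and a scikit-learn-style classifier; the seed-selection strategy itself is the paper's contribution and is not reproduced here.

```python
# Minimal sketch of an IDI check: an input counts as an individual
# discriminatory instance if changing only the protected attribute
# changes the model's prediction. Feature layout, protected-attribute
# index, and `select_seeds` are illustrative/hypothetical assumptions.
import numpy as np

def is_idi(model, x, protected_idx, protected_values=(0, 1)):
    """Return True if x and its protected-attribute variants get different labels."""
    x = np.asarray(x, dtype=float)
    preds = set()
    for value in protected_values:
        variant = x.copy()
        variant[protected_idx] = value
        preds.add(int(model.predict(variant.reshape(1, -1))[0]))
    return len(preds) > 1

# Usage (hypothetical seed-selection step feeding the check):
# seeds = select_seeds(dataset)
# idis = [s for s in seeds if is_idi(clf, s, protected_idx=2)]
```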
no code implementations • 14 Jul 2022 • Max Hort, Zhenpeng Chen, Jie M. Zhang, Mark Harman, Federica Sarro
How many datasets are used for evaluating bias mitigation methods?