no code implementations • 26 May 2025 • Jian Gu, Aldeida Aleti, Chunyang Chen, Hongyu Zhang
Despite the remarkable capabilities of Language Models (LMs) across diverse tasks, no single model consistently outperforms others, necessitating efficient methods to combine their strengths without expensive retraining.
no code implementations • 17 Mar 2025 • Jian Gu, Aldeida Aleti, Chunyang Chen, Hongyu Zhang
Language Models (LMs) are widely used in software engineering for code generation, but they may produce code with errors.
no code implementations • 20 Dec 2024 • Guoxiang Guo, Aldeida Aleti, Neelofar Neelofar, Chakkrit Tantithamthavorn
In this paper, we propose MORTAR, a MetamORphic multi-TuRn diAlogue testing appRoach, which mitigates the test oracle problem in the assessment of LLM-based dialogue systems.
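Below is a minimal, illustrative sketch of the general idea behind metamorphic dialogue testing (not the MORTAR implementation itself): perturb a user utterance in a meaning-preserving way and check that the system's responses stay consistent, so no ground-truth oracle is needed. The `dialogue_system`, `paraphrase`, and `similar` components are hypothetical placeholders.

```python
# Illustrative sketch (not MORTAR): a metamorphic check for one dialogue turn.
# The oracle problem is sidestepped by comparing the system's answers to an
# original and a perturbed-but-equivalent utterance.

def paraphrase(utterance: str) -> str:
    """Hypothetical meaning-preserving perturbation (metamorphic transformation)."""
    return "Could you tell me, " + utterance.lower()

def metamorphic_violation(dialogue_system, history, utterance, similar) -> bool:
    """Return True if the metamorphic relation is violated.

    dialogue_system(history, utterance) -> response   (system under test, assumed API)
    similar(a, b) -> bool                             (assumed semantic-similarity check)
    """
    original_response = dialogue_system(history, utterance)
    followup_response = dialogue_system(history, paraphrase(utterance))
    # Metamorphic relation: equivalent inputs in the same context
    # should yield semantically consistent responses.
    return not similar(original_response, followup_response)

# Usage with toy stand-ins for the assumed components.
if __name__ == "__main__":
    system = lambda history, u: "Paris" if "capital of france" in u.lower() else "I am not sure."
    similar = lambda a, b: a.strip().lower() == b.strip().lower()
    print(metamorphic_violation(system, [], "What is the capital of France?", similar))
```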
no code implementations • 30 Oct 2024 • Lam Nguyen Tung, Steven Cho, Xiaoning Du, Neelofar Neelofar, Valerio Terragni, Stefano Ruberto, Aldeida Aleti
We compare TOKI with a naive baseline based solely on model confidence, and compare the TOKI-guided adversarial attack method with A2T, a state-of-the-art adversarial attack method.
no code implementations • 17 Jun 2024 • Jian Gu, Aldeida Aleti, Chunyang Chen, Hongyu Zhang
Existing work, such as parameter-efficient finetuning (PEFT), often focuses on "how to finetune" but neglects the issue of "where to finetune".
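A hedged sketch of the "where to finetune" idea in PyTorch: rather than adapting all parameters, freeze the model and unfreeze only a scored subset of parameter tensors. The magnitude-based score used here is a placeholder, not the paper's locating criterion.

```python
# Illustrative sketch only: select *where* to finetune by unfreezing a small,
# scored subset of parameter tensors and freezing everything else.
# The magnitude-based score below is a placeholder, not the paper's criterion.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

# Score each parameter tensor (placeholder: mean absolute value).
scores = {name: p.abs().mean().item() for name, p in model.named_parameters()}
top_k = sorted(scores, key=scores.get, reverse=True)[:2]  # pick the 2 "best" locations

for name, p in model.named_parameters():
    p.requires_grad = name in top_k  # finetune only the selected locations

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print("finetuning only:", trainable)
```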
no code implementations • 29 Jan 2024 • Jian Gu, Aldeida Aleti, Chunyang Chen, Hongyu Zhang
To address this, we propose complementing in-context learning with an additional clustering operation.
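A small sketch of one plausible reading of "in-context learning plus clustering": cluster the candidate demonstrations and draw one representative per cluster so the prompt covers diverse examples. The embeddings and prompt layout are assumptions, not the paper's exact procedure.

```python
# Illustrative sketch: cluster candidate demonstrations, then pick one
# representative per cluster to build a diverse in-context prompt.
# Embeddings and prompt format are placeholders, not the paper's method.
import numpy as np
from sklearn.cluster import KMeans

demos = ["add two numbers", "sort a list", "reverse a string",
         "sum a list", "merge two sorted lists", "uppercase a string"]

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(demos), 8))   # stand-in for real text embeddings

k = 3
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)

# One representative per cluster: the demo closest to its centroid.
prompt_demos = []
for c in range(k):
    members = np.where(kmeans.labels_ == c)[0]
    centroid = kmeans.cluster_centers_[c]
    best = members[np.argmin(np.linalg.norm(embeddings[members] - centroid, axis=1))]
    prompt_demos.append(demos[best])

prompt = "\n".join(f"Example: {d}" for d in prompt_demos) + "\nTask: <new query here>"
print(prompt)
```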
no code implementations • 8 Dec 2023 • Jian Gu, Aldeida Aleti, Chunyang Chen, Hongyu Zhang
MINT leverages the semantic property of language models to perform neuron-level repairs in a novel way.
no code implementations • 10 Feb 2020 • Aldeida Aleti, Matias Martinez
We introduce a new approach, Explaining Automated Program Repair (E-APR), which identifies features of buggy programs that explain why a particular instance is difficult for an APR technique.
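A hedged sketch of the general recipe behind this kind of analysis: describe each buggy program with static features, fit an interpretable model that predicts whether an APR technique repairs it, and read off the feature importances. The feature set, data, and model choice here are illustrative assumptions, not the E-APR pipeline.

```python
# Illustrative sketch (not E-APR itself): learn which program features make a
# bug hard for an APR technique, using an interpretable classifier.
# Features, labels, and model choice are placeholder assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

feature_names = ["loc", "cyclomatic_complexity", "num_methods", "fault_span"]
rng = np.random.default_rng(1)
X = rng.integers(1, 100, size=(60, len(feature_names))).astype(float)  # toy buggy programs
y = (X[:, 3] > 40).astype(int)  # toy label: 1 = APR technique failed to repair

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# "Explanation": which features drive repair difficulty for this technique.
for name, importance in zip(feature_names, clf.feature_importances_):
    print(f"{name:>24}: {importance:.2f}")
```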
no code implementations • 9 Jan 2020 • Mark Wallace, Aldeida Aleti
For most practical optimisation problems, local search outperforms random sampling, despite the "No Free Lunch" theorem.
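A tiny, self-contained comparison that illustrates the claim on a toy continuous minimisation task: under the same evaluation budget, greedy local search typically reaches a better value than uniform random sampling. The objective and budget are illustrative only.

```python
# Illustrative comparison: local search vs. random sampling on a toy
# minimisation problem, under an equal budget of objective evaluations.
import random

def objective(x):                      # toy landscape: minimum at x = 3
    return (x - 3.0) ** 2

def random_sampling(budget):
    return min(objective(random.uniform(-10, 10)) for _ in range(budget))

def local_search(budget, step=0.5):
    x = random.uniform(-10, 10)
    best = objective(x)
    for _ in range(budget - 1):
        candidate = x + random.uniform(-step, step)
        value = objective(candidate)
        if value < best:               # greedy hill-climbing move
            x, best = candidate, value
    return best

random.seed(0)
print("random sampling:", random_sampling(200))
print("local search   :", local_search(200))
```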
1 code implementation • 5 Dec 2019 • Aldeida Aleti, Mark Wallace, Markus Wagner
Premature convergence can be detrimental to the performance of search methods, which is why many search algorithms include restart strategies to deal with it.
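A minimal sketch of a stagnation-triggered restart strategy wrapped around a simple local search: if no improvement is seen for a fixed number of steps, the search restarts from a fresh random point while keeping the best solution found so far. The objective and stagnation threshold are illustrative choices, not the paper's strategy.

```python
# Illustrative sketch of a restart strategy: restart the local search from a
# random point whenever it stagnates, keeping the best-so-far solution.
import random

def objective(x):                      # toy multimodal landscape
    return (x - 3.0) ** 2 + 2.0 * abs((x * 2.0) % 3.0 - 1.5)

def local_search_with_restarts(budget, step=0.2, stagnation_limit=30):
    best_overall = float("inf")
    x, best, stagnant = random.uniform(-10, 10), float("inf"), 0
    for _ in range(budget):
        candidate = x + random.uniform(-step, step)
        value = objective(candidate)
        if value < best:
            x, best, stagnant = candidate, value, 0
        else:
            stagnant += 1
        if stagnant >= stagnation_limit:          # premature convergence suspected
            best_overall = min(best_overall, best)
            x, best, stagnant = random.uniform(-10, 10), float("inf"), 0  # restart
    return min(best_overall, best)

random.seed(0)
print("best with restarts:", local_search_with_restarts(2000))
```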
no code implementations • 28 Oct 2019 • Phillip Smith, Robert Hunjet, Aldeida Aleti, Asad Khan
In this paper, we present an extension of our previous work: we increase the robustness and coverage of the evolutionary search via hybridisation with a state-of-the-art novelty search, and accelerate the individual agent behaviour searches via a novel behaviour-component sharing technique.
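A small sketch of the novelty-search ingredient mentioned here: candidates are scored not by fitness but by how behaviourally different they are from an archive of previously seen behaviours (mean distance to the k nearest neighbours). The behaviour descriptor and parameters are illustrative assumptions, not this paper's hybrid.

```python
# Illustrative sketch of novelty-search scoring: reward behaviours that are
# far from what has been seen before, rather than rewarding raw fitness.
import numpy as np

def novelty(behaviour, archive, k=3):
    """Mean distance to the k nearest behaviours in the archive."""
    if not archive:
        return float("inf")
    dists = np.sort([np.linalg.norm(behaviour - b) for b in archive])
    return float(np.mean(dists[:k]))

rng = np.random.default_rng(0)
archive = []
for generation in range(5):
    # Hypothetical behaviour descriptors of candidate agent controllers.
    candidates = rng.normal(size=(8, 2))
    scores = [novelty(c, archive) for c in candidates]
    most_novel = candidates[int(np.argmax(scores))]
    archive.append(most_novel)                    # grow the behaviour archive
    print(f"gen {generation}: added behaviour {np.round(most_novel, 2)}")
```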
no code implementations • 28 Oct 2019 • Phillip Smith, Aldeida Aleti, Vincent C. S. Lee, Robert Hunjet, Asad Khan
This new HGN is called Robotic-HGN (R-HGN), as it matches robot environment observations to environment labels via fusion of match probabilities from both temporal and intra-swarm collections.