Search Results for author: Max Hort

Found 6 papers, 1 paper with code

An Empirical Study on the Fairness of Pre-trained Word Embeddings

No code implementations · NAACL (GeBNLP) 2022 · Emeralda Sesari, Max Hort, Federica Sarro

Pre-trained word embedding models are easily distributed and applied, as they spare users the effort of training models themselves.

Fairness · Word Embeddings

An Exploratory Literature Study on Sharing and Energy Use of Language Models for Source Code

No code implementations · 5 Jul 2023 · Max Hort, Anastasiia Grishina, Leon Moonen

Large language models trained on source code can support a variety of software development tasks, such as code recommendation and program repair.

Program Repair

The EarlyBIRD Catches the Bug: On Exploiting Early Layers of Encoder Models for More Efficient Code Classification

1 code implementation · 8 May 2023 · Anastasiia Grishina, Max Hort, Leon Moonen

These findings show that early layers can be used to obtain better results with the same resources, as well as to reduce resource usage during fine-tuning and inference.
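The idea of cutting an encoder off at an early layer can be sketched as follows. This is a minimal illustrative toy, not the paper's actual implementation: the encoder layers, weights, and pooling below are hypothetical stand-ins for a pre-trained transformer such as those studied in the paper.

```python
# Hypothetical sketch of the early-layer idea: instead of classifying with
# the final layer of an encoder, mean-pool the hidden states of an EARLY
# layer and feed them to a lightweight classifier head. The toy "encoder"
# here is illustrative only, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)

def toy_encoder_layer(x, w):
    # Stand-in for one transformer layer: linear map + nonlinearity.
    return np.tanh(x @ w)

def encode_with_hidden_states(tokens, weights):
    """Run all layers, keeping every intermediate hidden state."""
    states = [tokens]
    h = tokens
    for w in weights:
        h = toy_encoder_layer(h, w)
        states.append(h)
    return states

# Toy setup: a 12-layer encoder, hidden size 16, sequence of 8 "tokens".
dim, n_layers = 16, 12
weights = [rng.normal(scale=0.5, size=(dim, dim)) for _ in range(n_layers)]
tokens = rng.normal(size=(8, dim))

states = encode_with_hidden_states(tokens, weights)

# Early cut: pool layer 4's output instead of layer 12's. A classifier
# head would consume `early`; the remaining layers never need to run,
# which is where the fine-tuning and inference savings come from.
early = states[4].mean(axis=0)   # mean-pool over tokens -> one vector
final = states[-1].mean(axis=0)
print(early.shape, final.shape)
```

In practice one would extract hidden states from a real pre-trained code encoder rather than this toy stack; the structural point is only that the classifier input can come from any layer, not just the last.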

Code Classification · Defect Detection · +2
