Search Results for author: Filip Klubička

Found 10 papers, 4 papers with code

Idioms, Probing and Dangerous Things: Towards Structural Probing for Idiomaticity in Vector Space

no code implementations · 27 Apr 2023 · Filip Klubička, Vasudevan Nedumpozhimana, John D. Kelleher

The goal of this paper is to learn more about how idiomatic information is structurally encoded in embeddings, using a structural probing method.

Open-Ended Question Answering

Probing Taxonomic and Thematic Embeddings for Taxonomic Information

no code implementations · 25 Jan 2023 · Filip Klubička, John D. Kelleher

Modelling taxonomic and thematic relatedness is important for building AI with comprehensive natural language understanding.

Natural Language Understanding

Probing with Noise: Unpicking the Warp and Weft of Embeddings

1 code implementation · 21 Oct 2022 · Filip Klubička, John D. Kelleher

Improving our understanding of how information is encoded in vector space can yield valuable interpretability insights.

Sentence

Is it worth it? Budget-related evaluation metrics for model selection

no code implementations · LREC 2018 · Filip Klubička, Giancarlo D. Salton, John D. Kelleher

Linguistic resources are often created using a machine learning model that filters content before it reaches a human annotator and is incorporated into the final resource.

Model Selection

Examining a hate speech corpus for hate speech detection and popularity prediction

1 code implementation · 12 May 2018 · Filip Klubička, Raquel Fernández

While research on hate speech grows more relevant by the day, most of it remains focused on hate speech detection.

Hate Speech Detection

Quantitative Fine-Grained Human Evaluation of Machine Translation Systems: a Case Study on English to Croatian

1 code implementation · 2 Feb 2018 · Filip Klubička, Antonio Toral, Víctor M. Sánchez-Cartagena

This paper presents a quantitative fine-grained manual evaluation approach to comparing the performance of different machine translation (MT) systems.

Machine Translation · Sentence

Fine-grained human evaluation of neural versus phrase-based machine translation

1 code implementation · 14 Jun 2017 · Filip Klubička, Antonio Toral, Víctor M. Sánchez-Cartagena

We compare three approaches to machine translation (pure phrase-based, factored phrase-based and neural) through a fine-grained manual evaluation based on error annotation of the systems' outputs.

Machine Translation · Translation
