Search Results for author: Michal Štefánik

Found 14 papers, 7 papers with code

Adaptor: Objective-Centric Adaptation Framework for Language Models

1 code implementation • ACL 2022 • Michal Štefánik, Vít Novotný, Nikola Groverová, Petr Sojka

Progress in natural language processing research is catalyzed by the possibilities that widespread software frameworks provide.

Unsupervised Domain Adaptation
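
The framework's central idea is to organize adaptation around training objectives rather than models: several objectives share one backbone, and a schedule decides which objective produces each step's loss. Below is a minimal sketch of that pattern in plain PyTorch; the class names, the round-robin schedule, and the toy batches are illustrative assumptions, not Adaptor's actual API.

```python
import itertools
import torch
import torch.nn as nn
import torch.nn.functional as F

class Backbone(nn.Module):
    """Tiny stand-in for a shared pretrained language model."""
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)

    def forward(self, ids):
        states, _ = self.encoder(self.embed(ids))
        return states  # (batch, seq, dim)

class MLMObjective(nn.Module):
    """Unsupervised objective: predict tokens from hidden states (toy)."""
    def __init__(self, backbone, vocab=1000, dim=64):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Linear(dim, vocab)

    def loss(self):
        ids = torch.randint(0, 1000, (8, 16))       # toy batch
        logits = self.head(self.backbone(ids))
        return F.cross_entropy(logits.flatten(0, 1), ids.flatten())

class ClassificationObjective(nn.Module):
    """Supervised objective sharing the same backbone."""
    def __init__(self, backbone, classes=2, dim=64):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Linear(dim, classes)

    def loss(self):
        ids = torch.randint(0, 1000, (8, 16))       # toy batch
        labels = torch.randint(0, 2, (8,))
        pooled = self.backbone(ids).mean(dim=1)
        return F.cross_entropy(self.head(pooled), labels)

backbone = Backbone()
objectives = [MLMObjective(backbone), ClassificationObjective(backbone)]
# deduplicate shared parameters before handing them to one optimizer
params = {id(p): p for o in objectives for p in o.parameters()}
opt = torch.optim.AdamW(params.values(), lr=1e-4)

schedule = itertools.cycle(objectives)              # round-robin "schedule"
for step in range(10):
    loss = next(schedule).loss()
    opt.zero_grad()
    loss.backward()
    opt.step()
```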

Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems

1 code implementation • 24 May 2023 • Marek Kadlčík, Michal Štefánik, Ondřej Sotolář, Vlastimil Martinek

We address this deficiency by creating Calc-X, a collection of datasets that demonstrates the appropriate use of a calculator in reasoning chains.

Arithmetic Reasoning • GSM8K +1
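
The datasets interleave free-text reasoning with explicit calculator calls that a host system evaluates and splices back into the chain. Here is a minimal sketch of such a host-side loop, with a stub in place of the model; the tag names are loosely modeled on the paper's gadget markup and should be treated as assumptions, not the released Calcformers interface.

```python
import ast
import operator as op
import re

OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
       ast.Div: op.truediv, ast.Pow: op.pow, ast.USub: op.neg}

def calc(expr: str):
    """Safely evaluate an arithmetic expression by walking its AST."""
    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](ev(node.operand))
        raise ValueError(f"unsupported expression: {node!r}")
    return ev(ast.parse(expr, mode="eval").body)

def generate_step(prompt: str) -> str:
    """Stub standing in for model.generate(); stops after a gadget call."""
    return "The total is <gadget>2 * (3 + 4)</gadget>"

prompt = "Q: Tom buys 2 packs of 3 + 4 apples. How many apples?\nA:"
chunk = generate_step(prompt)
m = re.search(r"<gadget>(.*?)</gadget>", chunk)
if m:
    result = calc(m.group(1))                     # host evaluates the call
    prompt += chunk + f"<output>{result}</output>"  # splice result back in
print(prompt)
```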

Text classification with word embedding regularization and soft similarity measure

1 code implementation • 10 Mar 2020 • Vít Novotný, Eniafe Festus Ayetiran, Michal Štefánik, Petr Sojka

In our work, we investigate the individual and joint effects of two word embedding regularization techniques on the document processing speed and the text classification performance of the soft cosine measure (SCM) and the word mover's distance (WMD).

Document Classification • General Classification +4
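
For reference, the soft cosine measure generalizes cosine similarity with a term-similarity matrix built from word embeddings. A small numpy sketch follows; the vocabulary and embeddings are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["cat", "dog", "banana", "apple"]
emb = rng.normal(size=(len(vocab), 8))           # toy word embeddings
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

S = emb @ emb.T                                   # term-term cosine similarities
S = np.where(S > 0.0, S, 0.0)                     # keep non-negative entries only
np.fill_diagonal(S, 1.0)

def soft_cosine(x, y, S):
    """SCM(x, y) = x.S.y / (sqrt(x.S.x) * sqrt(y.S.y))."""
    num = x @ S @ y
    den = np.sqrt(x @ S @ x) * np.sqrt(y @ S @ y)
    return num / den

x = np.array([1.0, 0.0, 0.0, 1.0])                # bag of words: "cat apple"
y = np.array([0.0, 1.0, 1.0, 0.0])                # bag of words: "dog banana"
print(soft_cosine(x, y, S))   # nonzero when terms are related, unlike plain cosine
```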

When FastText Pays Attention: Efficient Estimation of Word Representations using Constrained Positional Weighting

1 code implementation • 19 Apr 2021 • Vít Novotný, Michal Štefánik, Eniafe Festus Ayetiran, Petr Sojka, Radim Řehůřek

In 2018, Mikolov et al. introduced the positional language model, which shares characteristics with attention-based neural machine translation models and achieved state-of-the-art performance on the intrinsic word analogy task.

Language Modelling • Machine Translation +1
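
The positional model weights each context word's vector by a position-specific vector before averaging, so the model can treat window positions differently. A toy numpy sketch of that mechanism; dimensions and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, window = 8, 2                        # toy embedding size, context window ±2
vocab = {"quick": 0, "brown": 1, "jumps": 2, "over": 3}
U = rng.normal(size=(len(vocab), dim))    # context word vectors
D = rng.normal(size=(2 * window, dim))    # one weight vector per window position

def positional_context(context_ids):
    """Average of position-weighted context vectors (Mikolov et al., 2018)."""
    assert len(context_ids) == 2 * window
    weighted = [D[p] * U[w] for p, w in enumerate(context_ids)]
    return np.mean(weighted, axis=0)

ctx = [vocab[w] for w in ["quick", "brown", "jumps", "over"]]  # "fox" masked out
print(positional_context(ctx).shape)      # (8,), used to predict the masked word
```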

Resources and Few-shot Learners for In-context Learning in Slavic Languages

1 code implementation • 4 Apr 2023 • Michal Štefánik, Marek Kadlčík, Piotr Gramacki, Petr Sojka

Despite rapid progress in creating accurate and compact in-context learners, most recent work focuses on in-context learning (ICL) for tasks in English.

In-Context Learning
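
At inference time, in-context learning reduces to assembling labeled demonstrations into a prompt for the model to complete. A minimal, hypothetical prompt-builder sketch; the template and examples are invented for illustration.

```python
def build_icl_prompt(demos, query, template="{text} => {label}"):
    """Format k demonstrations plus an unlabeled query as one prompt."""
    lines = [template.format(**d) for d in demos]
    lines.append(template.format(text=query, label="").rstrip())
    return "\n".join(lines)

demos = [
    {"text": "Skvelý film, odporúčam.", "label": "positive"},
    {"text": "Strata času.", "label": "negative"},
]
print(build_icl_prompt(demos, "Celkom dobré, ale dlhé."))
```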

Regressive Ensemble for Machine Translation Quality Evaluation

1 code implementation • WMT (EMNLP) 2021 • Michal Štefánik, Vít Novotný, Petr Sojka

This work introduces a simple regressive ensemble for evaluating machine translation quality based on a set of novel and established metrics.

Machine Translation • Translation
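
The approach treats per-segment metric scores as features of a regressor fitted to human judgments. A sketch of that pipeline, with two toy stand-in "metrics" in place of the paper's metric set and invented scores for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge

def length_ratio(hyp, ref):
    """Toy metric: hypothesis/reference length ratio."""
    return len(hyp.split()) / max(len(ref.split()), 1)

def token_overlap(hyp, ref):
    """Toy metric: Jaccard overlap of token sets."""
    h, r = set(hyp.split()), set(ref.split())
    return len(h & r) / max(len(h | r), 1)

data = [  # (hypothesis, reference, human score), illustrative values
    ("the cat sat on the mat", "the cat sat on the mat", 1.00),
    ("cat mat", "the cat sat on the mat", 0.40),
    ("a dog stood on a rug", "the cat sat on the mat", 0.15),
]
X = np.array([[length_ratio(h, r), token_overlap(h, r)] for h, r, _ in data])
y = np.array([s for _, _, s in data])

model = Ridge(alpha=1.0).fit(X, y)        # the "regressive ensemble"
print(model.predict(X))                   # approximates human judgments
```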

WebMIaS on Docker: Deploying Math-Aware Search in a Single Line of Code

no code implementations • 1 Jun 2021 • Dávid Lupták, Vít Novotný, Michal Štefánik, Petr Sojka

Math information retrieval (MIR) search engines are absent from widespread production use, even though documents in STEM fields contain many mathematical formulae that are sometimes more important than the surrounding text for understanding.

Math Retrieval

Methods for Estimating and Improving Robustness of Language Models

no code implementations • 16 Jun 2022 • Michal Štefánik

Despite their outstanding performance, large language models (LLMs) suffer from notorious flaws related to their preference for simple, surface-level textual relations over the full semantic complexity of the problem.

Soft Alignment Objectives for Robust Adaptation of Language Generation

1 code implementation • 29 Nov 2022 • Michal Štefánik, Marek Kadlčík, Petr Sojka

Domain adaptation allows generative language models to address specific flaws caused by the domain shift of their application.

Domain Adaptation • Machine Translation +4
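
One plausible reading of a soft objective is to replace one-hot targets with a distribution spread over tokens that are embedding-similar to the gold token. The sketch below implements that simplified reading, not the paper's exact loss; all names and dimensions are assumptions.

```python
import torch
import torch.nn.functional as F

vocab, dim = 50, 16
emb = F.normalize(torch.randn(vocab, dim), dim=1)   # toy token embeddings

def soft_targets(gold_ids, temperature=0.1):
    """Softmax over embedding similarity to the gold token."""
    sims = emb[gold_ids] @ emb.T                    # (batch, vocab)
    return F.softmax(sims / temperature, dim=-1)

logits = torch.randn(4, vocab, requires_grad=True)  # stand-in model outputs
gold = torch.randint(0, vocab, (4,))

targets = soft_targets(gold)                        # soft, not one-hot
loss = F.kl_div(F.log_softmax(logits, dim=-1), targets, reduction="batchmean")
loss.backward()
print(float(loss))
```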

Can In-context Learners Learn a Reasoning Concept from Demonstrations?

no code implementations • 3 Dec 2022 • Michal Štefánik, Marek Kadlčík

We find that most recent in-context learners cannot consistently benefit from the demonstrated concepts, irrespective of model size.

Few-Shot Learning • In-Context Learning

Think Twice: Measuring the Efficiency of Eliminating Prediction Shortcuts of Question Answering Models

no code implementations • 11 May 2023 • Lukáš Mikula, Michal Štefánik, Marek Petrovič, Petr Sojka

We find that while existing debiasing methods can mitigate reliance on a chosen spurious feature, the OOD performance gains of these methods cannot be explained by mitigated reliance on biased features, suggesting that biases are shared among different QA datasets.

Question Answering

Concept-aware Training Improves In-context Learning Ability of Language Models

no code implementations • 23 May 2023 • Michal Štefánik, Marek Kadlčík

Many recent language models (LMs) of the Transformer family exhibit so-called in-context learning (ICL): the ability to modulate their function according to a task described in natural language input.

In-Context Learning

People and Places of Historical Europe: Bootstrapping Annotation Pipeline and a New Corpus of Named Entities in Late Medieval Texts

no code implementations • 26 May 2023 • Vít Novotný, Kristýna Luger, Michal Štefánik, Tereza Vrabcová, Aleš Horák

Although pre-trained named entity recognition (NER) models are highly accurate on modern corpora, they underperform on historical texts due to differences in language and OCR errors.

Information Retrieval • named-entity-recognition +6
