no code implementations • EMNLP (MRL) 2021 • Réka Cserháti, Gábor Berend
In this work, we analyze the performance and properties of cross-lingual word embedding models created by mapping-based alignment methods.
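The classic mapping-based alignment method such analyses typically cover is orthogonal Procrustes over a seed dictionary. A minimal numpy sketch with toy vectors (the hidden rotation and seed pairs are synthetic, not data from the paper):

```python
import numpy as np

def procrustes_align(X, Y):
    """Learn an orthogonal W minimizing ||XW - Y||_F via SVD of X^T Y,
    the standard mapping-based alignment objective."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
Y = rng.normal(size=(50, 4))                      # toy "target-language" vectors
R_true = np.linalg.qr(rng.normal(size=(4, 4)))[0] # hidden orthogonal rotation
X = Y @ R_true.T                                  # "source" = rotated target

W = procrustes_align(X, Y)
print(np.allclose(X @ W, Y, atol=1e-8))           # → True (rotation recovered)
```

Real cross-lingual embeddings are only approximately related by such a rotation, which is exactly why the properties of the resulting spaces merit analysis.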
no code implementations • FNP (COLING) 2020 • Zsolt Szántó, Gábor Berend
This paper introduces our efforts at the FinCausal shared task for modeling causality in financial utterances.
1 code implementation • CMCL (ACL) 2022 • Réka Cserháti, Istvan Kollath, András Kicsi, Gábor Berend
This is a hard challenge even with today’s advanced language technology methods. In our study, we create spymaster agents using four types of relatedness measures that require only a raw text corpus to produce.
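A spymaster agent of this kind needs a clue that is close to all of its team's target words and far from the opponents' words under some relatedness measure. A minimal sketch with hypothetical toy embeddings standing in for a corpus-derived measure (the words, vectors, and scoring rule here are illustrative, not the paper's four measures):

```python
import numpy as np

# Hypothetical toy embeddings standing in for corpus-derived relatedness.
vocab = {
    "water":  [0.9, 0.1, 0.0],  "ocean": [0.8, 0.2, 0.0],
    "river":  [0.85, 0.15, 0.1], "fire":  [0.0, 0.9, 0.1],
    "liquid": [0.88, 0.12, 0.05],
}
V = {w: np.array(v) / np.linalg.norm(v) for w, v in vocab.items()}

def relatedness(a, b):
    return float(V[a] @ V[b])  # cosine similarity of unit vectors

def best_clue(targets, avoid, candidates):
    # Reward the weakest link to the targets, penalize the strongest
    # link to the words the team must avoid.
    def score(c):
        return min(relatedness(c, t) for t in targets) - \
               max((relatedness(c, a) for a in avoid), default=0.0)
    return max(candidates, key=score)

clue = best_clue(targets=["ocean", "river"], avoid=["fire"],
                 candidates=["water", "liquid", "fire"])
print(clue)  # → water
```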
no code implementations • EMNLP (sustainlp) 2020 • Norbert Kis-Szabó, Gábor Berend
We propose the technique of quasi-multitask learning (Q-MTL), a simple, easy-to-implement modification of standard multitask learning in which the tasks to be modeled are identical.
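Structurally this is a shared encoder with several task-specific heads, except every head is trained on the same labels, so the heads differ only in initialization and the averaged prediction behaves like a cheap ensemble. A minimal numpy sketch on synthetic data (the architecture sizes and training loop are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy, linearly separable binary classification data (hypothetical).
X = rng.normal(size=(200, 8))
y = (X @ rng.normal(size=8) > 0).astype(float)

# Q-MTL sketch: one shared encoder, k heads -- but every "task" is
# the same task, so all heads receive identical labels.
k, hidden, lr = 3, 16, 0.5
W_shared = rng.normal(scale=0.1, size=(8, hidden))
heads = [rng.normal(scale=0.1, size=hidden) for _ in range(k)]

def head_probs(X):
    H = np.tanh(X @ W_shared)                      # shared representation
    return [1 / (1 + np.exp(-H @ w)) for w in heads]

for _ in range(500):
    H = np.tanh(X @ W_shared)
    dH = np.zeros_like(H)
    for i, w in enumerate(heads):
        p = 1 / (1 + np.exp(-H @ w))
        err = (p - y) / len(y)                     # identical targets per head
        dH += np.outer(err, w)                     # head gradients sum in encoder
        heads[i] = w - lr * (H.T @ err)
    W_shared -= lr * X.T @ (dH * (1 - H ** 2))     # backprop through tanh

# Inference: average the heads, ensemble-style.
probs = np.mean(head_probs(X), axis=0)
acc = ((probs > 0.5) == y).mean()
print(acc)
```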
1 code implementation • NAACL 2022 • Gábor Berend
In this paper, we advocate for using large pre-trained monolingual language models in cross-lingual zero-shot word sense disambiguation (WSD), coupled with a contextualized mapping mechanism.
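Once target-language contextual vectors have been mapped into the space of a sense-annotated source language, zero-shot WSD can reduce to nearest-sense-centroid classification. A toy sketch of that final step (the sense keys, centroids, and the pre-mapped query vector are all hypothetical illustrations):

```python
import numpy as np

# Hypothetical sense centroids, e.g. averaged contextual vectors of
# English sense-annotated occurrences of "bank".
sense_centroids = {
    "bank%finance":   np.array([0.9, 0.1]),
    "bank%riverside": np.array([0.1, 0.9]),
}

def zero_shot_wsd(mapped_vec):
    """Label an (already mapped) target-language context vector with
    the nearest sense centroid by cosine similarity."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(sense_centroids, key=lambda s: cos(mapped_vec, sense_centroids[s]))

# A mapped non-English contextual vector for an occurrence of "bank".
pred = zero_shot_wsd(np.array([0.2, 0.8]))
print(pred)  # → bank%riverside
```

No target-language sense annotations are needed, which is what makes the setup zero-shot.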
1 code implementation • ACL Findings 2023 • Gábor Berend
In this paper, we propose an alternative to the classic masked language modeling (MLM) pre-training paradigm, where the objective is altered from the reconstruction of the exact identity of randomly selected masked subwords to the prediction of their latent semantic properties.
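The key change is in the training target: instead of a one-hot label for the masked subword's identity, the model predicts the subword's latent semantic properties. A toy sketch of target construction, with made-up sparse codes standing in for codes obtained via dictionary learning over subword embeddings:

```python
import numpy as np

# Hypothetical sparse codes standing in for latent semantic properties
# learned from static subword embeddings.
sparse_codes = {
    "dog": np.array([0.0, 0.7, 0.0, 0.3, 0.0]),
    "cat": np.array([0.0, 0.6, 0.2, 0.0, 0.0]),
    "run": np.array([0.5, 0.0, 0.0, 0.0, 0.4]),
}

def latent_targets(token, top_k=2):
    """Replace the exact-identity MLM target with the ids of the token's
    strongest latent dimensions (its dominant sparse-code entries)."""
    code = sparse_codes[token]
    return set(np.argsort(code)[::-1][:top_k].tolist())

# "dog" and "cat" now share part of their supervision signal (atom 1),
# which a one-hot token-identity target could never express.
print(latent_targets("dog") & latent_targets("cat"))  # → {1}
```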
1 code implementation • ICLR 2020 • Gábor Berend
Finally, we release our multilingual sparse word representations for the typologically diverse set of 27 languages on which we conducted our various experiments.
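Sparse word representations of this kind are typically obtained by sparse-coding dense embeddings over a learned dictionary. A self-contained toy sketch of the coding step using ISTA (the dictionary, sparsity level, and data are synthetic assumptions, not the released resources):

```python
import numpy as np

def ista(x, D, lam=0.1, steps=200):
    """Sparse-code a dense vector x over dictionary D by solving
    min_a 0.5*||x - D a||^2 + lam*||a||_1 with proximal gradient (ISTA)."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        g = D.T @ (D @ a - x)            # gradient of the smooth part
        z = a - g / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(1)
D = rng.normal(size=(10, 30))
D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary atoms
a_true = np.zeros(30)
a_true[[3, 17]] = [1.0, -0.8]            # a 2-sparse ground-truth code
x = D @ a_true                           # toy "dense embedding"

a = ista(x, D)
print(np.linalg.norm(D @ a - x))         # small reconstruction error
```

The nonzero coordinates of `a` are what later serve as interpretable, discrete features.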
no code implementations • 25 Sep 2019 • Gábor Berend, Norbert Kis-Szabó
We propose the technique of quasi-multitask learning (Q-MTL), a simple, easy-to-implement modification of standard multitask learning in which the tasks to be modeled are identical.
no code implementations • 21 Dec 2016 • Gábor Berend
In this paper, we propose and carefully evaluate a sequence labeling framework that solely utilizes sparse indicator features derived from dense distributed word representations.
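Such indicator features arise by treating the indices of a token's nonzero sparse coefficients as binary features, much like traditional word-identity features in a linear sequence labeler. A toy sketch of the feature extraction (the tokens, codes, and feature-name scheme are hypothetical illustrations):

```python
import numpy as np

# Hypothetical sparse codes per token; in the paper these come from
# sparse-coding dense word embeddings.
codes = {
    "Paris":     np.array([0.0, 0.8, 0.0, 0.2]),
    "visited":   np.array([0.5, 0.0, 0.3, 0.0]),
    "yesterday": np.array([0.4, 0.0, 0.0, 0.6]),
}

def indicator_features(sentence, pos):
    """One binary indicator per nonzero sparse-code coordinate of the
    current token and its immediate neighbors."""
    feats = []
    for offset in (-1, 0, 1):
        i = pos + offset
        if 0 <= i < len(sentence):
            for dim in np.flatnonzero(codes[sentence[i]]):
                feats.append(f"w[{offset}]_dim{dim}")
    return feats

feats = indicator_features(["Paris", "visited", "yesterday"], 1)
print(feats)
```

These string-valued features can be fed directly to a standard CRF or perceptron sequence labeler in place of hand-crafted lexical features.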