no code implementations • WMT (EMNLP) 2020 • Jindřich Libovický, Viktor Hangya, Helmut Schmid, Alexander Fraser
We present our systems for the WMT20 Very Low Resource MT Task for translation between German and Upper Sorbian.
no code implementations • Findings (ACL) 2022 • Jindřich Libovický, Helmut Schmid, Alexander Fraser
We present a literature and empirical survey that critically assesses the state of the art in character-level modeling for machine translation (MT).
no code implementations • 28 Feb 2024 • Ercong Nie, Shuzhou Yuan, Bolei Ma, Helmut Schmid, Michael Färber, Frauke Kreuter, Hinrich Schütze
Despite the predominance of English in their training data, English-centric Large Language Models (LLMs) like GPT-3 and LLaMA display a remarkable ability to perform multilingual tasks, raising questions about the depth and nature of their cross-lingual capabilities.
no code implementations • 18 Feb 2024 • Shuzhou Yuan, Ercong Nie, Michael Färber, Helmut Schmid, Hinrich Schütze
Large Language Models (LLMs) exhibit strong In-Context Learning (ICL) capabilities when prompted with demonstrations.
1 code implementation • 29 Jan 2024 • Bolei Ma, Ercong Nie, Shuzhou Yuan, Helmut Schmid, Michael Färber, Frauke Kreuter, Hinrich Schütze
However, most previous studies have focused primarily on sentence-level classification tasks; only a few have considered token-level labeling tasks such as Named Entity Recognition (NER) and Part-of-Speech (POS) tagging.
1 code implementation • 8 Oct 2023 • Ercong Nie, Helmut Schmid, Hinrich Schütze
Pretrained multilingual encoder models can directly perform zero-shot multilingual tasks or linguistic probing by reformulating the input examples into cloze-style prompts.
no code implementations • 9 Aug 2023 • Ercong Nie, Helmut Schmid, Hinrich Schütze
However, training an automatic syntactic analysis system for ancient languages solely relying on annotated parse data is a formidable task due to the inherent challenges in building treebanks for such languages.
1 code implementation • 15 Jul 2023 • Bolei Ma, Ercong Nie, Helmut Schmid, Hinrich Schütze
We conduct comprehensive experiments on diverse cross-lingual language understanding tasks (sentiment classification, paraphrase identification, and natural language inference) and empirically analyze the variation trends of prompt-based finetuning performance in cross-lingual transfer across different few-shot and full-data settings.
1 code implementation • 20 May 2023 • Ayyoob Imani, Peiqin Lin, Amir Hossein Kargaran, Silvia Severini, Masoud Jalili Sabet, Nora Kassner, Chunlan Ma, Helmut Schmid, André F. T. Martins, François Yvon, Hinrich Schütze
The NLP community has mainly focused on scaling Large Language Models (LLMs) vertically, i.e., making them better for about 100 languages.
1 code implementation • 19 Dec 2022 • Ercong Nie, Sheng Liang, Helmut Schmid, Hinrich Schütze
Multilingual Pretrained Language Models (MPLMs) have shown their strong multilinguality in recent empirical cross-lingual transfer studies.
no code implementations • 15 Oct 2021 • Jindřich Libovický, Helmut Schmid, Alexander Fraser
We present a literature and empirical survey that critically assesses the state of the art in character-level modeling for machine translation (MT).
2 code implementations • COLING 2020 • Timo Schick, Helmut Schmid, Hinrich Schütze
A recent approach for few-shot text classification is to convert textual inputs to cloze questions that contain some form of task description, process them with a pretrained language model and map the predicted words to labels.
no code implementations • CL 2017 • Hassan Sajjad, Helmut Schmid, Alexander Fraser, Hinrich Schütze
After training, the unlabeled data is disambiguated based on the posterior probabilities of the two sub-models.