Search Results for author: Helmut Schmid

Found 28 papers, 6 papers with code

Why don’t people use character-level machine translation?

no code implementations · Findings (ACL) 2022 · Jindřich Libovický, Helmut Schmid, Alexander Fraser

We present a literature and empirical survey that critically assesses the state of the art in character-level modeling for machine translation (MT).

Machine Translation · Translation

Decomposed Prompting: Unveiling Multilingual Linguistic Structure Knowledge in English-Centric Large Language Models

no code implementations · 28 Feb 2024 · Ercong Nie, Shuzhou Yuan, Bolei Ma, Helmut Schmid, Michael Färber, Frauke Kreuter, Hinrich Schütze

Despite the predominance of English in their training data, English-centric Large Language Models (LLMs) like GPT-3 and LLaMA display a remarkable ability to perform multilingual tasks, raising questions about the depth and nature of their cross-lingual capabilities.

Part-Of-Speech Tagging · Sentence

GNNavi: Navigating the Information Flow in Large Language Models by Graph Neural Network

no code implementations · 18 Feb 2024 · Shuzhou Yuan, Ercong Nie, Michael Färber, Helmut Schmid, Hinrich Schütze

Large Language Models (LLMs) exhibit strong In-Context Learning (ICL) capabilities when prompts with demonstrations are applied to them.

In-Context Learning · Text Classification · +1
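
As background for the in-context learning setup mentioned above, here is a minimal sketch of how a prompt with labeled demonstrations is assembled for a classification query; the task, demonstrations, and template are invented for illustration and are not taken from the paper:

# Build an ICL prompt: labeled demonstrations followed by the unlabeled query.
demonstrations = [
    ("The plot was dull and the acting wooden.", "negative"),
    ("A warm, funny, and moving film.", "positive"),
]

def build_icl_prompt(demos, query):
    """Concatenate demonstration pairs, then leave the query's label blank."""
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in demos]
    lines.append(f"Review: {query}\nSentiment:")  # the LLM completes the label
    return "\n\n".join(lines)

print(build_icl_prompt(demonstrations, "I would happily watch it again."))

The resulting string is what would be sent to an LLM, whose continuation ("positive" here, one would hope) is read off as the predicted label.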

ToPro: Token-Level Prompt Decomposition for Cross-Lingual Sequence Labeling Tasks

1 code implementation · 29 Jan 2024 · Bolei Ma, Ercong Nie, Shuzhou Yuan, Helmut Schmid, Michael Färber, Frauke Kreuter, Hinrich Schütze

However, most previous studies primarily focused on sentence-level classification tasks, and only a few considered token-level labeling tasks such as Named Entity Recognition (NER) and Part-of-Speech (POS) tagging.

Benchmarking · In-Context Learning · +8

Unleashing the Multilingual Encoder Potential: Boosting Zero-Shot Performance via Probability Calibration

1 code implementation · 8 Oct 2023 · Ercong Nie, Helmut Schmid, Hinrich Schütze

Pretrained multilingual encoder models can directly perform zero-shot multilingual tasks or linguistic probing by reformulating the input examples into cloze-style prompts.

Position
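
To make the cloze-prompt idea concrete, here is a hedged sketch of zero-shot classification with a multilingual masked LM plus one simple calibration step (dividing label-word probabilities by those obtained from a content-free input). The model name, template, label words, and this particular calibration recipe are assumptions for illustration, not the paper's exact method:

# Zero-shot cloze-style classification with a multilingual masked LM (sketch).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")
label_words = {"positive": "good", "negative": "bad"}

def word_id(word):
    # First sub-token id of a label word (simplification: assumes one piece).
    return tokenizer(" " + word, add_special_tokens=False).input_ids[0]

def label_scores(text):
    # Score each label word at the <mask> position of a cloze-style prompt.
    prompt = f"{text} Overall it was {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        probs = model(**inputs).logits[0, mask_pos].softmax(-1)
    return {lab: probs[word_id(w)].item() for lab, w in label_words.items()}

raw = label_scores("Der Film war absolut großartig.")  # non-English input, English template
prior = label_scores("")                               # content-free input as a crude prior
calibrated = {lab: raw[lab] / prior[lab] for lab in raw}
print(max(calibrated, key=calibrated.get))             # should favour "positive"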

Cross-Lingual Constituency Parsing for Middle High German: A Delexicalized Approach

no code implementations · 9 Aug 2023 · Ercong Nie, Helmut Schmid, Hinrich Schütze

However, training an automatic syntactic analysis system for ancient languages solely relying on annotated parse data is a formidable task due to the inherent challenges in building treebanks for such languages.

Constituency Parsing · Cross-Lingual Transfer

Is Prompt-Based Finetuning Always Better than Vanilla Finetuning? Insights from Cross-Lingual Language Understanding

1 code implementation · 15 Jul 2023 · Bolei Ma, Ercong Nie, Helmut Schmid, Hinrich Schütze

We conduct comprehensive experiments on diverse cross-lingual language understanding tasks (sentiment classification, paraphrase identification, and natural language inference) and empirically analyze the variation trends of prompt-based finetuning performance in cross-lingual transfer across different few-shot and full-data settings.

Natural Language Inference · Natural Language Understanding · +4

Cross-Lingual Retrieval Augmented Prompt for Low-Resource Languages

1 code implementation · 19 Dec 2022 · Ercong Nie, Sheng Liang, Helmut Schmid, Hinrich Schütze

Multilingual Pretrained Language Models (MPLMs) have shown their strong multilinguality in recent empirical cross-lingual transfer studies.

Cross-Lingual Transfer · Natural Language Inference · +3

Automatically Identifying Words That Can Serve as Labels for Few-Shot Text Classification

2 code implementations · COLING 2020 · Timo Schick, Helmut Schmid, Hinrich Schütze

A recent approach for few-shot text classification is to convert textual inputs to cloze questions that contain some form of task description, process them with a pretrained language model and map the predicted words to labels.

Few-Shot Text Classification · General Classification · +3
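
The automatic label-word (verbalizer) search the title refers to can be illustrated with a deliberately simplified sketch: given a few labeled examples and, for each one, the masked-LM probabilities of candidate words at the cloze position, pick for every label the word predicted most strongly on that label's examples. The probabilities below are invented stand-ins for real model outputs, and the paper's actual selection criterion is more involved:

# Pick a label word per class by averaging cloze-position probabilities (sketch).
from collections import defaultdict

# (gold label, {candidate word -> P(word at the mask) for this example})
few_shot_scores = [
    ("positive", {"great": 0.31, "terrible": 0.02, "fine": 0.11}),
    ("positive", {"great": 0.27, "terrible": 0.03, "fine": 0.15}),
    ("negative", {"great": 0.04, "terrible": 0.29, "fine": 0.08}),
    ("negative", {"great": 0.05, "terrible": 0.33, "fine": 0.10}),
]

def find_label_words(scores):
    """Return {label: word}, choosing the word with the highest mean probability."""
    sums = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(int)
    for label, dist in scores:
        counts[label] += 1
        for word, p in dist.items():
            sums[label][word] += p
    return {label: max(words, key=lambda w: words[w] / counts[label])
            for label, words in sums.items()}

print(find_label_words(few_shot_scores))  # {'positive': 'great', 'negative': 'terrible'}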
