Search Results for author: Bolei Ma

Found 9 papers, 6 papers with code

Look at the Text: Instruction-Tuned Language Models are More Robust Multiple Choice Selectors than You Think

no code implementations • 12 Apr 2024 • Xinpeng Wang, Chengzhi Hu, Bolei Ma, Paul Röttger, Barbara Plank

We show that text answers are more robust to question perturbations than first-token probabilities when the first-token answers mismatch the text answers (see the sketch below).

Multiple-choice
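
The contrast between the two answer-extraction methods is easy to illustrate. Below is a minimal sketch, not the paper's evaluation code: the model name ("gpt2"), prompt template, and option set are placeholder assumptions.

```python
# Sketch: compare a first-token-probability answer with a parsed text answer
# for one multiple-choice prompt. Model, prompt, and options are placeholders,
# not the paper's exact setup (which uses instruction-tuned LLMs).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = (
    "Question: Which planet is closest to the sun?\n"
    "A. Venus\nB. Mercury\nC. Mars\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt")
options = ["A", "B", "C"]

# (1) First-token answer: argmax over the option letters' next-token logits.
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]
option_ids = [tokenizer.encode(" " + o)[0] for o in options]  # leading space matters for BPE
first_token_answer = options[int(torch.stack([logits[i] for i in option_ids]).argmax())]

# (2) Text answer: generate a continuation and parse the first option letter.
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=10, do_sample=False)
completion = tokenizer.decode(out[0, inputs["input_ids"].shape[1]:])
text_answer = next((o for o in options if o in completion), None)

# The paper's robustness analysis concerns exactly the cases where these two disagree.
print(first_token_answer, text_answer, first_token_answer == text_answer)
```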

Decomposed Prompting: Unveiling Multilingual Linguistic Structure Knowledge in English-Centric Large Language Models

no code implementations • 28 Feb 2024 • Ercong Nie, Shuzhou Yuan, Bolei Ma, Helmut Schmid, Michael Färber, Frauke Kreuter, Hinrich Schütze

Despite the predominance of English in their training data, English-centric Large Language Models (LLMs) like GPT-3 and LLaMA display a remarkable ability to perform multilingual tasks, raising questions about the depth and nature of their cross-lingual capabilities.

Part-Of-Speech Tagging • Sentence

Why Lift so Heavy? Slimming Large Language Models by Cutting Off the Layers

no code implementations • 18 Feb 2024 • Shuzhou Yuan, Ercong Nie, Bolei Ma, Michael Färber

Large Language Models (LLMs) possess outstanding capabilities in addressing various natural language processing (NLP) tasks.

Text Classification

ToPro: Token-Level Prompt Decomposition for Cross-Lingual Sequence Labeling Tasks

1 code implementation • 29 Jan 2024 • Bolei Ma, Ercong Nie, Shuzhou Yuan, Helmut Schmid, Michael Färber, Frauke Kreuter, Hinrich Schütze

However, most previous studies primarily focused on sentence-level classification tasks, and only a few considered token-level labeling tasks such as Named Entity Recognition (NER) and Part-of-Speech (POS) tagging (see the sketch of token-level prompts below).

Benchmarking • In-Context Learning • +8
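
The title's token-level prompt decomposition can be illustrated with a toy sketch. The template wording, label set, and helper function below are assumptions for illustration, not ToPro's exact prompts.

```python
# Toy sketch of token-level prompt decomposition for POS tagging: the input
# sentence is split into tokens, and a separate cloze-style prompt is built
# for each token. Template and label set are illustrative assumptions.
POS_LABELS = ["NOUN", "VERB", "ADJ", "DET", "ADP", "PRON", "PUNCT"]

def decompose_into_token_prompts(sentence: str) -> list[str]:
    tokens = sentence.split()
    template = (
        'Sentence: "{sentence}"\n'
        'In this sentence, the word "{token}" is a [MASK].'
    )
    return [template.format(sentence=sentence, token=t) for t in tokens]

for p in decompose_into_token_prompts("The cat sleeps"):
    print(p, end="\n\n")
# Each prompt would then be scored with a pretrained LM, taking the label
# word (e.g. from POS_LABELS) with the highest [MASK] probability as the tag.
```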

Annotation Sensitivity: Training Data Collection Methods Affect Model Performance

1 code implementation • 23 Nov 2023 • Christoph Kern, Stephanie Eckman, Jacob Beck, Rob Chew, Bolei Ma, Frauke Kreuter

We introduce the term annotation sensitivity to refer to the impact of annotation data collection methods on the annotations themselves and on downstream model performance and predictions.

Baby's CoThought: Leveraging Large Language Models for Enhanced Reasoning in Compact Models

1 code implementation • 3 Aug 2023 • Zheyu Zhang, Han Yang, Bolei Ma, David Rügamer, Ercong Nie

Large Language Models (LLMs) demonstrate remarkable performance on a variety of natural language understanding (NLU) tasks, primarily due to their in-context learning ability.

In-Context Learning • Natural Language Understanding • +1

Is Prompt-Based Finetuning Always Better than Vanilla Finetuning? Insights from Cross-Lingual Language Understanding

1 code implementation • 15 Jul 2023 • Bolei Ma, Ercong Nie, Helmut Schmid, Hinrich Schütze

We conduct comprehensive experiments on diverse cross-lingual language understanding tasks (sentiment classification, paraphrase identification, and natural language inference) and empirically analyze how prompt-based finetuning performance varies in cross-lingual transfer across few-shot and full-data settings (see the sketch contrasting the two setups below).

Natural Language Inference • Natural Language Understanding • +4
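
A minimal sketch contrasting the two setups for sentiment classification with a masked LM. The model name, cloze template, and verbalizer below are illustrative assumptions, not the paper's configuration.

```python
# Sketch contrasting vanilla finetuning (randomly initialized classification
# head) with prompt-based finetuning (cloze template + verbalizer reusing the
# MLM head). Model, template, and verbalizer are illustrative assumptions.
import torch
from transformers import (
    AutoModelForMaskedLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
)

model_name = "xlm-roberta-base"  # a typical multilingual encoder; an assumption
tokenizer = AutoTokenizer.from_pretrained(model_name)
text = "The movie was wonderful."

# (1) Vanilla finetuning: a new head maps the pooled encoding to label logits;
# it starts untrained and must be learned from labeled task data.
clf = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
clf_logits = clf(**tokenizer(text, return_tensors="pt")).logits

# (2) Prompt-based finetuning: the pretrained MLM head scores label words
# at a mask position inside a cloze template.
mlm = AutoModelForMaskedLM.from_pretrained(model_name)
verbalizer = {"bad": 0, "great": 1}  # label words -> classes; an assumption
prompt = f"{text} It was {tokenizer.mask_token}."
enc = tokenizer(prompt, return_tensors="pt")
mask_pos = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
with torch.no_grad():
    mask_logits = mlm(**enc).logits[0, mask_pos]
word_ids = [tokenizer(w, add_special_tokens=False)["input_ids"][0] for w in verbalizer]
pred = list(verbalizer.values())[int(mask_logits[word_ids].argmax())]
print(pred)
```

Because setup (2) reuses pretrained MLM weights instead of a fresh head, it is often the stronger option in few-shot regimes, which is the kind of variation the paper analyzes.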

What cleaves? Is proteasomal cleavage prediction reaching a ceiling?

1 code implementation • 24 Oct 2022 • Ingo Ziegler, Bolei Ma, Ercong Nie, Bernd Bischl, David Rügamer, Benjamin Schubert, Emilio Dorigatti

While direct identification of proteasomal cleavage in vitro is cumbersome and low-throughput, it is possible to implicitly infer cleavage events from the termini of MHC-presented epitopes, which can be detected in large amounts thanks to recent advances in high-throughput MHC ligandomics (see the toy sketch below).

Benchmarking • Denoising
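
The implicit-labeling idea in the snippet can be sketched in a few lines: treat the position after an epitope's C-terminal residue as a positive cleavage site, and sample another position as a negative. The window size, negative sampling, and toy sequences below are assumptions, not the paper's data pipeline.

```python
# Toy sketch of inferring cleavage training examples from epitope termini:
# the position just after an MHC-presented epitope's C-terminal residue is
# labeled a positive cleavage site; a random other position serves as a
# negative. Window size and sampling are assumptions for illustration.
import random

def cleavage_examples(protein: str, epitope: str, window: int = 4):
    start = protein.find(epitope)
    if start == -1:
        return []
    c_term = start + len(epitope)  # cleavage occurs after the C-terminal residue

    def context(pos: int) -> str:
        # sequence window flanking a candidate cleavage position
        return protein[max(0, pos - window): pos + window]

    examples = [(context(c_term), 1)]  # positive: observed terminus
    neg = random.choice([p for p in range(window, len(protein) - window) if p != c_term])
    examples.append((context(neg), 0))  # negative: assumed non-cleaved position
    return examples

protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # toy sequence
print(cleavage_examples(protein, "AKQRQISFV"))
```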
