Search Results for author: Taiqi He

Found 6 papers, 2 papers with code

Constructions Are So Difficult That Even Large Language Models Get Them Right for the Wrong Reasons

1 code implementation • 26 Mar 2024 • Shijia Zhou, Leonie Weissweiler, Taiqi He, Hinrich Schütze, David R. Mortensen, Lori Levin

In this paper, we make a contribution that can be understood from two perspectives: from an NLP perspective, we introduce a small challenge dataset for NLI with large lexical overlap, which minimises the possibility of models discerning entailment solely based on token distinctions, and show that GPT-4 and Llama 2 fail it with strong bias.
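To make the "large lexical overlap" idea concrete, here is a minimal, hypothetical sketch (not the authors' released dataset or evaluation code) of how such NLI pairs can be screened: premise and hypothesis share most of their tokens, so a model cannot read the label off token differences alone. The example pairs are invented around a "so X that Y" construction.

```python
import re

def token_overlap(premise: str, hypothesis: str) -> float:
    """Jaccard overlap between the word sets of two sentences."""
    p = set(re.findall(r"\w+", premise.lower()))
    h = set(re.findall(r"\w+", hypothesis.lower()))
    return len(p & h) / len(p | h)

# Two invented pairs: nearly the same words, but only the first
# premise actually entails the hypothesis.
pairs = [
    ("The test was so hard that every student failed.",
     "The test was hard and every student failed.", "entailment"),
    ("The test was hard, so every student that failed retook it.",
     "The test was hard and every student failed.", "non-entailment"),
]

for premise, hypothesis, label in pairs:
    print(f"overlap={token_overlap(premise, hypothesis):.2f}  gold={label}")
```

A challenge set built this way would keep only high-overlap items and pose them to a model such as GPT-4 or Llama 2 as standard NLI queries.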

Wav2Gloss: Generating Interlinear Glossed Text from Speech

no code implementations • 19 Mar 2024 • Taiqi He, Kwanghee Choi, Lindia Tjuatja, Nathaniel R. Robinson, Jiatong Shi, Shinji Watanabe, Graham Neubig, David R. Mortensen, Lori Levin

Thousands of the world's languages are in danger of extinction, a tremendous threat to cultural identities and human language diversity.

GlossLM: Multilingual Pretraining for Low-Resource Interlinear Glossing

no code implementations • 11 Mar 2024 • Michael Ginn, Lindia Tjuatja, Taiqi He, Enora Rice, Graham Neubig, Alexis Palmer, Lori Levin

A key aspect of language documentation is the creation of annotated text in a format such as interlinear glossed text (IGT), which captures fine-grained morphosyntactic analyses in a morpheme-by-morpheme format.
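As a small illustration of the format just described, the sketch below models one IGT entry in Python; the class and field names are assumptions for exposition, not the GlossLM data schema.

```python
# Hypothetical sketch of an interlinear glossed text (IGT) entry: a surface
# line, a morpheme-segmented line, a per-morpheme gloss line, and a free
# translation. Field names are illustrative, not the GlossLM schema.
from dataclasses import dataclass

@dataclass
class IGTEntry:
    transcription: str    # surface form as written or transcribed
    morphemes: list[str]  # morpheme-by-morpheme segmentation
    glosses: list[str]    # one gloss per morpheme
    translation: str      # free translation

    def render(self) -> str:
        # Glossing requires strict morpheme/gloss alignment.
        assert len(self.morphemes) == len(self.glosses)
        return "\n".join((self.transcription,
                          " ".join(self.morphemes),
                          " ".join(self.glosses),
                          f"'{self.translation}'"))

# A standard Turkish example: ev-ler-im "my houses".
entry = IGTEntry(
    transcription="evlerim",
    morphemes=["ev", "-ler", "-im"],
    glosses=["house", "-PL", "-1SG.POSS"],
    translation="my houses",
)
print(entry.render())
```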

Hire a Linguist!: Learning Endangered Languages with In-Context Linguistic Descriptions

no code implementations • 28 Feb 2024 • Kexun Zhang, Yee Man Choi, Zhenqiao Song, Taiqi He, William Yang Wang, Lei Li

By contrast, we observe that 2,000 endangered languages, though lacking a large corpus, have a grammar book or a dictionary.
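A hypothetical sketch of how that observation can be exploited: rather than training on a corpus, dictionary entries and grammar notes are placed directly in the prompt as in-context linguistic descriptions. The toy language, entries, and prompt wording below are invented for illustration and are not the paper's actual pipeline.

```python
# Hypothetical sketch: build an LLM prompt from a dictionary and grammar
# notes for a language with no large corpus. All linguistic data below is
# invented for illustration.

dictionary = {
    "mi": "water (noun)",
    "kato": "drink (verb root)",
    "-na": "past-tense suffix on verbs",
}
grammar_notes = [
    "Word order is object-verb.",
    "Tense is marked by a suffix on the verb.",
]

def build_prompt(sentence: str) -> str:
    lines = ["Dictionary:"]
    lines += [f"  {form}: {meaning}" for form, meaning in dictionary.items()]
    lines.append("Grammar notes:")
    lines += [f"  - {note}" for note in grammar_notes]
    lines.append(f"Translate into English: {sentence}")
    return "\n".join(lines)

# The assembled prompt would then be sent to an LLM via its API.
print(build_prompt("mi kato-na"))
```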

Construction Grammar Provides Unique Insight into Neural Language Models

no code implementations • 4 Feb 2023 • Leonie Weissweiler, Taiqi He, Naoki Otani, David R. Mortensen, Lori Levin, Hinrich Schütze

Construction Grammar (CxG) has recently been used as the basis for probing studies that have investigated the performance of large pretrained language models (PLMs) with respect to the structure and meaning of constructions.
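As one concrete form such a probe can take (a sketch assuming the HuggingFace transformers library, not the authors' experimental setup), a masked language model can be asked to fill a slot in a construction such as the comparative correlative, and its top predictions checked for the comparative form the construction requires.

```python
# Hypothetical probing sketch: does a pretrained masked LM respect the
# comparative-correlative construction ("the X-er ..., the Y-er ...")?
# Requires: pip install transformers torch
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# A model that has learned the construction should rank comparative
# forms (e.g. "higher", "bigger") at the top for the masked slot.
for pred in fill("The bigger the house, the [MASK] the rent."):
    print(f"{pred['token_str']:>10}  {pred['score']:.3f}")
```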

