Search Results for author: Yida Mu

Found 12 papers, 4 papers with code

Addressing Topic Granularity and Hallucination in Large Language Models for Topic Modelling

1 code implementation · 1 May 2024 · Yida Mu, Peizhen Bai, Kalina Bontcheva, Xingyi Song

In this paper, we focus on addressing the issues of topic granularity and hallucinations for better LLM-based topic modelling.

Hallucination · Topic Classification

Large Language Models Offer an Alternative to the Traditional Approach of Topic Modelling

no code implementations · 24 Mar 2024 · Yida Mu, Chun Dong, Kalina Bontcheva, Xingyi Song

Topic modelling, as a well-established unsupervised technique, has found extensive use in automatically detecting significant topics within a corpus of documents.

Don't Waste a Single Annotation: Improving Single-Label Classifiers Through Soft Labels

no code implementations · 9 Nov 2023 · Ben Wu, Yue Li, Yida Mu, Carolina Scarton, Kalina Bontcheva, Xingyi Song

In this paper, we address the limitations of the common data annotation and training methods for objective single-label classification tasks.

Examining Temporal Bias in Abusive Language Detection

no code implementations · 25 Sep 2023 · Mali Jin, Yida Mu, Diana Maynard, Kalina Bontcheva

The use of abusive language online has become an increasingly pervasive problem that harms both individuals and society, with effects ranging from psychological damage to escalation into real-life violence and even death.

Abusive Language

Examining the Limitations of Computational Rumor Detection Models Trained on Static Datasets

no code implementations · 20 Sep 2023 · Yida Mu, Xingyi Song, Kalina Bontcheva, Nikolaos Aletras

A crucial aspect of a rumor detection model is its ability to generalize, particularly its ability to detect emerging, previously unknown rumors.

Navigating Prompt Complexity for Zero-Shot Classification: A Study of Large Language Models in Computational Social Science

no code implementations · 23 May 2023 · Yida Mu, Ben P. Wu, William Thorne, Ambrose Robinson, Nikolaos Aletras, Carolina Scarton, Kalina Bontcheva, Xingyi Song

Instruction-tuned Large Language Models (LLMs) have exhibited impressive language understanding and the capacity to generate responses that follow specific prompts.

Zero-Shot Learning

A Large-Scale Comparative Study of Accurate COVID-19 Information versus Misinformation

no code implementations · 10 Apr 2023 · Yida Mu, Ye Jiang, Freddy Heppell, Iknoor Singh, Carolina Scarton, Kalina Bontcheva, Xingyi Song

This motivated us to carry out a comparative study of the characteristics of COVID-19 misinformation versus those of accurate COVID-19 information through a large-scale computational analysis of over 242 million tweets.
Examining Temporalities on Stance Detection towards COVID-19 Vaccination

no code implementations · 10 Apr 2023 · Yida Mu, Mali Jin, Kalina Bontcheva, Xingyi Song

It is crucial for policymakers to have a comprehensive understanding of the public's stance towards vaccination on a large scale.

Stance Classification · Stance Detection

VaxxHesitancy: A Dataset for Studying Hesitancy towards COVID-19 Vaccination on Twitter

1 code implementation · 17 Jan 2023 · Yida Mu, Mali Jin, Charlie Grimshaw, Carolina Scarton, Kalina Bontcheva, Xingyi Song

Annotated data is also necessary for training data-driven models for more nuanced analysis of attitudes towards vaccination.

Language Modelling

Identifying and Characterizing Active Citizens who Refute Misinformation in Social Media

1 code implementation · 21 Apr 2022 · Yida Mu, Pu Niu, Nikolaos Aletras

The spread of misinformation on social media has given rise to a new form of active citizen who focuses on tackling the problem by refuting posts that may contain misinformation.
