Search Results for author: Lingzi Hong

Found 3 papers, 1 paper with code

Outcome-Constrained Large Language Models for Countering Hate Speech

no code implementations • 25 Mar 2024 • Lingzi Hong, Pengcheng Luo, Eduardo Blanco, Xiaoying Song

We first explore methods that utilize large language models (LLMs) to generate counterspeech constrained by potential conversation outcomes.

Text Generation
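
The entry above describes constraining LLM-generated counterspeech by the conversation outcome one hopes to achieve. Below is a minimal, hypothetical sketch of how such an outcome constraint could be folded into a generation prompt; the model (`gpt2`), the outcome wording, and the prompt template are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch only: outcome-constrained counterspeech generation by
# embedding a desired conversation outcome into the prompt. The model choice,
# outcome description, and prompt template are assumptions, not the paper's setup.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder model

def build_prompt(hate_post: str, desired_outcome: str) -> str:
    """Fold the target conversation outcome into the generation prompt."""
    return (
        f"Hateful post: {hate_post}\n"
        f"Write a respectful counterspeech reply so that the follow-up "
        f"conversation is {desired_outcome}.\nReply:"
    )

prompt = build_prompt(
    hate_post="<offensive message here>",
    desired_outcome="civil and de-escalating, with no further hate",
)
out = generator(prompt, max_new_tokens=60, num_return_sequences=1)
print(out[0]["generated_text"])
```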

Hate Cannot Drive out Hate: Forecasting Conversation Incivility following Replies to Hate Speech

no code implementations • 8 Dec 2023 • Xinchen Yu, Eduardo Blanco, Lingzi Hong

A linguistic analysis uncovers the differences in the language of replies that elicit follow-up conversations with high and low incivility.
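
The entry above forecasts the incivility of the conversation that follows a reply to hate speech. The sketch below shows one generic way to frame that task as supervised text classification over reply language (TF-IDF features plus logistic regression); the toy replies and labels are invented for illustration and do not reflect the paper's dataset or models.

```python
# Illustrative sketch only: framing incivility forecasting as supervised text
# classification over replies. The toy replies, labels, and feature choice are
# assumptions for demonstration, not the paper's data or approach.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical replies to hate speech, labeled by whether the follow-up
# conversation turned out highly uncivil (1) or not (0).
replies = [
    "That's a hateful thing to say; please reconsider.",
    "You're an idiot and everyone knows it.",
    "I understand you're upset, but this targets people unfairly.",
    "Get lost, nobody wants you here.",
]
high_incivility = [0, 1, 0, 1]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(replies, high_incivility)

new_reply = "Please stop, this kind of language hurts real people."
prob = model.predict_proba([new_reply])[0, 1]
print(f"Predicted probability of a highly uncivil follow-up: {prob:.2f}")
```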
