1 code implementation • CoNLL (EMNLP) 2021 • Hoyun Song, Soo Hyun Ryu, Huije Lee, Jong Park
Because users in online communities suffer severe harm from abusive language, many researchers have attempted to detect abusive texts on social media, presenting several datasets for this task.
no code implementations • 6 Jun 2024 • Jisu Shin, Hoyun Song, Huije Lee, Soyeong Jeong, Jong C. Park
To this end, we propose a novel strategy for intuitively quantifying these social perceptions and suggest metrics that evaluate the social biases within LLMs by aggregating the diverse perceptions.
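A minimal sketch of what such aggregation could look like in practice, assuming perception scores are collected per demographic group and the spread across group averages is read as a bias indicator (the groups, scores, and metric below are hypothetical illustrations, not the paper's actual formulation):

```python
from collections import defaultdict
from statistics import mean, pstdev

# Hypothetical records: each holds a perception score an LLM assigned to a
# persona from a given demographic group (values are illustrative only).
perception_scores = [
    {"group": "group_a", "score": 0.72},
    {"group": "group_a", "score": 0.68},
    {"group": "group_b", "score": 0.41},
    {"group": "group_b", "score": 0.45},
]

def aggregate_bias(records):
    """Aggregate per-group perception scores and report the spread of the
    group means as a simple (assumed) bias indicator."""
    by_group = defaultdict(list)
    for record in records:
        by_group[record["group"]].append(record["score"])
    group_means = {group: mean(scores) for group, scores in by_group.items()}
    # A larger spread of group means suggests more uneven social perception.
    bias_score = pstdev(group_means.values())
    return group_means, bias_score

means, bias = aggregate_bias(perception_scores)
print(means, round(bias, 3))
```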
1 code implementation • 5 Jun 2023 • Hoyun Song, Jisu Shin, Huije Lee, Jong C. Park
Our detailed analysis shows that the proposed model is effective at leveraging domain knowledge, is transferable to other mental disorders, and provides interpretable detection results.
1 code implementation • LREC 2022 • Huije Lee, Young Ju NA, Hoyun Song, Jisu Shin, Jong C. Park
In particular, we constructed a pair-wise dataset of troll comments and counter responses with labeled response strategies, which enables models fine-tuned on our dataset to vary their counter responses according to a specified strategy.
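One common way to realize such strategy-controlled generation is to prepend the desired strategy as a control prefix to the troll comment and fine-tune a sequence-to-sequence model on the resulting pairs. The sketch below assumes that setup for illustration; the strategy labels, base model, and prefix format are placeholders, not necessarily the paper's configuration:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical strategy labels; the actual label set comes from the dataset.
STRATEGIES = ["empathy", "humor", "fact-checking"]

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

def build_input(troll_comment: str, strategy: str) -> str:
    # Prepend the chosen strategy as a control prefix so a fine-tuned model
    # can condition its counter response on it.
    return f"[{strategy}] {troll_comment}"

# Illustrative inference call; in practice the model would first be
# fine-tuned on (troll comment, strategy, counter response) examples.
inputs = tokenizer(
    build_input("You people are all the same.", "empathy"),
    return_tensors="pt",
)
output_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```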