no code implementations • EMNLP (ArgMining) 2021 • Muhammad Mahad Afzal Bhatti, Ahsan Suheer Ahmad, Joonsuk Park
In this paper, we propose a novel problem formulation to mine arguments from Twitter: We formulate argument mining on Twitter as a text classification task to identify tweets that serve as premises for a hashtag that represents a claim of interest.
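The formulation above treats premise detection as binary text classification over tweets. As a minimal sketch of that framing, the following uses a toy TF-IDF + logistic-regression pipeline; the hashtag, tweets, and model choice are all illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: classifying tweets as premise / non-premise for a
# claim-bearing hashtag, framed as binary text classification.
# All data and the model choice are toy illustrations, not from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy tweets about an invented claim hashtag, #RemoteWorkIsBetter
tweets = [
    "Commuting two hours a day drained me; remote work gave that time back",
    "Our team shipped more features after going remote, the numbers are clear",
    "Just had a great coffee this morning",
    "Happy Friday everyone!",
]
labels = [1, 1, 0, 0]  # 1 = serves as a premise for the claim, 0 = does not

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(tweets, labels)

pred = clf.predict(["Remote work cut our office costs in half"])[0]
```

In practice the paper's classifier would be trained on annotated tweet corpora rather than a handful of toy examples; the point here is only the task framing.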
no code implementations • EMNLP (FEVER) 2021 • Aalok Sathe, Joonsuk Park
Automatic fact-checking is crucial for recognizing misinformation spreading on the internet.
no code implementations • 24 Mar 2024 • Henning Wachsmuth, Gabriella Lapesa, Elena Cabrio, Anne Lauscher, Joonsuk Park, Eva Maria Vecchi, Serena Villata, Timon Ziegenbein
The computational treatment of arguments on controversial issues has been subject to extensive NLP research, due to its envisioned impact on opinion formation, decision making, writing education, and the like.
no code implementations • 21 Mar 2024 • Mina Lee, Katy Ilonka Gero, John Joon Young Chung, Simon Buckingham Shum, Vipul Raheja, Hua Shen, Subhashini Venugopalan, Thiemo Wambsganss, David Zhou, Emad A. Alghamdi, Tal August, Avinash Bhat, Madiha Zahrah Choksi, Senjuti Dutta, Jin L. C. Guo, Md Naimul Hoque, Yewon Kim, Simon Knight, Seyed Parsa Neshaei, Agnia Sergeyuk, Antonette Shibani, Disha Shrivastava, Lila Shroff, Jessi Stark, Sarah Sterman, Sitong Wang, Antoine Bosselut, Daniel Buschek, Joseph Chee Chang, Sherol Chen, Max Kreminski, Joonsuk Park, Roy Pea, Eugenia H. Rho, Shannon Zejiang Shen, Pao Siangliulue
In our era of rapid technological advancement, the research landscape for writing assistants has become increasingly fragmented across various research communities.
no code implementations • 16 Nov 2023 • Minbeom Kim, Jahyun Koo, Hwanhee Lee, Joonsuk Park, Hwaran Lee, Kyomin Jung
As large language models become increasingly integrated into daily life, detecting implicit toxicity across diverse contexts is crucial.
no code implementations • 14 Nov 2023 • Nohil Park, Joonsuk Park, Kang Min Yoo, Sungroh Yoon
An exciting advancement in the field of multilingual models is the emergence of autoregressive models with zero- and few-shot capabilities, a phenomenon widely reported in large-scale language models.
1 code implementation • 27 Oct 2023 • Dongjun Kang, Joonsuk Park, Yohan Jo, JinYeong Bak
Being able to predict people's opinions on issues and behaviors in realistic scenarios can be helpful in various domains, such as politics and marketing.
1 code implementation • 23 Oct 2023 • Gangwoo Kim, Sungdong Kim, Byeongguk Jeon, Joonsuk Park, Jaewoo Kang
To cope with this challenge, we propose a novel framework, Tree of Clarifications (ToC): It recursively constructs a tree of disambiguations for the ambiguous question (AQ) -- via few-shot prompting leveraging external knowledge -- and uses it to generate a long-form answer.
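The recursive tree construction can be sketched as follows. A stub function stands in for the few-shot, retrieval-augmented LLM call that would propose disambiguations; the example question and all names are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the recursive tree-building idea in ToC:
# each node holds a (dis)ambiguated question; a stub replaces the
# few-shot LLM call that would propose disambiguations.
from dataclasses import dataclass, field

@dataclass
class Node:
    question: str
    children: list = field(default_factory=list)

def propose_disambiguations(question):
    """Stub for the few-shot, knowledge-augmented disambiguation step."""
    table = {
        "Who won the open?": ["Who won the US Open (tennis)?",
                              "Who won the British Open (golf)?"],
    }
    return table.get(question, [])

def build_tree(question, depth=0, max_depth=2):
    node = Node(question)
    if depth < max_depth:
        for q in propose_disambiguations(question):
            node.children.append(build_tree(q, depth + 1, max_depth))
    return node

root = build_tree("Who won the open?")
```

A final step (not shown) would aggregate the answers at the leaves into one long-form answer covering the interpretations.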
1 code implementation • 28 May 2023 • Hwaran Lee, Seokhee Hong, Joonsuk Park, Takyoung Kim, Gunhee Kim, Jung-Woo Ha
Large language models (LLMs) learn not only natural text generation abilities but also social biases against different demographic groups from real-world data.
1 code implementation • 28 May 2023 • Hwaran Lee, Seokhee Hong, Joonsuk Park, Takyoung Kim, Meeyoung Cha, Yejin Choi, Byoung Pil Kim, Gunhee Kim, Eun-Ju Lee, Yong Lim, Alice Oh, Sangchul Park, Jung-Woo Ha
The potential social harms that large language models pose, such as generating offensive content and reinforcing biases, are steeply rising.
1 code implementation • 23 May 2023 • Dongryeol Lee, Segwang Kim, Minwoo Lee, Hwanhee Lee, Joonsuk Park, Sang-Woo Lee, Kyomin Jung
We first present CAMBIGNQ, a dataset consisting of 5,654 ambiguous questions, each with relevant passages, possible answers, and a clarification question.
no code implementations • 21 Dec 2022 • Minbeom Kim, Hwanhee Lee, Kang Min Yoo, Joonsuk Park, Hwaran Lee, Kyomin Jung
In this work, we propose a novel critic decoding method for controlled language generation (CriticControl) that combines the strengths of reinforcement learning and weighted decoding.
1 code implementation • 24 May 2022 • Miyoung Ko, Ingyu Seong, Hwaran Lee, Joonsuk Park, Minsuk Chang, Minjoon Seo
With the growing importance of detecting misinformation, many studies have focused on verifying factual claims by retrieving evidence.
1 code implementation • Findings (NAACL) 2022 • Hwanhee Lee, Kang Min Yoo, Joonsuk Park, Hwaran Lee, Kyomin Jung
To this end, the latest approach is to train a factual consistency classifier on factually consistent and inconsistent summaries.
no code implementations • Findings (ACL) 2022 • Kyungjae Lee, Wookje Han, Seung-won Hwang, Hwaran Lee, Joonsuk Park, Sang-Woo Lee
To this end, we first propose a novel task, Continuously-updated QA (CuQA), in which multiple large-scale updates are made to LMs, and the performance is measured with respect to the success in adding and updating knowledge while retaining existing knowledge.
no code implementations • LREC 2020 • Aalok Sathe, Salar Ather, Tuan Manh Le, Nathan Perry, Joonsuk Park
However, such datasets suffer from limited applicability due to the synthetic nature of their claims and/or evidence: annotator-written claims and evidence differ from real claims and evidence found on the internet.
1 code implementation • ACL 2017 • Vlad Niculae, Joonsuk Park, Claire Cardie
We propose a novel factor graph model for argument mining, designed for settings in which the argumentative relations in a document do not necessarily form a tree structure.
no code implementations • LREC 2016 • Barbara Konat, John Lawrence, Joonsuk Park, Katarzyna Budzynska, Chris Reed
Governments are increasingly utilising online platforms in order to engage with, and ascertain the opinions of, their citizens.