Search Results for author: Jason Lucas

Found 2 papers, 1 paper with code

Fighting Fire with Fire: The Dual Role of LLMs in Crafting and Detecting Elusive Disinformation

1 code implementation · 24 Oct 2023 · Jason Lucas, Adaku Uchendu, Michiharu Yamashita, Jooyoung Lee, Shaurya Rohatgi, Dongwon Lee

The recent ubiquity and disruptive impacts of large language models (LLMs) have raised concerns about their potential to be misused (i.e., generating large-scale harmful and misleading content).