no code implementations • 24 Oct 2024 • Sadat Shahriar, Zheng Qi, Nikolaos Pappas, Srikanth Doss, Monica Sunkara, Kishaloy Halder, Manuel Mager, Yassine Benajiba
Aligning Large Language Models (LLMs) to address subjectivity and nuanced preference levels requires adequate flexibility and control, which can be a resource-intensive and time-consuming procedure.
1 code implementation • 20 Jun 2024 • Navid Ayoobi, Sadat Shahriar, Arjun Mukherjee
By providing cues in human-written and LLM-generated news, we can help individuals increase their skepticism towards fake LLM-generated news.
1 code implementation • 19 Feb 2024 • Shubhashis Roy Dipta, Sadat Shahriar
This paper describes our system developed for SemEval-2024 Task 8, "Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection." Machine-generated text has become a major concern due to the use of large language models (LLMs) in fake text generation, phishing, cheating on exams, and even plagiarizing copyrighted materials.
1 code implementation • 21 Jul 2023 • Navid Ayoobi, Sadat Shahriar, Arjun Mukherjee
We show that the suggested method can distinguish between legitimate and fake profiles with an accuracy of about 95% across all word embeddings.
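This entry reports only the result, not the pipeline. Purely as a hedged illustration of the general idea of classifying profiles from word-embedding features (not the authors' actual method), the sketch below averages word vectors for each profile summary and trains a scikit-learn LogisticRegression. The embedding table, the toy profiles, and the classifier choice are all assumptions made for this sketch; real experiments would use pretrained embeddings such as GloVe or fastText and real profile data.

```python
# Hypothetical sketch: legitimate vs. fake profile classification from
# averaged word-embedding features. The embedding table is a random
# stand-in for pretrained vectors; profiles and labels are toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
DIM = 50  # embedding dimensionality (assumption for this sketch)

def embed(text, table):
    """Average the word vectors of all in-vocabulary tokens."""
    vecs = [table[w] for w in text.lower().split() if w in table]
    return np.mean(vecs, axis=0) if vecs else np.zeros(DIM)

# Toy vocabulary / embedding table standing in for pretrained embeddings.
vocab = ["senior", "engineer", "at", "acme", "crypto", "guru", "dm", "me",
         "university", "research", "investor", "fast", "money", "phd"]
table = {w: rng.normal(size=DIM) for w in vocab}

# Toy profile summaries with labels: 0 = legitimate, 1 = fake.
profiles = [
    ("senior engineer at acme", 0),
    ("phd research at university", 0),
    ("crypto guru dm me", 1),
    ("fast money investor dm me", 1),
] * 25  # repeat so the split has enough samples

X = np.stack([embed(text, table) for text, _ in profiles])
y = np.array([label for _, label in profiles])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("toy accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```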
1 code implementation • 1 May 2023 • Sadat Shahriar, Thamar Solorio
Subjectivity and differences of opinion are key social phenomena, and it is crucial to take them into account in the annotation and detection of derogatory textual content.
no code implementations • 1 May 2023 • Sadat Shahriar, Arjun Mukherjee, Omprakash Gnawali
In this era of information explosion, deceivers exploit users through different domains or media of information, such as news, emails, and tweets.