Search Results for author: Vibhor Agarwal

Found 10 papers, 3 papers with code

MedHalu: Hallucinations in Responses to Healthcare Queries by Large Language Models

no code implementations29 Sep 2024 Vibhor Agarwal, Yiqiao Jin, Mohit Chandra, Munmun De Choudhury, Srijan Kumar, Nishanth Sastry

In this work, we conduct a pioneering study of hallucinations in LLM-generated responses to real-world healthcare queries from patients.

Hallucination

Decentralised Moderation for Interoperable Social Networks: A Conversation-based Approach for Pleroma and the Fediverse

1 code implementation3 Apr 2024 Vibhor Agarwal, Aravindh Raman, Nishanth Sastry, Ahmed M. Abdelmoniem, Gareth Tyson, Ignacio Castro

Recent work has exploited the conversational context of a post to improve this automatic tagging, e.g., using the replies to a post to help classify if it contains toxic speech.

TAG
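The reply-context idea described above can be illustrated with a minimal sketch. This is hypothetical, not the paper's implementation: a trivial keyword matcher stands in for a real toxicity classifier, and the point is only that pooling a post with its replies can surface toxicity the post alone hides.

```python
# Hypothetical stand-in for a trained toxicity classifier.
TOXIC_WORDS = {"idiot", "trash", "stupid"}

def is_toxic(text: str) -> bool:
    """Stand-in classifier: flags a text containing a toxic keyword."""
    return any(word in text.lower().split() for word in TOXIC_WORDS)

def tag_with_context(post: str, replies: list[str]) -> bool:
    """Tag a post using its own text plus the replies it attracted."""
    combined = " ".join([post] + replies)
    return is_toxic(combined)

# An innocuous-looking post whose replies reveal a toxic exchange.
print(tag_with_context("you know what you are", ["what an idiot"]))  # True
```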

"Which LLM should I use?": Evaluating LLMs for tasks performed by Undergraduate Computer Science Students

no code implementations22 Jan 2024 Vibhor Agarwal, Madhav Krishan Garg, Sahiti Dharmavaram, Dhruv Kumar

This study evaluates the effectiveness of various large language models (LLMs) in performing tasks common among undergraduate computer science students.

Code Generation

GASCOM: Graph-based Attentive Semantic Context Modeling for Online Conversation Understanding

no code implementations21 Oct 2023 Vibhor Agarwal, Yu Chen, Nishanth Sastry

Specifically, we design two novel algorithms that utilise both the graph structure of the online conversation as well as the semantic information from individual posts for retrieving relevant context nodes from the whole conversation.

Graph Attention, Hate Speech Detection
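The abstract describes combining the conversation's graph structure with per-post semantics to retrieve relevant context nodes. A minimal sketch of that idea, under the assumption (mine, not the paper's) that each node is scored by a weighted mix of cosine similarity to the target post and graph proximity:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve_context(target_vec, candidates, hops, k=2, alpha=0.5):
    """Rank candidate nodes by semantic similarity mixed with graph proximity.

    candidates: {node_id: embedding}; hops: {node_id: distance to target post}.
    """
    scored = sorted(
        candidates,
        key=lambda n: alpha * cosine(target_vec, candidates[n])
        + (1 - alpha) / (1 + hops[n]),
        reverse=True,
    )
    return scored[:k]

# A nearby, semantically similar node ("c") outranks a distant similar one ("a").
cands = {"a": [1, 0], "b": [0, 1], "c": [1, 0]}
hops = {"a": 3, "b": 1, "c": 1}
print(retrieve_context([1, 0], cands, hops, k=2))  # ['c', 'a']
```

GASCOM itself uses learned attention over these signals; the fixed `alpha` blend here is only a readable approximation of the intuition.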

HateRephrase: Zero- and Few-Shot Reduction of Hate Intensity in Online Posts using Large Language Models

no code implementations21 Oct 2023 Vibhor Agarwal, Yu Chen, Nishanth Sastry

We develop four different prompts based on task description, hate definition, few-shot demonstrations, and chain-of-thought, and conduct comprehensive experiments on open-source LLMs such as LLaMA-1, LLaMA-2 chat, and Vicuna, as well as OpenAI's GPT-3.5.
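The four prompt styles named above can be sketched as simple template builders. The wording of each template is illustrative only, not the paper's actual prompts:

```python
# Hypothetical templates for the four prompt variants described above.
TASK = "Rephrase the following post to reduce its hate intensity."
HATE_DEF = ("Hate speech is language that attacks a person or group "
            "on the basis of identity.")

def build_prompt(post, variant, demos=None):
    """Build one of the four prompt variants for a given post."""
    if variant == "task":
        return f"{TASK}\nPost: {post}\nRephrased:"
    if variant == "definition":
        return f"{HATE_DEF}\n{TASK}\nPost: {post}\nRephrased:"
    if variant == "few_shot":
        shots = "\n".join(f"Post: {p}\nRephrased: {r}" for p, r in (demos or []))
        return f"{TASK}\n{shots}\nPost: {post}\nRephrased:"
    if variant == "cot":
        return (f"{TASK}\nFirst explain which parts are hateful, "
                f"then rewrite them neutrally.\nPost: {post}\nRephrased:")
    raise ValueError(f"unknown variant: {variant}")
```

The same `post` can then be sent to each LLM under all four variants, so the comparison isolates the effect of the prompt design.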

AI in the Gray: Exploring Moderation Policies in Dialogic Large Language Models vs. Human Answers in Controversial Topics

1 code implementation28 Aug 2023 Vahid Ghafouri, Vibhor Agarwal, Yong Zhang, Nishanth Sastry, Jose Such, Guillermo Suarez-Tangil

The introduction of ChatGPT and the subsequent improvement of Large Language Models (LLMs) have prompted more and more individuals to turn to chatbots, both for information and for assistance with decision-making.

Decision Making

AnnoBERT: Effectively Representing Multiple Annotators' Label Choices to Improve Hate Speech Detection

no code implementations20 Dec 2022 Wenjie Yin, Vibhor Agarwal, Aiqi Jiang, Arkaitz Zubiaga, Nishanth Sastry

During training, the model associates annotators with their label choices given a piece of text; during evaluation, when label information is not available, the model predicts the aggregated label given by the participating annotators by utilising the learnt association.

Hate Speech Detection
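The train/evaluate asymmetry described above (annotator labels available during training, only annotator identities at evaluation) can be shown with a toy aggregator. This is an illustration of the idea, not AnnoBERT's actual architecture, which learns the association inside a transformer:

```python
from collections import Counter

class AnnotatorAggregator:
    """Toy model: learn each annotator's labeling tendency during training,
    then predict the aggregated label of the participating annotators."""

    def __init__(self):
        self.tendency = {}  # annotator_id -> Counter of labels they assigned

    def fit(self, examples):
        """examples: iterable of (text, annotator_id, label) triples."""
        for _text, ann, label in examples:
            self.tendency.setdefault(ann, Counter())[label] += 1

    def predict(self, _text, annotators):
        """Aggregate each participating annotator's most frequent label.
        (The text is ignored here; a real model conditions on it too.)"""
        votes = Counter(self.tendency[a].most_common(1)[0][0] for a in annotators)
        return votes.most_common(1)[0][0]
```

At evaluation time no labels are passed in, mirroring the setting in the abstract where the model must recover the aggregated label from the learnt annotator associations alone.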

A Graph-Based Context-Aware Model to Understand Online Conversations

no code implementations16 Nov 2022 Vibhor Agarwal, Anthony P. Young, Sagar Joglekar, Nishanth Sastry

We evaluate GraphNLI on two such tasks - polarity prediction and misogynistic hate speech detection - and find that our model consistently outperforms all relevant baselines on both tasks.

Hate Speech Detection, Misinformation
