Search Results for author: Sashank Santhanam

Found 16 papers, 6 papers with code

A Survey of Natural Language Generation Techniques with a Focus on Dialogue Systems - Past, Present and Future Directions

no code implementations • 2 Jun 2019 • Sashank Santhanam, Samira Shaikh

We provide a comprehensive review of approaches to building open-domain dialogue systems, an important application of natural language generation.

Text Generation • World Knowledge

I Stand With You: Using Emojis to Study Solidarity in Crisis Events

1 code implementation • 19 Jul 2019 • Sashank Santhanam, Vidhushini Srinivasan, Shaina Glass, Samira Shaikh

We study how emojis are used to express solidarity on social media in the context of two major crisis events: a natural disaster, Hurricane Irma in 2017, and the terrorist attacks that occurred in Paris in November 2015.
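As a rough illustration of the preprocessing such a study requires, here is a minimal sketch of counting emoji occurrences in posts; the Unicode ranges and example posts are illustrative assumptions, not the authors' pipeline.

import re
from collections import Counter

# Illustrative Unicode ranges covering common emoji blocks (dingbats,
# emoticons, symbols and pictographs, transport); a real study would use
# a complete emoji inventory. Flag emojis (regional indicator pairs)
# would match as individual code points here.
EMOJI_RE = re.compile(
    "[\u2600-\u27BF"
    "\U0001F300-\U0001F5FF"
    "\U0001F600-\U0001F64F"
    "\U0001F680-\U0001F6FF]"
)

def emoji_counts(texts):
    """Count emoji occurrences across a collection of posts."""
    counts = Counter()
    for text in texts:
        counts.update(EMOJI_RE.findall(text))
    return counts

# Hypothetical posts standing in for crisis-event tweets.
posts = ["Stay safe Florida \U0001F64F", "We stand with Paris \u2764"]
print(emoji_counts(posts).most_common())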

Towards Best Experiment Design for Evaluating Dialogue System Output

1 code implementation • WS 2019 • Sashank Santhanam, Samira Shaikh

To overcome the limitations of automated metrics (e.g., BLEU, METEOR) for evaluating dialogue systems, researchers typically use human judgments to provide convergent evidence.

Dialogue Evaluation
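For context on the metrics named above, here is a minimal sketch of computing sentence-level BLEU with NLTK; the reference and candidate token lists are made up, and METEOR could be computed analogously via nltk.translate.meteor_score.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Hypothetical reference and system response for one dialogue turn.
reference = ["i", "am", "doing", "well", "thanks", "for", "asking"]
candidate = ["i", "am", "fine", "thanks"]

# Smoothing avoids zero scores when higher-order n-grams do not overlap,
# which is common for short dialogue responses.
score = sentence_bleu([reference], candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")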

Natural Language Generation Using Reinforcement Learning with External Rewards

1 code implementation • 26 Nov 2019 • Vidhushini Srinivasan, Sashank Santhanam, Samira Shaikh

We propose an approach to natural language generation using a bidirectional encoder-decoder that incorporates external rewards through reinforcement learning (RL).

Reinforcement Learning (RL) +1
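A minimal sketch of how an external reward can enter sequence training via a REINFORCE-style loss; the token probabilities and reward value are placeholders, not the paper's bidirectional encoder-decoder.

import torch

def reinforce_loss(log_probs, reward):
    """REINFORCE-style objective: scale the negative log-likelihood of a
    sampled response by an externally computed scalar reward."""
    return -reward * log_probs.sum()

# Hypothetical per-token probabilities from a decoder, and an external
# reward (e.g. an emotion- or coherence-based score in [0, 1]).
probs = torch.tensor([0.4, 0.6, 0.3], requires_grad=True)
loss = reinforce_loss(torch.log(probs), reward=0.8)
loss.backward()  # gradients would flow into the decoder in a real model
print(probs.grad)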

Studying the Effects of Cognitive Biases in Evaluation of Conversational Agents

no code implementations • 18 Feb 2020 • Sashank Santhanam, Alireza Karduni, Samira Shaikh

To investigate, we conducted a between-subjects study with 77 crowdsourced workers to understand the role of cognitive biases, specifically anchoring bias, when humans are asked to evaluate the output of conversational agents.

Decision Making • Language Modelling
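For a sense of the analysis behind such a between-subjects design, here is a minimal sketch comparing ratings from an anchored group and a control group with an independent-samples t-test; all ratings shown are hypothetical.

from scipy import stats

# Hypothetical 1-5 quality ratings from two crowdsourced groups: one shown
# an anchoring example before rating, one rating without it.
anchored = [4, 4, 5, 3, 4, 5, 4]
control = [3, 2, 4, 3, 3, 2, 3]

# Independent-samples t-test: do the two group means differ?
t, p = stats.ttest_ind(anchored, control)
print(f"t = {t:.2f}, p = {p:.3f}")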

Detecting Asks in SE attacks: Impact of Linguistic and Structural Knowledge

no code implementations • 25 Feb 2020 • Bonnie J. Dorr, Archna Bhatia, Adam Dalton, Brodie Mather, Bryanna Hebenstreit, Sashank Santhanam, Zhuo Cheng, Samira Shaikh, Alan Zemel, Tomek Strzalkowski

Social engineers attempt to manipulate users into undertaking actions such as downloading malware by clicking links or providing access to money or sensitive information.

Adaptation of a Lexical Organization for Social Engineering Detection and Response Generation

no code implementations • LREC 2020 • Archna Bhatia, Adam Dalton, Brodie Mather, Sashank Santhanam, Samira Shaikh, Alan Zemel, Tomek Strzalkowski, Bonnie J. Dorr

We present a paradigm for extensible lexicon development based on Lexical Conceptual Structure to support social engineering detection and response generation.

Response Generation
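A minimal sketch of what an extensible, machine-readable lexicon entry for ask detection might look like; the field names and values are assumptions, and the paper's actual Lexical Conceptual Structure representation is considerably richer.

from dataclasses import dataclass, field

@dataclass
class LexiconEntry:
    """Toy stand-in for an LCS-style lexicon entry."""
    lemma: str
    semantic_class: str           # e.g. "TRANSFER"
    ask_type: str                 # e.g. "GIVE-MONEY", "PERFORM-ACTION"
    frames: list = field(default_factory=list)

entries = [
    LexiconEntry("wire", "TRANSFER", "GIVE-MONEY", ["NP V NP to NP"]),
    LexiconEntry("click", "ACT", "PERFORM-ACTION", ["NP V on NP"]),
]
print([e.lemma for e in entries if e.ask_type == "GIVE-MONEY"])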

Understanding the Impact of Experiment Design for Evaluating Dialogue System Output

no code implementations • WS 2020 • Sashank Santhanam, Samira Shaikh

Evaluation of output from natural language generation (NLG) systems is typically conducted via crowdsourced human judgments.

Text Generation

Local Knowledge Powered Conversational Agents

1 code implementation • 20 Oct 2020 • Sashank Santhanam, Wei Ping, Raul Puri, Mohammad Shoeybi, Mostofa Patwary, Bryan Catanzaro

State-of-the-art conversational agents have advanced significantly in conjunction with the use of large transformer-based language models.

Informativeness

Rome was built in 1776: A Case Study on Factual Correctness in Knowledge-Grounded Response Generation

1 code implementation • 11 Oct 2021 • Sashank Santhanam, Behnam Hedayatnia, Spandana Gella, Aishwarya Padmakumar, Seokhwan Kim, Yang Liu, Dilek Hakkani-Tur

We demonstrate the benefit of our Conv-FEVER dataset by showing that models trained on it perform reasonably well at detecting responses that are factually inconsistent with the provided knowledge, as evaluated on our human-annotated data.

Response Generation
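A minimal sketch of the kind of detector a dataset like Conv-FEVER supports: a sequence-pair classifier over (knowledge, response). The model name here is a generic placeholder, not the paper's trained checkpoint.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Generic pretrained encoder as a placeholder; the classification head is
# untrained until fine-tuned on (knowledge, response, label) examples.
name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

knowledge = "Rome was founded in 753 BC."
response = "Rome was built in 1776."

# Encode the pair; after fine-tuning, the two logits would correspond to
# consistent vs. inconsistent.
inputs = tokenizer(knowledge, response, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))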

Twenty Years of Confusion in Human Evaluation: NLG Needs Evaluation Sheets and Standardised Definitions

no code implementations • INLG (ACL) 2020 • David M. Howcroft, Anya Belz, Miruna-Adriana Clinciu, Dimitra Gkatzia, Sadid A. Hasan, Saad Mahamood, Simon Mille, Emiel van Miltenburg, Sashank Santhanam, Verena Rieser

Human assessment remains the most trusted form of evaluation in NLG, but highly diverse approaches and a proliferation of different quality criteria used by researchers make it difficult to compare results and draw conclusions across papers, with adverse implications for meta-evaluation and reproducibility.

Experimental Design
