1 code implementation • 7 Dec 2023 • Hakan Inan, Kartikeya Upasani, Jianfeng Chi, Rashi Rungta, Krithika Iyer, Yuning Mao, Michael Tontchev, Qing Hu, Brian Fuller, Davide Testuggine, Madian Khabsa
We introduce Llama Guard, an LLM-based input-output safeguard model geared towards Human-AI conversation use cases.
no code implementations • LREC 2020 • Scott Martin, Shivani Poddar, Kartikeya Upasani
This paper proposes a new dataset, MuDoCo, composed of authored dialogs between a fictional user and a system, both of which are given tasks to perform within six task domains.
no code implementations • 3 Nov 2019 • Debanjan Ghosh, Elena Musi, Kartikeya Upasani, Smaranda Muresan
Human communication often involves the use of verbal irony or sarcasm, where the speakers usually mean the opposite of what they say.
no code implementations • WS 2019 • Kartikeya Upasani, David King, Jinfeng Rao, Anusha Balakrishnan, Michael White
We describe our exploratory system for the shallow surface realization task, which combines morphological inflection using character sequence-to-sequence models with a baseline linearizer that implements a tree-to-tree model using sequence-to-sequence models on serialized trees.
no code implementations • WS 2019 • Jinfeng Rao, Kartikeya Upasani, Anusha Balakrishnan, Michael White, Anuj Kumar, Rajen Subba
Generating fluent natural language responses from structured semantic representations is a critical step in task-oriented conversational systems.
1 code implementation • ACL 2019 • Anusha Balakrishnan, Jinfeng Rao, Kartikeya Upasani, Michael White, Rajen Subba
Generating fluent natural language responses from structured semantic representations is a critical step in task-oriented conversational systems.
1 code implementation • NAACL 2019 • Ashwini Challa, Kartikeya Upasani, Anusha Balakrishnan, Rajen Subba
While acceptability includes both grammatical correctness and semantic correctness, we focus only on grammaticality classification in this paper, and show that existing datasets for grammatical error correction do not accurately capture the distribution of errors that data-driven generators are likely to make.