no code implementations • EMNLP 2021 • Arjun Akula, Spandana Gella, Keze Wang, Song-Chun Zhu, Siva Reddy
Our model outperforms the state-of-the-art NMN model on the CLEVR-Ref+ dataset, with a +8.1% improvement in accuracy on the single-referent test set and +4.3% on the full test set.
1 code implementation • 22 Apr 2022 • Nouha Dziri, Ehsan Kamalloo, Sivan Milton, Osmar Zaiane, Mo Yu, Edoardo M. Ponti, Siva Reddy
To mitigate this behavior, we adopt a data-centric solution and create FaithDial, a new benchmark for hallucination-free dialogues, by editing hallucinated responses in the Wizard of Wikipedia (WoW) benchmark.
1 code implementation • 17 Apr 2022 • Nouha Dziri, Sivan Milton, Mo Yu, Osmar Zaiane, Siva Reddy
Knowledge-grounded conversational models are known to suffer from producing factually invalid statements, a phenomenon commonly called hallucination.
no code implementations • Findings (ACL) 2022 • Zichao Li, Prakhar Sharma, Xing Han Lu, Jackie C. K. Cheung, Siva Reddy
We train a neural model with this feedback data that can generate explanations and re-score answer candidates.
1 code implementation • ACL 2022 • Benno Krojer, Vaibhav Adlakha, Vibhav Vineet, Yash Goyal, Edoardo Ponti, Siva Reddy
In particular, models are tasked with retrieving the correct image from a set of 10 minimally contrastive candidates based on a contextual description.
1 code implementation • 28 Feb 2022 • Edoardo M. Ponti, Alessandro Sordoni, Yoshua Bengio, Siva Reddy
By jointly learning these and a task-skill allocation matrix, the network for each task is instantiated as the average of the parameters of active skills.
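A minimal sketch of the parameter-sharing idea described above, with made-up sizes and a hand-written binary task-skill allocation matrix (all names and values are illustrative, not the paper's actual implementation): each skill owns a parameter vector, and a task's network is instantiated as the average of the parameters of the skills allocated to it.

```python
import numpy as np

# Hypothetical sketch: S skills each own a parameter vector; a binary
# task-skill allocation matrix selects which skills a task uses, and the
# task's parameters are the average of its active skills' parameters.
rng = np.random.default_rng(0)
num_tasks, num_skills, dim = 3, 4, 5
skill_params = rng.normal(size=(num_skills, dim))  # one row per skill
allocation = np.array([[1, 0, 1, 0],               # task 0 uses skills 0 and 2
                       [0, 1, 1, 0],               # task 1 uses skills 1 and 2
                       [1, 1, 1, 1]])              # task 2 uses every skill

def task_parameters(task):
    active = allocation[task].astype(bool)
    return skill_params[active].mean(axis=0)       # average of active skills

theta_0 = task_parameters(0)
assert np.allclose(theta_0, (skill_params[0] + skill_params[2]) / 2)
```

In the paper both the skill parameters and the allocation matrix are learned jointly; here the matrix is fixed only to keep the sketch self-contained.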
2 code implementations • 27 Jan 2022 • Emanuele Bugliarello, Fangyu Liu, Jonas Pfeiffer, Siva Reddy, Desmond Elliott, Edoardo Maria Ponti, Ivan Vulić
Our benchmark enables the evaluation of multilingual multimodal models for transfer learning, not only in a zero-shot setting, but also in newly defined few-shot learning setups.
Cross-Lingual Visual Natural Language Inference
Cross-Modal Retrieval
no code implementations • 5 Jan 2022 • Nicolas Gontier, Siva Reddy, Christopher Pal
We study the utility of incorporating entity type abstractions into pre-trained Transformers and test these methods on four NLP tasks requiring different forms of logical reasoning: (1) compositional language understanding with text-based relational reasoning (CLUTRR), (2) abductive reasoning (ProofWriter), (3) multi-hop question answering (HotpotQA), and (4) conversational question answering (CoQA).
Conversational Question Answering
Multi-hop Question Answering
2 code implementations • ACL 2022 • Nicholas Meade, Elinor Poole-Dayan, Siva Reddy
Recent work has shown pre-trained language models capture social biases from the large amounts of text they are trained on.
no code implementations • ACL 2022 • Nathan Schucher, Siva Reddy, Harm de Vries
Prompt tuning has recently emerged as an effective method for adapting pre-trained language models to a number of language understanding and generation tasks.
1 code implementation • 15 Oct 2021 • Andreas Madsen, Nicholas Meade, Vaibhav Adlakha, Siva Reddy
In this work, we adapt and improve a recently proposed faithfulness benchmark from computer vision called ROAR (RemOve And Retrain), by Hooker et al. (2019).
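The ROAR recipe mentioned above can be sketched end to end on toy data (the importance estimator, the least-squares "model", and all values below are stand-ins, not the benchmark's actual components): mask the features an importance estimator ranks highest, retrain on the masked data, and measure how much accuracy drops.

```python
import numpy as np

# Rough sketch of the ROAR idea (Hooker et al., 2019) on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
w_true = np.array([3.0, 2.0, 0, 0, 0, 0, 0, 0])   # only 2 informative features
y = (X @ w_true > 0).astype(float)

def train_and_score(X, y):
    # least-squares "retraining" as a stand-in for a real model
    w, *_ = np.linalg.lstsq(X, 2 * y - 1, rcond=None)
    return ((X @ w > 0) == y.astype(bool)).mean()

importance = np.abs(np.corrcoef(X.T, y)[:-1, -1])  # toy importance estimate
top_k = np.argsort(importance)[::-1][:2]           # most "important" features

X_masked = X.copy()
X_masked[:, top_k] = X[:, top_k].mean(axis=0)      # replace with the mean value
drop = train_and_score(X, y) - train_and_score(X_masked, y)
# a faithful estimator flags the informative features, so accuracy drops
assert drop > 0
```

A faithfulness benchmark then compares this accuracy drop against the drop from removing randomly chosen features.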
no code implementations • ACL 2022 • Emily Goodwin, Siva Reddy, Timothy J. O'Donnell, Dzmitry Bahdanau
To test compositional generalization in semantic parsing, Keysers et al. (2020) introduced Compositional Freebase Queries (CFQ).
1 code implementation • 2 Oct 2021 • Vaibhav Adlakha, Shehzaad Dhuliawala, Kaheer Suleman, Harm de Vries, Siva Reddy
On average, a conversation in our dataset spans 13 question-answer turns and involves four topics (documents).
2 code implementations • EMNLP 2021 • Fangyu Liu, Emanuele Bugliarello, Edoardo Maria Ponti, Siva Reddy, Nigel Collier, Desmond Elliott
The design of widespread vision-and-language datasets and pre-trained encoders directly adopts, or draws inspiration from, the concepts and images of ImageNet.
Ranked #1 on Zero-Shot Cross-Lingual Transfer on MaRVL
Zero-Shot Cross-Lingual Transfer
Zero-Shot Cross-Lingual Visual Reasoning
no code implementations • 10 Aug 2021 • Andreas Madsen, Siva Reddy, Sarath Chandar
Neural networks for NLP are becoming increasingly complex and widespread, and there is growing concern about whether these models are responsible to use.
1 code implementation • 23 Jul 2021 • Edoardo Maria Ponti, Julia Kreutzer, Ivan Vulić, Siva Reddy
To remedy this, we propose a new technique that integrates both steps of the traditional pipeline (translation and classification) into a single model, by treating the intermediate translations as a latent random variable.
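The latent-translation idea can be sketched with a Monte Carlo estimate (the "translator" and "classifier" below are toy stand-ins, not the paper's models): the class probability marginalises over sampled translations of the input.

```python
import numpy as np

# Sketch of treating the translation as a latent variable:
# p(y|x) ≈ (1/K) Σ_k p(y|t_k), with translations t_k ~ p(t|x).
rng = np.random.default_rng(0)

def sample_translations(x, k):
    # stand-in translator: k noisy paraphrases of a 1-d "sentence score"
    return x + rng.normal(scale=0.1, size=k)

def classifier(t):
    # stand-in classifier: probability of the positive class
    return 1.0 / (1.0 + np.exp(-t))

x = 1.5
translations = sample_translations(x, k=8)
p_y = classifier(translations).mean()   # Monte Carlo marginalisation
```

Because the estimate is an average over samples, gradients can flow into both the translation and classification components of a single model.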
1 code implementation • AKBC 2021 • Meiqi Guo, Mingda Zhang, Siva Reddy, Malihe Alikhani
We introduce Abg-CoQA, a novel dataset for clarifying ambiguity in Conversational Question Answering systems.
1 code implementation • NeurIPS 2021 • Devendra Singh Sachan, Siva Reddy, William Hamilton, Chris Dyer, Dani Yogatama
We model retrieval decisions as latent variables over sets of relevant documents.
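A minimal sketch of retrieval as a latent variable, with made-up scores standing in for real retriever and reader outputs: the answer probability marginalises over documents, p(a|q) = Σ_d p(d|q) · p(a|q,d).

```python
import numpy as np

# Toy stand-ins for retriever scores and reader probabilities.
retriever_scores = np.array([2.0, 0.5, -1.0])   # one score per document
p_doc = np.exp(retriever_scores) / np.exp(retriever_scores).sum()  # p(d|q)
p_answer_given_doc = np.array([0.9, 0.4, 0.1])  # p(a|q,d) from a reader

p_answer = float(p_doc @ p_answer_given_doc)    # marginal p(a|q)
```

Since the marginal is differentiable in the retriever scores, gradients from the answer loss can train the retriever and reader end to end.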
1 code implementation • 2 Jun 2021 • Edoardo Maria Ponti, Rahul Aralikatte, Disha Shrivastava, Siva Reddy, Anders Søgaard
In fact, under a decision-theoretic framework, MAML can be interpreted as minimising the expected risk across training languages (with a uniform prior), which is known as the Bayes criterion.
1 code implementation • NAACL 2021 • Arian Hosseini, Siva Reddy, Dzmitry Bahdanau, R Devon Hjelm, Alessandro Sordoni, Aaron Courville
To improve language models in this regard, we propose to augment the language modeling objective with an unlikelihood objective that is based on negated generic sentences from a raw text corpus.
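The unlikelihood term can be sketched in a few lines, using toy token probabilities rather than a real model's outputs: the usual negative log-likelihood applies to true text, while negated statements are penalised with −log(1 − p) so that probability mass is pushed away from them.

```python
import math

# Hedged sketch of a likelihood + unlikelihood objective on toy values.
def nll(token_probs):
    # standard negative log-likelihood over a sequence of token probabilities
    return -sum(math.log(p) for p in token_probs)

def unlikelihood(token_probs):
    # push probability mass AWAY from these tokens: -log(1 - p)
    return -sum(math.log(1.0 - p) for p in token_probs)

positive = [0.8, 0.7, 0.9]   # model's probs on a true generic sentence
negated = [0.6, 0.5]         # model's probs on its negation

loss = nll(positive) + unlikelihood(negated)
```

The higher the model's probability on the negated sentence, the larger the unlikelihood penalty, which is the behaviour the augmented objective is meant to correct.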
1 code implementation • EMNLP 2021 • Devang Kulshreshtha, Robert Belfer, Iulian Vlad Serban, Siva Reddy
We run UDA experiments on question generation and passage retrieval from the Natural Questions domain to machine learning and biomedical domains.
1 code implementation • EMNLP (ClinicalNLP) 2020 • Zhi Wen, Xing Han Lu, Siva Reddy
One of the biggest challenges that prohibit the use of many current NLP methods in clinical settings is the availability of public datasets.
Ranked #1 on Mortality Prediction on MIMIC-III (Accuracy metric)
no code implementations • NAACL 2021 • Yikang Shen, Shawn Tan, Alessandro Sordoni, Siva Reddy, Aaron Courville
In the present work, we propose a new syntax-aware language model: Syntactic Ordered Memory (SOM).
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Makesh Narsimhan Sreedhar, Kun Ni, Siva Reddy
The ubiquitous nature of chatbots and their interaction with users generate an enormous amount of data.
2 code implementations • NeurIPS 2020 • Nicolas Gontier, Koustuv Sinha, Siva Reddy, Christopher Pal
We observe that models that are not trained to generate proofs are better at generalizing to problems based on longer proofs.
1 code implementation • ACL 2020 • Arjun R. Akula, Spandana Gella, Yaser Al-Onaizan, Song-Chun Zhu, Siva Reddy
To measure the true progress of existing models, we split the test set into two sets: one that requires reasoning on linguistic structure and one that does not.
2 code implementations • ACL 2021 • Moin Nadeem, Anna Bethke, Siva Reddy
Since pretrained language models are trained on large real world data, they are known to capture stereotypical biases.
Ranked #1 on Bias Detection on StereoSet
no code implementations • 25 Dec 2018 • Jianpeng Cheng, Siva Reddy, Mirella Lapata
We address these challenges with a framework that elicits training data from a domain ontology and bootstraps a neural parser which recursively builds derivations of logical forms.
3 code implementations • TACL 2019 • Siva Reddy, Danqi Chen, Christopher D. Manning
Humans gather information by engaging in conversations involving a series of interconnected questions and answers.
Ranked #3 on Generative Question Answering on CoQA
Conversational Question Answering
Generative Question Answering
1 code implementation • TACL 2018 • Mohammad Javad Hosseini, Nathanael Chambers, Siva Reddy, Xavier R. Holt, Shay B. Cohen, Mark Johnson, Mark Steedman
We instead propose a scalable method that learns globally consistent similarity scores based on new soft constraints that consider both the structures across typed entailment graphs and inside each graph.
no code implementations • CL 2019 • Jianpeng Cheng, Siva Reddy, Vijay Saraswat, Mirella Lapata
This paper describes a neural semantic parser that maps natural language utterances onto logical forms which can be executed against a task-specific environment, such as a knowledge base or a database, to produce a response.
no code implementations • EMNLP 2017 • Li Dong, Jonathan Mallinson, Siva Reddy, Mirella Lapata
Question answering (QA) systems are sensitive to the many different ways natural language expresses the same information need.
no code implementations • CONLL 2017 • Daniel Zeman, Martin Popel, Milan Straka, Jan Hajič, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, Francis Tyers, Elena Badmaeva, Memduh Gokirmak, Anna Nedoluzhko, Silvie Cinková, Jan Hajič jr., Jaroslava Hlaváčová, Václava Kettnerová, Zdeňka Urešová, Jenna Kanerva, Stina Ojala, Anna Missilä, Christopher D. Manning, Sebastian Schuster, Siva Reddy, Dima Taji, Nizar Habash, Herman Leung, Marie-Catherine de Marneffe, Manuela Sanguinetti, Maria Simi, Hiroshi Kanayama, Valeria de Paiva, Kira Droganova, Héctor Martínez Alonso, Çağrı Çöltekin, Umut Sulubacak, Hans Uszkoreit, Vivien Macketanz, Aljoscha Burchardt, Kim Harris, Katrin Marheinecke, Georg Rehm, Tolga Kayadelen, Mohammed Attia, Ali Elkahky, Zhuoran Yu, Emily Pitler, Saran Lertpradit, Michael Mandl, Jesse Kirchner, Hector Fernandez Alcalde, Jana Strnadová, Esha Banerjee, Ruli Manurung, Antonio Stella, Atsuko Shimada, Sookyoung Kwak, Gustavo Mendonça, Tatiana Lando, Rattima Nitisaroj, Josie Li
The Conference on Computational Natural Language Learning (CoNLL) features a shared task, in which participants train and test their learning systems on the same data sets.
1 code implementation • ACL 2017 • Jianpeng Cheng, Siva Reddy, Vijay Saraswat, Mirella Lapata
We introduce a neural semantic parser that converts natural language utterances to intermediate representations in the form of predicate-argument structures, which are induced with a transition system and subsequently mapped to target domains.
no code implementations • ACL 2017 • Rajarshi Das, Manzil Zaheer, Siva Reddy, Andrew McCallum
Existing question answering methods infer answers either from a knowledge base or from raw text.
no code implementations • WS 2017 • Federico Fancellu, Siva Reddy, Adam Lopez, Bonnie Webber
Many language technology applications would benefit from the ability to represent negation and its scope on top of widely-used linguistic resources.
1 code implementation • EMNLP 2017 • Siva Reddy, Oscar Täckström, Slav Petrov, Mark Steedman, Mirella Lapata
In this work, we introduce UDepLambda, a semantic interface for UD, which maps natural language to logical forms in an almost language-independent fashion and can process dependency graphs.
1 code implementation • 10 Feb 2017 • Federico Fancellu, Siva Reddy, Adam Lopez, Bonnie Webber
Many language technology applications would benefit from the ability to represent negation and its scope on top of widely-used linguistic resources.
no code implementations • WS 2017 • Maria Nadejde, Siva Reddy, Rico Sennrich, Tomasz Dwojak, Marcin Junczys-Dowmunt, Philipp Koehn, Alexandra Birch
Our results on WMT data show that explicitly modeling target syntax improves machine translation quality for German->English, a high-resource pair, and for Romanian->English, a low-resource pair, as well as several syntactic phenomena including prepositional phrase attachment.
1 code implementation • EMNLP 2016 • Yonatan Bisk, Siva Reddy, John Blitzer, Julia Hockenmaier, Mark Steedman
We compare the effectiveness of four different syntactic CCG parsers for a semantic slot-filling task to explore how much syntactic supervision is required for downstream semantic analysis.
no code implementations • 18 Aug 2016 • Srikanth Ronanki, Siva Reddy, Bajibabu Bollepalli, Simon King
These methods first convert the ASCII text to a phonetic script, and then train a deep neural network to synthesize speech from it.
1 code implementation • ACL 2016 • Kun Xu, Siva Reddy, Yansong Feng, Songfang Huang, Dongyan Zhao
Existing knowledge-based question answering systems often rely on small annotated training data.
no code implementations • WS 2016 • Shashi Narayan, Siva Reddy, Shay B. Cohen
One of the limitations of semantic parsing approaches to open-domain question answering is the lexicosyntactic gap between natural language questions and knowledge base entries -- there are many ways to ask a question, all with the same answer.
1 code implementation • TACL 2016 • Siva Reddy, Oscar Täckström, Michael Collins, Tom Kwiatkowski, Dipanjan Das, Mark Steedman, Mirella Lapata
In contrast, partly due to the lack of a strong type system, dependency structures are easy to annotate and have become a widely used form of syntactic analysis for many languages.
no code implementations • TACL 2014 • Siva Reddy, Mirella Lapata, Mark Steedman
In this paper we introduce a novel semantic parsing approach to query Freebase in natural language without requiring manual annotations or question-answer pairs.
no code implementations • LREC 2012 • Bharat Ram Ambati, Siva Reddy, Adam Kilgarriff
Word sketches are one-page, automatic, corpus-based summaries of a word's grammatical and collocational behaviour.