Search Results for author: Siva Reddy

Found 69 papers, 44 papers with code

Mind the Context: The Impact of Contextualization in Neural Module Networks for Grounding Visual Referring Expressions

no code implementations EMNLP 2021 Arjun Akula, Spandana Gella, Keze Wang, Song-Chun Zhu, Siva Reddy

Our model outperforms the state-of-the-art NMN model on the CLEVR-Ref+ dataset, with a +8.1% improvement in accuracy on the single-referent test set and +4.3% on the full test set.

A Compositional Typed Semantics for Universal Dependencies

1 code implementation 2 Mar 2024 Laurestine Bradford, Timothy John O'Donnell, Siva Reddy

We introduce UD Type Calculus, a compositional, principled, and language-independent system of semantic types and logical forms for lexical items which builds on a widely-used language-general dependency syntax framework.

Sentence

When does word order matter and when doesn't it?

1 code implementation 29 Feb 2024 Xuanda Chen, Timothy O'Donnell, Siva Reddy

Our results show that the less informative word order is, the more consistent the model's predictions are between unscrambled and scrambled sentences.

Natural Language Understanding RTE +1
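As an illustrative sketch (not the paper's code), consistency under scrambling can be measured by comparing a model's predictions on a sentence and a word-permuted version of it; here a hypothetical toy bag-of-words classifier stands in for the language model:

```python
import random

def predict(model, sentence):
    """Score a sentence with a (hypothetical) model; here a toy
    bag-of-words classifier, which ignores word order entirely."""
    return model(sentence.split())

def scramble(sentence, seed=0):
    """Return the sentence with its words randomly permuted."""
    words = sentence.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

def consistency(model, sentences):
    """Fraction of sentences whose prediction survives scrambling."""
    agree = sum(
        predict(model, s) == predict(model, scramble(s)) for s in sentences
    )
    return agree / len(sentences)

# Toy model: label depends only on whether "not" occurs (order-insensitive),
# so its predictions are perfectly consistent under scrambling.
bow_model = lambda words: "negative" if "not" in words else "positive"

sents = ["this is not good at all", "the movie was great fun"]
print(consistency(bow_model, sents))  # 1.0
```

An order-sensitive model would score below 1.0 here, which is the gap the paper's analysis relates to how informative word order is.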

WebLINX: Real-World Website Navigation with Multi-Turn Dialogue

1 code implementation 8 Feb 2024 Xing Han Lù, Zdeněk Kasner, Siva Reddy

We propose the problem of conversational web navigation, where a digital agent controls a web browser and follows user instructions to solve real-world tasks in a multi-turn dialogue fashion.

Conversational Web Navigation Text Generation +1

Are self-explanations from Large Language Models faithful?

1 code implementation 15 Jan 2024 Andreas Madsen, Sarath Chandar, Siva Reddy

For example, if an LLM says a set of words is important for making a prediction, then it should not be able to make its prediction without these words.

counterfactual Faithfulness Critic +4
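The counterfactual test described above can be sketched in a few lines (illustrative only; `predict` and `explain` are hypothetical stand-ins for the LLM and its self-explanation):

```python
def faithfulness_check(predict, explain, text):
    """Counterfactual test: if the words the model claims are important
    really drive the prediction, removing them should change it."""
    original = predict(text)
    important = explain(text)                       # words claimed important
    kept = [w for w in text.split() if w not in important]
    ablated = predict(" ".join(kept))
    return original != ablated                      # True -> explanation faithful

# Toy model: sentiment flips on the word "terrible".
predict = lambda t: "neg" if "terrible" in t.split() else "pos"
explain = lambda t: {"terrible"}                    # claims "terrible" matters

print(faithfulness_check(predict, explain, "a terrible film"))  # True
```

If the explanation instead named an irrelevant word, removing it would leave the prediction unchanged and the check would return False.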

Evaluating In-Context Learning of Libraries for Code Generation

no code implementations 16 Nov 2023 Arkil Patel, Siva Reddy, Dzmitry Bahdanau, Pradeep Dasigi

Contemporary Large Language Models (LLMs) exhibit a high degree of code generation and comprehension capability.

Code Generation In-Context Learning

MAGNIFICo: Evaluating the In-Context Learning Ability of Large Language Models to Generalize to Novel Interpretations

1 code implementation 18 Oct 2023 Arkil Patel, Satwik Bhattamishra, Siva Reddy, Dzmitry Bahdanau

Additionally, our analysis uncovers the semantic predispositions in LLMs and reveals the impact of recency bias for information presented in long contexts.

In-Context Learning Semantic Parsing +1

Faithfulness Measurable Masked Language Models

1 code implementation 11 Oct 2023 Andreas Madsen, Siva Reddy, Sarath Chandar

This is achieved by using a novel fine-tuning method that incorporates masking, such that masking tokens become in-distribution by design.
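A minimal sketch of that idea, assuming a simple token-level masking scheme (the names and rates here are hypothetical, not the paper's implementation): sampling a fresh masking rate per training example makes masked inputs of any rate in-distribution, so "important" tokens can later be masked out without distribution shift.

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, p, rng):
    """Replace each token with [MASK] independently with probability p."""
    return [MASK if rng.random() < p else t for t in tokens]

def finetune_batches(corpus, max_mask_rate=0.5, seed=0):
    """Yield fine-tuning inputs with a freshly sampled masking rate per
    example, so masked inputs become in-distribution by design."""
    rng = random.Random(seed)
    for sentence in corpus:
        p = rng.uniform(0.0, max_mask_rate)  # fresh rate for this example
        yield mask_tokens(sentence.split(), p, rng)

for masked in finetune_batches(["the cat sat on the mat", "dogs chase cats"]):
    print(" ".join(masked))
```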

In-Context Learning for Text Classification with Many Labels

no code implementations 19 Sep 2023 Aristides Milios, Siva Reddy, Dzmitry Bahdanau

We analyze the performance across number of in-context examples and different model scales, showing that larger models are necessary to effectively and consistently make use of larger context lengths for ICL.

In-Context Learning intent-classification +6

Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering

1 code implementation 31 Jul 2023 Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy

Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness.

Instruction Following Question Answering

The Impact of Positional Encoding on Length Generalization in Transformers

2 code implementations NeurIPS 2023 Amirhossein Kazemnejad, Inkit Padhi, Karthikeyan Natesan Ramamurthy, Payel Das, Siva Reddy

In this paper, we conduct a systematic empirical study comparing the length generalization performance of decoder-only Transformers with five different position encoding approaches including Absolute Position Embedding (APE), T5's Relative PE, ALiBi, and Rotary, in addition to Transformers without positional encoding (NoPE).

Position
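To make the contrast concrete, here is an illustrative sketch (not the paper's code) of two of the compared schemes: sinusoidal absolute position embeddings (one concrete APE instantiation) versus NoPE, where embeddings pass through untouched and order must be inferred from the causal mask alone.

```python
import math

def sinusoidal_ape(seq_len, d_model):
    """Sinusoidal absolute position embeddings, one row per position."""
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe

def add_positions(embeddings, scheme="ape"):
    """Inject position info; 'nope' leaves the embeddings unchanged."""
    if scheme == "nope":
        return embeddings
    pe = sinusoidal_ape(len(embeddings), len(embeddings[0]))
    return [[x + p for x, p in zip(row, prow)]
            for row, prow in zip(embeddings, pe)]

emb = [[0.5, 0.5], [1.0, 1.0]]
print(add_positions(emb, scheme="nope") == emb)  # True: no position signal added
```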

The StatCan Dialogue Dataset: Retrieving Data Tables through Conversations with Genuine Intents

1 code implementation 3 Apr 2023 Xing Han Lu, Siva Reddy, Harm de Vries

We introduce the StatCan Dialogue Dataset, consisting of 19,379 conversation turns between agents working at Statistics Canada and online users looking for published data tables.

Dialogue Generation Table Retrieval

Using In-Context Learning to Improve Dialogue Safety

no code implementations 2 Feb 2023 Nicholas Meade, Spandana Gella, Devamanyu Hazarika, Prakhar Gupta, Di Jin, Siva Reddy, Yang Liu, Dilek Hakkani-Tür

For instance, using automatic evaluation, we find our best fine-tuned baseline generates safe responses to unsafe dialogue contexts from DiaSafety only 4.04% more often than our approach.

In-Context Learning Re-Ranking +1

Can Retriever-Augmented Language Models Reason? The Blame Game Between the Retriever and the Language Model

1 code implementation 18 Dec 2022 Parishad BehnamGhader, Santiago Miret, Siva Reddy

Our findings indicate that the simple similarity metric employed by retrievers is insufficient for retrieving all the necessary statements for reasoning.

Language Modelling Question Answering +1
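The failure mode can be demonstrated with a toy bag-of-words retriever (an illustrative sketch, not the paper's retriever): a lexical distractor outranks a premise that is required for the reasoning chain but shares few surface words with the query.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb)

def retrieve(query, statements, k=2):
    """Top-k statements by surface similarity to the query."""
    return sorted(statements, key=lambda s: cosine(query, s), reverse=True)[:k]

statements = [
    "can penguins fly",    # lexical distractor
    "tweety is a bird",    # premise needed for reasoning
    "all birds can fly",   # premise needed for reasoning
]
# Answering needs BOTH premises, but "tweety is a bird" shares only one
# word with the query and is outranked by the distractor.
print(retrieve("can tweety fly", statements))
```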

Syntactic Substitutability as Unsupervised Dependency Syntax

1 code implementation 29 Nov 2022 Jasper Jian, Siva Reddy

Syntax is a latent hierarchical structure which underpins the robust and compositional nature of human language.

Dependency Parsing Language Modelling

Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

3 code implementations 9 Jun 2022 Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, and several hundred further BIG-bench co-authors, including Siva Reddy

BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models.

Common Sense Reasoning Math +1

Few-shot Question Generation for Personalized Feedback in Intelligent Tutoring Systems

no code implementations 8 Jun 2022 Devang Kulshreshtha, Muhammad Shayan, Robert Belfer, Siva Reddy, Iulian Vlad Serban, Ekaterina Kochmar

Our personalized feedback can pinpoint correct, incorrect, or missing phrases in student answers, as well as guide them toward the correct answer by asking a question in natural language.

Generative Question Answering Question Generation +3

FaithDial: A Faithful Benchmark for Information-Seeking Dialogue

1 code implementation 22 Apr 2022 Nouha Dziri, Ehsan Kamalloo, Sivan Milton, Osmar Zaiane, Mo Yu, Edoardo M. Ponti, Siva Reddy

The goal of information-seeking dialogue is to respond to seeker queries with natural language utterances that are grounded on knowledge sources.

Dialogue Generation Hallucination

On the Origin of Hallucinations in Conversational Models: Is it the Datasets or the Models?

1 code implementation NAACL 2022 Nouha Dziri, Sivan Milton, Mo Yu, Osmar Zaiane, Siva Reddy

Knowledge-grounded conversational models are known to suffer from producing factually invalid statements, a phenomenon commonly called hallucination.

Hallucination

Image Retrieval from Contextual Descriptions

1 code implementation ACL 2022 Benno Krojer, Vaibhav Adlakha, Vibhav Vineet, Yash Goyal, Edoardo Ponti, Siva Reddy

In particular, models are tasked with retrieving the correct image from a set of 10 minimally contrastive candidates based on a contextual description.

Image Retrieval Retrieval

Combining Modular Skills in Multitask Learning

1 code implementation 28 Feb 2022 Edoardo M. Ponti, Alessandro Sordoni, Yoshua Bengio, Siva Reddy

By jointly learning these and a task-skill allocation matrix, the network for each task is instantiated as the average of the parameters of active skills.

Instruction Following reinforcement-learning +1
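The parameter-averaging step can be sketched directly (illustrative; the skill vectors and allocation matrix below are hypothetical toy values):

```python
def task_parameters(skill_params, allocation, task):
    """Instantiate a task's parameters as the average of the parameters
    of its active skills, per a binary task-skill allocation matrix."""
    active = [skill_params[s] for s, on in enumerate(allocation[task]) if on]
    return [sum(vals) / len(active) for vals in zip(*active)]

# Three skills, each a tiny parameter vector (toy values).
skills = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
# Task 0 activates skills 0 and 2; task 1 activates skill 1 only.
allocation = {0: [1, 0, 1], 1: [0, 1, 0]}

print(task_parameters(skills, allocation, 0))  # [1.5, 1.0]
```

In the paper both the skill parameters and the allocation matrix are learned jointly; here the allocation is fixed purely for illustration.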

IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages

3 code implementations 27 Jan 2022 Emanuele Bugliarello, Fangyu Liu, Jonas Pfeiffer, Siva Reddy, Desmond Elliott, Edoardo Maria Ponti, Ivan Vulić

Our benchmark enables the evaluation of multilingual multimodal models for transfer learning, not only in a zero-shot setting, but also in newly defined few-shot learning setups.

Cross-Modal Retrieval Few-Shot Learning +5

Does Entity Abstraction Help Generative Transformers Reason?

no code implementations 5 Jan 2022 Nicolas Gontier, Siva Reddy, Christopher Pal

We study the utility of incorporating entity type abstractions into pre-trained Transformers and test these methods on four NLP tasks requiring different forms of logical reasoning: (1) compositional language understanding with text-based relational reasoning (CLUTRR), (2) abductive reasoning (ProofWriter), (3) multi-hop question answering (HotpotQA), and (4) conversational question answering (CoQA).

Conversational Question Answering Logical Reasoning +2

The Power of Prompt Tuning for Low-Resource Semantic Parsing

no code implementations ACL 2022 Nathan Schucher, Siva Reddy, Harm de Vries

Prompt tuning has recently emerged as an effective method for adapting pre-trained language models to a number of language understanding and generation tasks.

Semantic Parsing

Compositional Generalization in Dependency Parsing

no code implementations ACL 2022 Emily Goodwin, Siva Reddy, Timothy J. O'Donnell, Dzmitry Bahdanau

To test compositional generalization in semantic parsing, Keysers et al. (2020) introduced Compositional Freebase Queries (CFQ).

Dependency Parsing Semantic Parsing

Visually Grounded Reasoning across Languages and Cultures

3 code implementations EMNLP 2021 Fangyu Liu, Emanuele Bugliarello, Edoardo Maria Ponti, Siva Reddy, Nigel Collier, Desmond Elliott

The design of widespread vision-and-language datasets and pre-trained encoders directly adopts, or draws inspiration from, the concepts and images of ImageNet.

Visual Reasoning Zero-Shot Learning

Post-hoc Interpretability for Neural NLP: A Survey

no code implementations 10 Aug 2021 Andreas Madsen, Siva Reddy, Sarath Chandar

Neural networks for NLP are becoming increasingly complex and widespread, and there is growing concern about whether these models are responsible to use.

Modelling Latent Translations for Cross-Lingual Transfer

1 code implementation 23 Jul 2021 Edoardo Maria Ponti, Julia Kreutzer, Ivan Vulić, Siva Reddy

To remedy this, we propose a new technique that integrates both steps of the traditional pipeline (translation and classification) into a single model, by treating the intermediate translations as a latent random variable.

Cross-Lingual Transfer Few-Shot Learning +5
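A rough sketch of the translate-then-classify idea with the translation treated as a latent variable (the toy `translator` and `classifier` below are hypothetical stand-ins; real inference marginalises inside a single trained model):

```python
def classify_with_latent_translation(source, translator, classifier, k=3):
    """Sample k candidate translations, classify each, and aggregate
    label scores weighted by translation probability: a crude
    approximation of marginalising over the latent translation."""
    candidates = translator(source, k)           # [(translation, prob), ...]
    total = sum(p for _, p in candidates)
    scores = {}
    for text, p in candidates:
        label = classifier(text)
        scores[label] = scores.get(label, 0.0) + p / total
    return max(scores, key=scores.get)

# Hypothetical toy components:
translator = lambda s, k: [("this film is great", 0.6),
                           ("this movie is great", 0.3),
                           ("this film is grating", 0.1)][:k]
classifier = lambda t: "pos" if "great" in t.split() else "neg"

print(classify_with_latent_translation("ce film est génial",
                                       translator, classifier))  # pos
```

Aggregating over candidates makes the prediction robust to a single noisy translation, which is the motivation for not committing to one intermediate translation.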

Minimax and Neyman-Pearson Meta-Learning for Outlier Languages

1 code implementation 2 Jun 2021 Edoardo Maria Ponti, Rahul Aralikatte, Disha Shrivastava, Siva Reddy, Anders Søgaard

In fact, under a decision-theoretic framework, MAML can be interpreted as minimising the expected risk across training languages (with a uniform prior), which is known as Bayes criterion.

Meta-Learning Part-Of-Speech Tagging +1

Understanding by Understanding Not: Modeling Negation in Language Models

1 code implementation NAACL 2021 Arian Hosseini, Siva Reddy, Dzmitry Bahdanau, R Devon Hjelm, Alessandro Sordoni, Aaron Courville

To improve language models in this regard, we propose to augment the language modeling objective with an unlikelihood objective that is based on negated generic sentences from a raw text corpus.

Language Modelling Negation
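The shape of that augmented objective can be sketched as follows (an illustrative toy over single-token probabilities, not the paper's training code):

```python
import math

def nll(p):
    """Likelihood term: -log p assigned to the correct continuation."""
    return -math.log(p)

def unlikelihood(p):
    """Unlikelihood term: -log(1 - p), penalising probability mass on a
    continuation that should NOT follow (e.g. after a negated sentence)."""
    return -math.log(1.0 - p)

def combined_loss(p_positive, p_negated, alpha=1.0):
    """Augmented objective: maximise likelihood on the generic sentence,
    apply the unlikelihood penalty on its negated counterpart."""
    return nll(p_positive) + alpha * unlikelihood(p_negated)

# "birds can fly" -> high p("fly") is good; "birds cannot ___" -> the
# same continuation should now receive low probability.
print(combined_loss(0.9, 0.9) > combined_loss(0.9, 0.1))  # True
```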

MeDAL: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining

1 code implementation EMNLP (ClinicalNLP) 2020 Zhi Wen, Xing Han Lu, Siva Reddy

One of the biggest challenges that prohibit the use of many current NLP methods in clinical settings is the availability of public datasets.

Ranked #1 on Mortality Prediction on MIMIC-III (Accuracy metric)

Mortality Prediction Natural Language Understanding

Words aren't enough, their order matters: On the Robustness of Grounding Visual Referring Expressions

1 code implementation ACL 2020 Arjun R. Akula, Spandana Gella, Yaser Al-Onaizan, Song-Chun Zhu, Siva Reddy

To measure the true progress of existing models, we split the test set into two sets, one which requires reasoning on linguistic structure and the other which doesn't.

Contrastive Learning Multi-Task Learning +2

StereoSet: Measuring stereotypical bias in pretrained language models

2 code implementations ACL 2021 Moin Nadeem, Anna Bethke, Siva Reddy

Since pretrained language models are trained on large real world data, they are known to capture stereotypical biases.

Bias Detection Math

Building a Neural Semantic Parser from a Domain Ontology

no code implementations 25 Dec 2018 Jianpeng Cheng, Siva Reddy, Mirella Lapata

We address these challenges with a framework which allows to elicit training data from a domain ontology and bootstrap a neural parser which recursively builds derivations of logical forms.

Semantic Parsing

Learning Typed Entailment Graphs with Global Soft Constraints

1 code implementation TACL 2018 Mohammad Javad Hosseini, Nathanael Chambers, Siva Reddy, Xavier R. Holt, Shay B. Cohen, Mark Johnson, Mark Steedman

We instead propose a scalable method that learns globally consistent similarity scores based on new soft constraints that consider both the structures across typed entailment graphs and inside each graph.

Graph Learning

Learning an Executable Neural Semantic Parser

no code implementations CL 2019 Jianpeng Cheng, Siva Reddy, Vijay Saraswat, Mirella Lapata

This paper describes a neural semantic parser that maps natural language utterances onto logical forms which can be executed against a task-specific environment, such as a knowledge base or a database, to produce a response.

Learning to Paraphrase for Question Answering

no code implementations EMNLP 2017 Li Dong, Jonathan Mallinson, Siva Reddy, Mirella Lapata

Question answering (QA) systems are sensitive to the many different ways natural language expresses the same information need.

Question Answering Sentence

Learning Structured Natural Language Representations for Semantic Parsing

1 code implementation ACL 2017 Jianpeng Cheng, Siva Reddy, Vijay Saraswat, Mirella Lapata

We introduce a neural semantic parser that converts natural language utterances to intermediate representations in the form of predicate-argument structures, which are induced with a transition system and subsequently mapped to target domains.

Semantic Parsing

Universal Dependencies to Logical Form with Negation Scope

no code implementations WS 2017 Federico Fancellu, Siva Reddy, Adam Lopez, Bonnie Webber

Many language technology applications would benefit from the ability to represent negation and its scope on top of widely-used linguistic resources.

Negation

Universal Semantic Parsing

1 code implementation EMNLP 2017 Siva Reddy, Oscar Täckström, Slav Petrov, Mark Steedman, Mirella Lapata

In this work, we introduce UDepLambda, a semantic interface for UD, which maps natural language to logical forms in an almost language-independent fashion and can process dependency graphs.

Question Answering Semantic Parsing

Universal Dependencies to Logical Forms with Negation Scope

1 code implementation 10 Feb 2017 Federico Fancellu, Siva Reddy, Adam Lopez, Bonnie Webber

Many language technology applications would benefit from the ability to represent negation and its scope on top of widely-used linguistic resources.

Negation

Predicting Target Language CCG Supertags Improves Neural Machine Translation

no code implementations WS 2017 Maria Nadejde, Siva Reddy, Rico Sennrich, Tomasz Dwojak, Marcin Junczys-Dowmunt, Philipp Koehn, Alexandra Birch

Our results on WMT data show that explicitly modeling target syntax improves machine translation quality for German->English, a high-resource pair, and for Romanian->English, a low-resource pair, and also improves several syntactic phenomena, including prepositional phrase attachment.

Machine Translation NMT +2

Evaluating Induced CCG Parsers on Grounded Semantic Parsing

1 code implementation EMNLP 2016 Yonatan Bisk, Siva Reddy, John Blitzer, Julia Hockenmaier, Mark Steedman

We compare the effectiveness of four different syntactic CCG parsers for a semantic slot-filling task to explore how much syntactic supervision is required for downstream semantic analysis.

Semantic Parsing slot-filling +1

DNN-based Speech Synthesis for Indian Languages from ASCII text

no code implementations 18 Aug 2016 Srikanth Ronanki, Siva Reddy, Bajibabu Bollepalli, Simon King

These methods first convert the ASCII text to a phonetic script, and then learn a Deep Neural Network to synthesize speech from that.

Speech Synthesis Text-To-Speech Synthesis

Paraphrase Generation from Latent-Variable PCFGs for Semantic Parsing

no code implementations WS 2016 Shashi Narayan, Siva Reddy, Shay B. Cohen

One of the limitations of semantic parsing approaches to open-domain question answering is the lexicosyntactic gap between natural language questions and knowledge base entries: there are many ways to ask a question, all with the same answer.

Open-Domain Question Answering Paraphrase Generation +2

Transforming Dependency Structures to Logical Forms for Semantic Parsing

1 code implementation TACL 2016 Siva Reddy, Oscar Täckström, Michael Collins, Tom Kwiatkowski, Dipanjan Das, Mark Steedman, Mirella Lapata

In contrast, partly due to the lack of a strong type system, dependency structures are easy to annotate and have become a widely used form of syntactic analysis for many languages.

Question Answering Semantic Parsing +1

Large-scale Semantic Parsing without Question-Answer Pairs

no code implementations TACL 2014 Siva Reddy, Mirella Lapata, Mark Steedman

In this paper we introduce a novel semantic parsing approach to query Freebase in natural language without requiring manual annotations or question-answer pairs.

Graph Matching Semantic Parsing

Word Sketches for Turkish

no code implementations LREC 2012 Bharat Ram Ambati, Siva Reddy, Adam Kilgarriff

Word sketches are one-page, automatic, corpus-based summaries of a word's grammatical and collocational behaviour.

Dependency Parsing Language Modelling
