Search Results for author: Mohit Iyyer

Found 81 papers, 55 papers with code

How Much Do Modifications to Transformer Language Models Affect Their Ability to Learn Linguistic Knowledge?

no code implementations insights (ACL) 2022 Simeng Sun, Brian Dillon, Mohit Iyyer

Recent progress in large pretrained language models (LMs) has led to a growth of analyses examining what kinds of linguistic knowledge are encoded by these models.

IGA: An Intent-Guided Authoring Assistant

no code implementations EMNLP 2021 Simeng Sun, Wenlong Zhao, Varun Manjunatha, Rajiv Jain, Vlad Morariu, Franck Dernoncourt, Balaji Vasan Srinivasan, Mohit Iyyer

While large-scale pretrained language models have significantly improved writing assistance functionalities such as autocomplete, more complex and controllable writing assistants have yet to be explored.

Language Modelling • Sentence

Unsupervised Parsing with S-DIORA: Single Tree Encoding for Deep Inside-Outside Recursive Autoencoders

no code implementations EMNLP 2020 Andrew Drozdov, Subendhu Rongali, Yi-Pei Chen, Tim O'Gorman, Mohit Iyyer, Andrew McCallum

The deep inside-outside recursive autoencoder (DIORA; Drozdov et al. 2019) is a self-supervised neural model that learns to induce syntactic tree structures for input sentences *without access to labeled training data*.

Constituency Grammar Induction • Sentence

FABLES: Evaluating faithfulness and content selection in book-length summarization

3 code implementations 1 Apr 2024 Yekyung Kim, Yapei Chang, Marzena Karpinska, Aparna Garimella, Varun Manjunatha, Kyle Lo, Tanya Goyal, Mohit Iyyer

While LLM-based auto-raters have proven reliable for factuality and coherence in other settings, we implement several LLM raters of faithfulness and find that none correlates strongly with human annotations, especially with regard to detecting unfaithful claims.

Long-Context Understanding

GEE! Grammar Error Explanation with Large Language Models

1 code implementation 16 Nov 2023 Yixiao Song, Kalpesh Krishna, Rajesh Bhatt, Kevin Gimpel, Mohit Iyyer

To address this gap, we propose the task of grammar error explanation, where a system needs to provide one-sentence explanations for each grammatical error in a pair of erroneous and corrected sentences.

Grammatical Error Correction • Sentence

Multistage Collaborative Knowledge Distillation from a Large Language Model for Semi-Supervised Sequence Generation

no code implementations 15 Nov 2023 Jiachen Zhao, Wenlong Zhao, Andrew Drozdov, Benjamin Rozonoyer, Md Arafat Sultan, Jay-Yoon Lee, Mohit Iyyer, Andrew McCallum

In this paper, we present the discovery that a student model distilled from a few-shot prompted LLM can commonly generalize better than its teacher to unseen examples on such tasks.

Constituency Parsing • Knowledge Distillation +3

PaRaDe: Passage Ranking using Demonstrations with Large Language Models

no code implementations 22 Oct 2023 Andrew Drozdov, Honglei Zhuang, Zhuyun Dai, Zhen Qin, Razieh Rahimi, Xuanhui Wang, Dana Alon, Mohit Iyyer, Andrew McCallum, Donald Metzler, Kai Hui

Recent studies show that large language models (LLMs) can be instructed to effectively perform zero-shot passage re-ranking, in which the results of a first stage retrieval method, such as BM25, are rated and reordered to improve relevance.

Passage Ranking • Passage Re-Ranking +6
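
The two-stage pipeline described above is straightforward to sketch. In the minimal Python sketch below, `llm_score` is a hypothetical stand-in for however an instructed LLM is made to emit a relevance score (for instance, the likelihood of a "yes" relevance judgment); it is not the paper's actual prompting method.

```python
def rerank_with_llm(query, passages, llm_score, k=10):
    """Second-stage re-ranking: take the top-k candidates from a first-stage
    retriever such as BM25 and reorder them by an LLM-derived relevance score.

    llm_score: callable (query, passage) -> float; hypothetical stand-in.
    """
    head, tail = passages[:k], passages[k:]
    head = sorted(head, key=lambda p: llm_score(query, p), reverse=True)
    return head + tail
```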

FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation

1 code implementation 5 Oct 2023 Tu Vu, Mohit Iyyer, Xuezhi Wang, Noah Constant, Jerry Wei, Jason Wei, Chris Tar, Yun-Hsuan Sung, Denny Zhou, Quoc Le, Thang Luong

Specifically, we introduce FreshQA, a novel dynamic QA benchmark encompassing a diverse range of question and answer types, including questions that require fast-changing world knowledge as well as questions with false premises that need to be debunked.

Hallucination • World Knowledge

BooookScore: A systematic exploration of book-length summarization in the era of LLMs

2 code implementations 1 Oct 2023 Yapei Chang, Kyle Lo, Tanya Goyal, Mohit Iyyer

We find that closed-source LLMs such as GPT-4 and Claude 2 produce summaries with higher BooookScore than those generated by open-source models.

Exploring the impact of low-rank adaptation on the performance, efficiency, and regularization of RLHF

1 code implementation 16 Sep 2023 Simeng Sun, Dhawal Gupta, Mohit Iyyer

During the last stage of RLHF, a large language model is aligned to human intents via PPO training, a process that generally requires large-scale computational resources.

Language Modelling • Large Language Model

A Critical Evaluation of Evaluations for Long-form Question Answering

1 code implementation 29 May 2023 Fangyuan Xu, Yixiao Song, Mohit Iyyer, Eunsol Choi

We present a careful analysis of experts' evaluation, which focuses on new aspects such as the comprehensiveness of the answer.

Long Form Question Answering • Text Generation

KNN-LM Does Not Improve Open-ended Text Generation

no code implementations 24 May 2023 Shufan Wang, Yixiao Song, Andrew Drozdov, Aparna Garimella, Varun Manjunatha, Mohit Iyyer

Digging deeper, we find that interpolating with a retrieval distribution actually increases perplexity compared to a baseline Transformer LM for the majority of tokens in the WikiText-103 test set, even though the overall perplexity is lower because a minority of tokens see dramatic perplexity decreases after interpolation.

Retrieval • Text Generation
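
The interpolation analyzed above is the standard kNN-LM mixture, p(w) = λ·p_kNN(w) + (1 − λ)·p_LM(w). Below is a minimal NumPy sketch of the mixture and the per-token negative log-likelihood used in this kind of analysis; the distributions and λ are illustrative inputs, not the paper's configuration.

```python
import numpy as np

def knn_lm_interpolate(p_lm, p_knn, lam=0.25):
    """Mix the base LM's next-token distribution with a retrieval (kNN)
    distribution; both arrays have shape (vocab_size,) and sum to 1."""
    return lam * p_knn + (1.0 - lam) * p_lm

def token_nll(p, gold_id):
    """Per-token negative log-likelihood; averaging and exponentiating
    these values over a corpus gives perplexity."""
    return -np.log(p[gold_id])
```

Comparing `token_nll` before and after interpolation, token by token, is exactly the kind of accounting that can show most tokens getting worse while a few large improvements still lower corpus-level perplexity.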

PEARL: Prompting Large Language Models to Plan and Execute Actions Over Long Documents

1 code implementation 23 May 2023 Simeng Sun, Yang Liu, Shuohang Wang, Chenguang Zhu, Mohit Iyyer

PEARL outperforms zero-shot and chain-of-thought prompting on this dataset, and ablation experiments show that each stage of PEARL is critical to its performance.

FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation

4 code implementations 23 May 2023 Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, Hannaneh Hajishirzi

Evaluating the factuality of long-form text generated by large language models (LMs) is non-trivial because (1) generations often contain a mixture of supported and unsupported pieces of information, making binary judgments of quality inadequate, and (2) human evaluation is time-consuming and costly.

Language Modelling • Retrieval +1
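
At its core, the metric is a precision over atomic facts: decompose a generation into short claims, judge each against a knowledge source, and report the supported fraction. A schematic sketch follows; the released implementation additionally handles fact extraction and retrieval-backed LM judges, which are abstracted into `is_supported` here.

```python
def factual_precision(atomic_facts, is_supported):
    """Fraction of atomic facts in a generation that the knowledge
    source supports.

    atomic_facts: list of short declarative claims extracted from the text.
    is_supported: callable claim -> bool (abstracting the LM/retrieval judge).
    """
    if not atomic_facts:
        return 0.0
    return sum(map(is_supported, atomic_facts)) / len(atomic_facts)
```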

Large language models effectively leverage document-level context for literary translation, but critical errors persist

1 code implementation 6 Apr 2023 Marzena Karpinska, Mohit Iyyer

Large language models (LLMs) are competitive with the state of the art on a wide range of sentence-level translation datasets.

Sentence • Translation

Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense

1 code implementation NeurIPS 2023 Kalpesh Krishna, Yixiao Song, Marzena Karpinska, John Wieting, Mohit Iyyer

To increase the robustness of AI-generated text detection to paraphrase attacks, we introduce a simple defense that relies on retrieving semantically-similar generations and must be maintained by a language model API provider.

Language Modelling • Outlier Detection +3
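
The defense reduces to a nearest-neighbor lookup over everything the API has generated: if a candidate text is semantically very close to a stored generation, it is flagged even after paraphrasing. A minimal sketch assuming unit-normalized embeddings; the encoder and threshold are illustrative, not the paper's configuration.

```python
import numpy as np

def retrieval_detect(candidate_emb, generation_db, threshold=0.85):
    """Flag text as API-generated if it is near a stored generation.

    candidate_emb: (d,) unit-normalized embedding of the text under test.
    generation_db: (N, d) unit-normalized embeddings of past generations,
        maintained by the API provider.
    threshold: illustrative cosine-similarity cutoff.
    """
    sims = generation_db @ candidate_emb  # cosine similarities, shape (N,)
    return float(sims.max()) >= threshold
```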

Stealing the Decoding Algorithms of Language Models

1 code implementation 8 Mar 2023 Ali Naseh, Kalpesh Krishna, Mohit Iyyer, Amir Houmansadr

A key component of generating text from modern language models (LMs) is the selection and tuning of decoding algorithms.

Text Generation

How Does In-Context Learning Help Prompt Tuning?

no code implementations 22 Feb 2023 Simeng Sun, Yang Liu, Dan Iter, Chenguang Zhu, Mohit Iyyer

This motivates the use of parameter-efficient adaptation methods such as prompt tuning (PT), which adds a small number of tunable embeddings to an otherwise frozen model, and in-context learning (ICL), in which demonstrations of the task are provided to the model in natural language without any additional training.

In-Context Learning • Text Generation
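
Prompt tuning in the sense used here trains only a handful of "virtual token" embeddings that are prepended to the input of a frozen model. A minimal PyTorch sketch of that idea; the dimensions and initialization are illustrative.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Learnable virtual-token embeddings for prompt tuning; the backbone
    model itself stays frozen."""

    def __init__(self, n_tokens: int, d_model: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_tokens, d_model) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, d_model)
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

# Only the soft prompt receives gradients; freeze everything else:
# for p in backbone.parameters():
#     p.requires_grad_(False)
```

ICL, by contrast, changes no parameters at all: task demonstrations are simply concatenated into the input text.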

LongEval: Guidelines for Human Evaluation of Faithfulness in Long-form Summarization

1 code implementation 30 Jan 2023 Kalpesh Krishna, Erin Bransom, Bailey Kuehl, Mohit Iyyer, Pradeep Dasigi, Arman Cohan, Kyle Lo

Motivated by our survey, we present LongEval, a set of guidelines for human evaluation of faithfulness in long-form summaries that addresses the following challenges: (1) How can we achieve high inter-annotator agreement on faithfulness scores?

You can't pick your neighbors, or can you? When and how to rely on retrieval in the kNN-LM

1 code implementation 28 Oct 2022 Andrew Drozdov, Shufan Wang, Razieh Rahimi, Andrew McCallum, Hamed Zamani, Mohit Iyyer

Retrieval-enhanced language models (LMs), which condition their predictions on text retrieved from large external datastores, have recently shown significant perplexity improvements compared to standard LMs.

Language Modelling • Retrieval +2

DEMETR: Diagnosing Evaluation Metrics for Translation

1 code implementation 25 Oct 2022 Marzena Karpinska, Nishant Raj, Katherine Thai, Yixiao Song, Ankita Gupta, Mohit Iyyer

While machine translation evaluation metrics based on string overlap (e.g., BLEU) have their limitations, their computations are transparent: the BLEU score assigned to a particular candidate translation can be traced back to the presence or absence of certain words.

Machine Translation • Translation
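
The transparency claim is concrete: every unit of BLEU credit traces to a specific n-gram's (clipped) presence in the reference. A sketch of the unigram case; full BLEU combines clipped precisions for n = 1..4 with a brevity penalty.

```python
from collections import Counter

def clipped_unigram_precision(candidate, reference):
    """Clipped unigram precision, the n=1 ingredient of BLEU."""
    cand, ref = Counter(candidate), Counter(reference)
    overlap = sum(min(count, ref[word]) for word, count in cand.items())
    return overlap / max(1, sum(cand.values()))

# clipped_unigram_precision("the cat sat".split(), "the cat is here".split())
# -> 2/3, and the credit traces exactly to "the" and "cat".
```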

Exploring Document-Level Literary Machine Translation with Parallel Paragraphs from World Literature

1 code implementation 25 Oct 2022 Katherine Thai, Marzena Karpinska, Kalpesh Krishna, Bill Ray, Moira Inghilleri, John Wieting, Mohit Iyyer

Using Par3, we discover that expert literary translators prefer reference human translations over machine-translated paragraphs at a rate of 84%, while state-of-the-art automatic MT metrics do not correlate with those preferences.

Machine Translation • Translation

SLING: Sino Linguistic Evaluation of Large Language Models

1 code implementation 21 Oct 2022 Yixiao Song, Kalpesh Krishna, Rajesh Bhatt, Mohit Iyyer

To understand what kinds of linguistic knowledge are encoded by pretrained Chinese language models (LMs), we introduce the benchmark of Sino LINGuistics (SLING), which consists of 38K minimal sentence pairs in Mandarin Chinese grouped into 9 high-level linguistic phenomena.

Sentence

Overcoming Catastrophic Forgetting in Zero-Shot Cross-Lingual Generation

1 code implementation 25 May 2022 Tu Vu, Aditya Barua, Brian Lester, Daniel Cer, Mohit Iyyer, Noah Constant

In this paper, we explore the challenging problem of performing a generative task in a target language when labeled data is only available in English, using summarization as a case study.

Cross-Lingual Transfer • Machine Translation +1

RankGen: Improving Text Generation with Large Ranking Models

1 code implementation 19 May 2022 Kalpesh Krishna, Yapei Chang, John Wieting, Mohit Iyyer

Given an input sequence (or prefix), modern language models often assign high probabilities to output sequences that are repetitive, incoherent, or irrelevant to the prefix; as such, model-generated text also contains such artifacts.

Contrastive Learning • Language Modelling +2

Modeling Exemplification in Long-form Question Answering via Retrieval

no code implementations NAACL 2022 Shufan Wang, Fangyuan Xu, Laure Thompson, Eunsol Choi, Mohit Iyyer

We show that not only do state-of-the-art LFQA models struggle to generate relevant examples, but also that standard evaluation metrics such as ROUGE are insufficient to judge exemplification quality.

Long Form Question Answering • Retrieval

ChapterBreak: A Challenge Dataset for Long-Range Language Models

1 code implementation NAACL 2022 Simeng Sun, Katherine Thai, Mohit Iyyer

While numerous architectures for long-range language models (LRLMs) have recently been proposed, a meaningful evaluation of their discourse-level language understanding capabilities has not yet followed.

RELIC: Retrieving Evidence for Literary Claims

1 code implementation ACL 2022 Katherine Thai, Yapei Chang, Kalpesh Krishna, Mohit Iyyer

Humanities scholars commonly provide evidence for claims that they make about a work of literature (e.g., a novel) in the form of quotations from the work.

Information Retrieval • Retrieval +2

Do Long-Range Language Models Actually Use Long-Range Context?

no code implementations EMNLP 2021 Simeng Sun, Kalpesh Krishna, Andrew Mattarella-Micke, Mohit Iyyer

Language models are generally trained on short, truncated input sequences, which limits their ability to use discourse-level information present in long-range context to improve their predictions.

2k • 8k +1

The Perils of Using Mechanical Turk to Evaluate Open-Ended Text Generation

no code implementations EMNLP 2021 Marzena Karpinska, Nader Akoury, Mohit Iyyer

Recent text generation research has increasingly focused on open-ended domains such as story and poetry generation.

Text Generation

STraTA: Self-Training with Task Augmentation for Better Few-shot Learning

1 code implementation EMNLP 2021 Tu Vu, Minh-Thang Luong, Quoc V. Le, Grady Simon, Mohit Iyyer

Despite their recent successes in tackling many NLP tasks, large-scale pre-trained language models do not perform as well in few-shot settings where only a handful of training examples are available.

Few-Shot Learning • Few-Shot NLI +1

Phrase-BERT: Improved Phrase Embeddings from BERT with an Application to Corpus Exploration

1 code implementation EMNLP 2021 Shufan Wang, Laure Thompson, Mohit Iyyer

Phrase representations derived from BERT often do not exhibit complex phrasal compositionality, as the model relies instead on lexical similarity to determine semantic relatedness.

Paraphrase Generation • Topic Models

TABBIE: Pretrained Representations of Tabular Data

2 code implementations NAACL 2021 Hiroshi Iida, Dung Thai, Varun Manjunatha, Mohit Iyyer

Existing work on tabular representation learning jointly models tables and associated text using self-supervised objective functions derived from pretrained language models such as BERT.

Ranked #1 on Column Type Annotation on VizNet-Sato-Full (Weighted-F1 metric)

Cell Detection • Column Type Annotation +1

IGA: An Intent-Guided Authoring Assistant

1 code implementation 14 Apr 2021 Simeng Sun, Wenlong Zhao, Varun Manjunatha, Rajiv Jain, Vlad Morariu, Franck Dernoncourt, Balaji Vasan Srinivasan, Mohit Iyyer

While large-scale pretrained language models have significantly improved writing assistance functionalities such as autocomplete, more complex and controllable writing assistants have yet to be explored.

Language Modelling • Sentence

Revisiting Simple Neural Probabilistic Language Models

1 code implementation NAACL 2021 Simeng Sun, Mohit Iyyer

Recent progress in language modeling has been driven not only by advances in neural architectures, but also through hardware and optimization improvements.

Language Modelling • Word Embeddings

Changing the Mind of Transformers for Topically-Controllable Language Generation

1 code implementation EACL 2021 Haw-Shiuan Chang, Jiaming Yuan, Mohit Iyyer, Andrew McCallum

Our framework consists of two components: (1) a method that produces a set of candidate topics by predicting the centers of word clusters in the possible continuations, and (2) a text generation model whose output adheres to the chosen topics.

Clustering • Text Generation

Hurdles to Progress in Long-form Question Answering

2 code implementations NAACL 2021 Kalpesh Krishna, Aurko Roy, Mohit Iyyer

The task of long-form question answering (LFQA) involves retrieving documents relevant to a given question and using them to generate a paragraph-length answer.

Long Form Question Answering • Open-Domain Dialog +2

Analyzing Gender Bias within Narrative Tropes

1 code implementation EMNLP (NLP+CSS) 2020 Dhruvil Gala, Mohammad Omar Khursheed, Hannah Lerner, Brendan O'Connor, Mohit Iyyer

Popular media reflects and reinforces societal biases through the use of tropes, which are narrative elements, such as archetypal characters and plot arcs, that occur frequently across media.

Reformulating Unsupervised Style Transfer as Paraphrase Generation

1 code implementation EMNLP 2020 Kalpesh Krishna, John Wieting, Mohit Iyyer

Modern NLP defines the task of style transfer as modifying the style of a given sentence without appreciably changing its semantics, which implies that the outputs of style transfer systems should be paraphrases of their inputs.

Attribute • Paraphrase Generation +2

Energy-Based Reranking: Improving Neural Machine Translation Using Energy-Based Models

1 code implementation ACL 2021 Sumanta Bhattacharyya, Amirmohammad Rooshenas, Subhajit Naskar, Simeng Sun, Mohit Iyyer, Andrew McCallum

To benefit from this observation, we train an energy-based model to mimic the behavior of the task measure (i.e., the energy-based model assigns lower energy to samples with higher BLEU score), which results in a re-ranking algorithm based on the samples drawn from NMT: energy-based re-ranking (EBR).

Computational Efficiency • Machine Translation +4
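
Once the energy model is trained so that lower energy tracks higher BLEU, the re-ranking step itself is a one-liner over the NMT samples. A sketch; `energy` stands in for the trained energy-based model.

```python
def energy_rerank(samples, energy):
    """Energy-based re-ranking (EBR): among candidate translations sampled
    from the NMT model, return the one the trained energy model scores
    lowest (lower energy is trained to track higher BLEU).

    energy: callable translation -> float; stand-in for the learned model.
    """
    return min(samples, key=energy)
```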

Open-Retrieval Conversational Question Answering

1 code implementation 22 May 2020 Chen Qu, Liu Yang, Cen Chen, Minghui Qiu, W. Bruce Croft, Mohit Iyyer

We build an end-to-end system for ORConvQA, featuring a retriever, a reranker, and a reader that are all based on Transformers.

Conversational Question Answering • Conversational Search +2

Hard-Coded Gaussian Attention for Neural Machine Translation

1 code implementation ACL 2020 Weiqiu You, Simeng Sun, Mohit Iyyer

Recent work has questioned the importance of the Transformer's multi-headed attention for achieving high translation quality.

Machine Translation • Translation

Exploring and Predicting Transferability across NLP Tasks

1 code implementation EMNLP 2020 Tu Vu, Tong Wang, Tsendsuren Munkhdalai, Alessandro Sordoni, Adam Trischler, Andrew Mattarella-Micke, Subhransu Maji, Mohit Iyyer

We also develop task embeddings that can be used to predict the most transferable source tasks for a given target task, and we validate their effectiveness in experiments controlled for source and target data size.

Language Modelling • Part-Of-Speech Tagging +4

Thieves on Sesame Street! Model Extraction of BERT-based APIs

1 code implementation ICLR 2020 Kalpesh Krishna, Gaurav Singh Tomar, Ankur P. Parikh, Nicolas Papernot, Mohit Iyyer

We study the problem of model extraction in natural language processing, in which an adversary with only query access to a victim model attempts to reconstruct a local copy of that model.

Language Modelling • Model extraction +3

Investigating Sports Commentator Bias within a Large Corpus of American Football Broadcasts

1 code implementation IJCNLP 2019 Jack Merullo, Luke Yeh, Abram Handler, Alvin Grissom II, Brendan O'Connor, Mohit Iyyer

Sports broadcasters inject drama into play-by-play commentary by building team and player narratives through subjective analyses and anecdotes.

Attentive History Selection for Conversational Question Answering

2 code implementations 26 Aug 2019 Chen Qu, Liu Yang, Minghui Qiu, Yongfeng Zhang, Cen Chen, W. Bruce Croft, Mohit Iyyer

First, we propose a positional history answer embedding method to encode conversation history with position information using BERT in a natural way.

Conversational Question Answering • Conversational Search +2

Encouraging Paragraph Embeddings to Remember Sentence Identity Improves Classification

1 code implementation ACL 2019 Tu Vu, Mohit Iyyer

While paragraph embedding models are remarkably effective for downstream classification tasks, what they learn and encode into a single vector remains opaque.

Classification • General Classification +1

Generating Question-Answer Hierarchies

2 code implementations ACL 2019 Kalpesh Krishna, Mohit Iyyer

The process of knowledge acquisition can be viewed as a question-answer game between a student and a teacher in which the student typically starts by asking broad, open-ended questions before drilling down into specifics (Hintikka, 1981; Hakkarainen and Sintonen, 2002).

Language Modelling • Reading Comprehension +2

Syntactically Supervised Transformers for Faster Neural Machine Translation

1 code implementation ACL 2019 Nader Akoury, Kalpesh Krishna, Mohit Iyyer

Standard decoders for neural machine translation autoregressively generate a single target token per time step, which slows inference especially for long outputs.

Machine Translation • Translation

Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Auto-Encoders

1 code implementation NAACL 2019 Andrew Drozdov, Patrick Verga, Mohit Yadav, Mohit Iyyer, Andrew McCallum

We introduce the deep inside-outside recursive autoencoder (DIORA), a fully-unsupervised method for discovering syntax that simultaneously learns representations for constituents within the induced tree.

Constituency Grammar Induction • Sentence

BERT with History Answer Embedding for Conversational Question Answering

1 code implementation 14 May 2019 Chen Qu, Liu Yang, Minghui Qiu, W. Bruce Croft, Yongfeng Zhang, Mohit Iyyer

One of the major challenges to multi-turn conversational search is to model the conversation history to answer the current question.

Conversational Question Answering • Conversational Search +2

Casting Light on Invisible Cities: Computationally Engaging with Literary Criticism

no code implementations NAACL 2019 Shufan Wang, Mohit Iyyer

Literary critics often attempt to uncover meaning in a single work of literature through careful reading and analysis.

Quizbowl: The Case for Incremental Question Answering

no code implementations 9 Apr 2019 Pedro Rodriguez, Shi Feng, Mohit Iyyer, He He, Jordan Boyd-Graber

Throughout this paper, we show that collaborations with the vibrant trivia community have contributed to the quality of our dataset, spawned new research directions, and doubled as an exciting way to engage the public with research in machine learning and natural language processing.

BIG-bench Machine Learning • Decision Making +1

Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Autoencoders

3 code implementations 3 Apr 2019 Andrew Drozdov, Pat Verga, Mohit Yadav, Mohit Iyyer, Andrew McCallum

We introduce deep inside-outside recursive autoencoders (DIORA), a fully-unsupervised method for discovering syntax that simultaneously learns representations for constituents within the induced tree.

Constituency Parsing • Sentence

QuAC: Question Answering in Context

no code implementations EMNLP 2018 Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, Luke Zettlemoyer

We present QuAC, a dataset for Question Answering in Context that contains 14K information-seeking QA dialogs (100K questions in total).

Question Answering • Reading Comprehension

A Differentiable Self-disambiguated Sense Embedding Model via Scaled Gumbel Softmax

no code implementations 27 Sep 2018 Fenfei Guo, Mohit Iyyer, Leah Findlater, Jordan Boyd-Graber

We present a differentiable multi-prototype word representation model that disentangles senses of polysemous words and produces meaningful sense-specific embeddings without external resources.

Hard Attention • Sentence +1
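
The Gumbel-softmax trick makes the discrete choice of a sense differentiable: add Gumbel noise to the sense logits, then apply a temperature softmax. A PyTorch sketch; treating "scaled" as a scale factor on the noise is this sketch's reading, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def scaled_gumbel_softmax(sense_logits, tau=0.5, scale=1.0, eps=1e-20):
    """Differentiable approximate sampling over sense prototypes."""
    u = torch.rand_like(sense_logits)
    gumbel = -torch.log(-torch.log(u + eps) + eps)  # standard Gumbel noise
    return F.softmax((sense_logits + scale * gumbel) / tau, dim=-1)
```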

Revisiting the Importance of Encoding Logic Rules in Sentiment Classification

1 code implementation EMNLP 2018 Kalpesh Krishna, Preethi Jyothi, Mohit Iyyer

We analyze the performance of different sentiment classification models on syntactically complex inputs like A-but-B sentences.

Classification • General Classification +2

QuAC: Question Answering in Context

no code implementations 21 Aug 2018 Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, Luke Zettlemoyer

We present QuAC, a dataset for Question Answering in Context that contains 14K information-seeking QA dialogs (100K questions in total).

Question Answering • Reading Comprehension

Inducing and Embedding Senses with Scaled Gumbel Softmax

no code implementations 22 Apr 2018 Fenfei Guo, Mohit Iyyer, Jordan Boyd-Graber

Methods for learning word sense embeddings represent a single word with multiple sense-specific vectors.

Pathologies of Neural Models Make Interpretations Difficult

no code implementations EMNLP 2018 Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, Jordan Boyd-Graber

In existing interpretation methods for NLP, a word's importance is determined either by input perturbation (measuring the decrease in model confidence when that word is removed) or by the gradient with respect to that word.

Sentence
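
The perturbation variant of word importance is easy to state in code: a word matters to the extent that deleting it lowers the model's confidence in its original prediction. A sketch; `model_confidence` is a hypothetical stand-in for any classifier returning the probability of the originally predicted label.

```python
def leave_one_out_importance(words, model_confidence):
    """Input-perturbation importance for each word in a sentence.

    model_confidence: callable list-of-words -> float, the probability the
        model assigns to its original prediction (hypothetical stand-in).
    """
    base = model_confidence(words)
    return [
        (word, base - model_confidence(words[:i] + words[i + 1:]))
        for i, word in enumerate(words)
    ]
```

The gradient variant replaces the deletion with the gradient of that confidence with respect to the word's embedding.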

Adversarial Example Generation with Syntactically Controlled Paraphrase Networks

2 code implementations NAACL 2018 Mohit Iyyer, John Wieting, Kevin Gimpel, Luke Zettlemoyer

We propose syntactically controlled paraphrase networks (SCPNs) and use them to generate adversarial examples.

Sentence

Deep contextualized word representations

46 code implementations NAACL 2018 Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer

We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy).

Ranked #3 on Only Connect Walls Dataset Task 1 (Grouping) on OCW (Wasserstein Distance (WD) metric, using extra training data)

Citation Intent Classification • Conversational Response Selection +8

Search-based Neural Structured Learning for Sequential Question Answering

no code implementations ACL 2017 Mohit Iyyer, Wen-tau Yih, Ming-Wei Chang

Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans.

Question Answering • Semantic Parsing

The Amazing Mysteries of the Gutter: Drawing Inferences Between Panels in Comic Book Narratives

2 code implementations CVPR 2017 Mohit Iyyer, Varun Manjunatha, Anupam Guha, Yogarshi Vyas, Jordan Boyd-Graber, Hal Daumé III, Larry Davis

While computers can now describe what is explicitly depicted in natural images, in this paper we examine whether they can understand the closure-driven narratives conveyed by stylized artwork and dialogue in comic book panels.

Answering Complicated Question Intents Expressed in Decomposed Question Sequences

no code implementations 4 Nov 2016 Mohit Iyyer, Wen-tau Yih, Ming-Wei Chang

Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans.

Question Answering • Semantic Parsing
