no code implementations • EMNLP 2020 • Andrew Drozdov, Subendhu Rongali, Yi-Pei Chen, Tim O'Gorman, Mohit Iyyer, Andrew McCallum
The deep inside-outside recursive autoencoder (DIORA; Drozdov et al. 2019) is a self-supervised neural model that learns to induce syntactic tree structures for input sentences *without access to labeled training data*.
no code implementations • insights (ACL) 2022 • Simeng Sun, Brian Dillon, Mohit Iyyer
Recent progress in large pretrained language models (LMs) has led to a growth of analyses examining what kinds of linguistic knowledge are encoded by these models.
1 code implementation • 4 Feb 2025 • Abhinav Kumar, Jaechul Roh, Ali Naseh, Marzena Karpinska, Mohit Iyyer, Amir Houmansadr, Eugene Bagdasarian
We evaluated our attack across closed-weights (OpenAI o1, o1-mini, o3-mini) and open-weights (DeepSeek R1) reasoning models on the FreshQA and SQuAD datasets.
1 code implementation • 26 Jan 2025 • Jenna Russell, Marzena Karpinska, Mohit Iyyer
In this paper, we study how well humans can detect text generated by commercial LLMs (GPT-4o, Claude, o1).
no code implementations • 11 Nov 2024 • Chaitanya Malaviya, Joseph Chee Chang, Dan Roth, Mohit Iyyer, Mark Yatskar, Kyle Lo
For instance, a good response to a subjective query would depend on the user's preferences, as would a good response to an open-ended query like "How do antibiotics work against bacteria?"
1 code implementation • 16 Jul 2024 • Rachneet Sachdeva, Yixiao Song, Mohit Iyyer, Iryna Gurevych
This work introduces HaluQuestQA, the first hallucination dataset with localized error annotations for human-written and model-generated LFQA answers.
no code implementations • 28 Jun 2024 • Garima Dhanania, Sheshera Mysore, Chau Minh Pham, Mohit Iyyer, Hamed Zamani, Andrew McCallum
EdTM models topic modeling as an assignment problem while leveraging LM/LLM based document-topic affinities and using optimal transport for making globally coherent topic-assignments.
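As a rough illustration of the assignment framing (not EdTM's actual code), the sketch below treats LM-derived document-topic affinities as an assignment cost and solves it with SciPy's Hungarian solver standing in for the paper's optimal-transport machinery; the affinity matrix here is random.

```python
# Minimal sketch: topic assignment as a balanced assignment problem over
# LM-based document-topic affinities (illustrative, not the EdTM code).
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_docs, n_topics = 6, 3

# Stand-in for LM/LLM affinities, e.g. cosine similarity between
# document embeddings and topic-label embeddings.
affinity = rng.random((n_docs, n_topics))

# Tile topics so each topic receives roughly n_docs / n_topics documents,
# then solve a balanced assignment that maximizes total affinity.
capacity = -(-n_docs // n_topics)               # ceil division
cost = -np.repeat(affinity, capacity, axis=1)   # negate: SciPy minimizes
doc_idx, slot_idx = linear_sum_assignment(cost)
assignment = slot_idx // capacity               # map slots back to topic ids

for d, t in zip(doc_idx, assignment):
    print(f"doc {d} -> topic {t} (affinity {affinity[d, t]:.2f})")
```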
1 code implementation • 27 Jun 2024 • Yixiao Song, Yekyung Kim, Mohit Iyyer
Existing metrics for evaluating the factuality of long-form text, such as FACTSCORE (Min et al., 2023) and SAFE (Wei et al., 2024), decompose an input text into "atomic claims" and verify each against a knowledge base like Wikipedia.
1 code implementation • 27 Jun 2024 • Chau Minh Pham, Simeng Sun, Mohit Iyyer
Existing research on instruction following largely focuses on tasks with simple instructions and short responses.
1 code implementation • 25 Jun 2024 • Shane Arora, Marzena Karpinska, Hung-Ting Chen, Ipsita Bhattacharjee, Mohit Iyyer, Eunsol Choi
To bridge this gap, we introduce CaLMQA, a collection of 1.5K complex culturally specific questions spanning 23 languages and 51 culturally agnostic questions translated from English into 22 other languages.
1 code implementation • 24 Jun 2024 • Marzena Karpinska, Katherine Thai, Kyle Lo, Tanya Goyal, Mohit Iyyer
Synthetic long-context LLM benchmarks (e.g., "needle-in-the-haystack") test only surface-level retrieval capabilities, but how well can long-context LLMs retrieve, synthesize, and reason over information across book-length inputs?
1 code implementation • 20 Jun 2024 • Yapei Chang, Kalpesh Krishna, Amir Houmansadr, John Wieting, Mohit Iyyer
The most effective techniques to detect LLM-generated text rely on inserting a detectable signature -- or watermark -- during the model's decoding process.
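For context, here is a toy sketch of the decoding-time watermark family this sentence refers to, in the spirit of green-list schemes (Kirchenbauer et al., 2023) and not the method proposed in this paper: a hash of the previous token partitions the vocabulary, "green" tokens get a logit boost, and detection counts green hits.

```python
# Toy "green-list" decoding watermark (illustrative only). At each step,
# a hash of the previous token seeds a vocabulary partition, and green
# tokens receive a logit boost before sampling.
import hashlib
import numpy as np

VOCAB = 1000
GAMMA, DELTA = 0.5, 2.0  # green-list fraction, logit bias

def green_list(prev_token: int) -> np.ndarray:
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16) % 2**32
    return np.random.default_rng(seed).permutation(VOCAB)[: int(GAMMA * VOCAB)]

def watermarked_sample(logits: np.ndarray, prev_token: int, rng) -> int:
    boosted = logits.copy()
    boosted[green_list(prev_token)] += DELTA
    probs = np.exp(boosted - boosted.max())
    probs /= probs.sum()
    return int(rng.choice(VOCAB, p=probs))

rng = np.random.default_rng(1)
tokens, prev = [], 0
for _ in range(200):
    logits = rng.normal(size=VOCAB)  # stand-in for real model logits
    prev = watermarked_sample(logits, prev, rng)
    tokens.append(prev)

# Detection: count generated tokens that fall in their green lists and
# z-test the hit rate against the GAMMA baseline.
hits = sum(t in green_list(p) for p, t in zip([0] + tokens[:-1], tokens))
n = len(tokens)
z = (hits - GAMMA * n) / np.sqrt(GAMMA * (1 - GAMMA) * n)
print(f"green fraction = {hits/n:.2f}, z = {z:.1f}")
```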
no code implementations • 21 Apr 2024 • Ali Naseh, Katherine Thai, Mohit Iyyer, Amir Houmansadr
With the digital imagery landscape rapidly evolving, image stocks and AI-generated image marketplaces have become central to visual media.
3 code implementations • 1 Apr 2024 • Yekyung Kim, Yapei Chang, Marzena Karpinska, Aparna Garimella, Varun Manjunatha, Kyle Lo, Tanya Goyal, Mohit Iyyer
While LLM-based auto-raters have proven reliable for factuality and coherence in other settings, we implement several LLM raters of faithfulness and find that none correlates strongly with human annotations, especially with regard to detecting unfaithful claims.
1 code implementation • 16 Nov 2023 • Yixiao Song, Kalpesh Krishna, Rajesh Bhatt, Kevin Gimpel, Mohit Iyyer
To address this gap, we propose the task of grammar error explanation, where a system needs to provide one-sentence explanations for each grammatical error in a pair of erroneous and corrected sentences.
1 code implementation • 15 Nov 2023 • Jiachen Zhao, Wenlong Zhao, Andrew Drozdov, Benjamin Rozonoyer, Md Arafat Sultan, Jay-Yoon Lee, Mohit Iyyer, Andrew McCallum
In this paper, we present the finding that a student model distilled from a few-shot prompted LLM can often generalize better than its teacher to unseen examples on such tasks.
1 code implementation • 2 Nov 2023 • Chau Minh Pham, Alexander Hoyle, Simeng Sun, Philip Resnik, Mohit Iyyer
Topic modeling is a well-established technique for exploring text corpora.
no code implementations • 22 Oct 2023 • Andrew Drozdov, Honglei Zhuang, Zhuyun Dai, Zhen Qin, Razieh Rahimi, Xuanhui Wang, Dana Alon, Mohit Iyyer, Andrew McCallum, Donald Metzler, Kai Hui
Recent studies show that large language models (LLMs) can be instructed to effectively perform zero-shot passage re-ranking, in which the results of a first stage retrieval method, such as BM25, are rated and reordered to improve relevance.
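A schematic of the two-stage pipeline described above; `toy_scorer` is a placeholder for a real LLM call that grades query-passage relevance.

```python
# Schematic retrieve-then-rerank pipeline: BM25 first stage, then
# zero-shot LLM relevance scoring over the top candidates.
from typing import Callable, List, Tuple

def rerank(query: str,
           bm25_results: List[Tuple[str, float]],
           llm_relevance: Callable[[str, str], float],
           top_k: int = 10) -> List[str]:
    """Re-order first-stage candidates by an LLM's graded relevance."""
    candidates = [doc for doc, _ in bm25_results[:top_k]]
    scored = [(llm_relevance(query, doc), doc) for doc in candidates]
    return [doc for _, doc in sorted(scored, reverse=True)]

# Trivial stand-in scorer (a real one would prompt an LLM, e.g.
# "On a scale of 0-4, how relevant is this passage to the query?").
def toy_scorer(query: str, doc: str) -> float:
    return len(set(query.lower().split()) & set(doc.lower().split()))

docs = [("cats are mammals", 1.2), ("dogs bark loudly", 0.9),
        ("a cat sat on the mat", 0.8)]
print(rerank("where did the cat sit", docs, toy_scorer))
```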
2 code implementations • 5 Oct 2023 • Tu Vu, Mohit Iyyer, Xuezhi Wang, Noah Constant, Jerry Wei, Jason Wei, Chris Tar, Yun-Hsuan Sung, Denny Zhou, Quoc Le, Thang Luong
Specifically, we introduce FreshQA, a novel dynamic QA benchmark encompassing a diverse range of question and answer types, including questions that require fast-changing world knowledge as well as questions with false premises that need to be debunked.
2 code implementations • 1 Oct 2023 • Yapei Chang, Kyle Lo, Tanya Goyal, Mohit Iyyer
We find that closed-source LLMs such as GPT-4 and Claude 2 produce summaries with higher BooookScore than those generated by open-source models.
1 code implementation • 16 Sep 2023 • Simeng Sun, Dhawal Gupta, Mohit Iyyer
During the last stage of RLHF, a large language model is aligned to human intents via PPO training, a process that generally requires large-scale computational resources.
1 code implementation • 29 May 2023 • Fangyuan Xu, Yixiao Song, Mohit Iyyer, Eunsol Choi
We present a careful analysis of experts' evaluation, which focuses on new aspects such as the comprehensiveness of the answer.
no code implementations • 24 May 2023 • Shufan Wang, Yixiao Song, Andrew Drozdov, Aparna Garimella, Varun Manjunatha, Mohit Iyyer
Digging deeper, we find that interpolating with a retrieval distribution actually increases perplexity for the majority of tokens in the WikiText-103 test set relative to a baseline Transformer LM; the overall perplexity is lower only because a small set of tokens sees dramatic perplexity decreases after interpolation.
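The interpolation in question is the standard kNN-LM mixture p(w) = λ·p_knn(w) + (1−λ)·p_lm(w); the sketch below shows how the per-token comparison is computed, on synthetic probabilities standing in for real model outputs.

```python
# Per-token analysis sketch: interpolate a retrieval (kNN) distribution
# with the base LM and count gold tokens that get *worse*.
import numpy as np

rng = np.random.default_rng(0)
n_tokens, lam = 10_000, 0.25

p_lm = rng.uniform(0.01, 0.9, n_tokens)   # base LM prob of the gold token
p_knn = rng.uniform(0.0, 0.9, n_tokens)   # retrieval prob of the gold token
p_mix = lam * p_knn + (1 - lam) * p_lm

worse = np.mean(p_mix < p_lm)
ppl_lm = np.exp(-np.mean(np.log(p_lm)))
ppl_mix = np.exp(-np.mean(np.log(p_mix)))
print(f"{worse:.0%} of tokens lose probability after interpolation; "
      f"perplexity {ppl_lm:.1f} -> {ppl_mix:.1f}")
```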
1 code implementation • 23 May 2023 • Simeng Sun, Yang Liu, Shuohang Wang, Chenguang Zhu, Mohit Iyyer
PEARL outperforms zero-shot and chain-of-thought prompting on this dataset, and ablation experiments show that each stage of PEARL is critical to its performance.
4 code implementations • 23 May 2023 • Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, Hannaneh Hajishirzi
Evaluating the factuality of long-form text generated by large language models (LMs) is non-trivial because (1) generations often contain a mixture of supported and unsupported pieces of information, making binary judgments of quality inadequate, and (2) human evaluation is time-consuming and costly.
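The metric this motivates reduces to a simple quantity: the fraction of atomic facts in a generation that the knowledge source supports. A minimal sketch with placeholder decomposition and verification steps (the real pipeline uses an LLM and Wikipedia):

```python
# Minimal atomic-fact factuality score: decompose a generation into
# claims, verify each, report the supported fraction.
from typing import Callable, List

def factuality_score(text: str,
                     decompose: Callable[[str], List[str]],
                     is_supported: Callable[[str], bool]) -> float:
    claims = decompose(text)
    if not claims:
        return 0.0
    return sum(is_supported(c) for c in claims) / len(claims)

# Toy stand-ins: split on periods; "verify" against a tiny fact store.
FACTS = {"paris is in france", "the seine flows through paris"}
score = factuality_score(
    "Paris is in France. The Seine flows through Berlin.",
    decompose=lambda t: [s.strip().lower() for s in t.split(".") if s.strip()],
    is_supported=lambda c: c in FACTS,
)
print(score)  # 0.5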
1 code implementation • 6 Apr 2023 • Marzena Karpinska, Mohit Iyyer
Large language models (LLMs) are competitive with the state of the art on a wide range of sentence-level translation datasets.
1 code implementation • NeurIPS 2023 • Kalpesh Krishna, Yixiao Song, Marzena Karpinska, John Wieting, Mohit Iyyer
To increase the robustness of AI-generated text detection to paraphrase attacks, we introduce a simple defense that relies on retrieving semantically-similar generations and must be maintained by a language model API provider.
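A sketch of the retrieval idea, under the assumption that the provider logs an embedding of every generation and flags candidate text whose nearest logged generation exceeds a similarity threshold; the bag-of-characters encoder is a toy stand-in for a real semantic embedder.

```python
# Retrieval-based detection defense sketch: the API provider keeps a
# database of its own generations; candidate text is flagged if it is
# semantically close to any stored one, even after paraphrasing.
import numpy as np

class GenerationIndex:
    def __init__(self, embed):
        self.embed = embed
        self.vectors = []

    def add(self, generation: str) -> None:
        v = self.embed(generation)
        self.vectors.append(v / np.linalg.norm(v))

    def is_model_generated(self, text: str, threshold: float = 0.8) -> bool:
        if not self.vectors:
            return False
        q = self.embed(text)
        q = q / np.linalg.norm(q)
        sims = np.stack(self.vectors) @ q  # cosine similarities
        return bool(sims.max() >= threshold)

# Toy bag-of-characters "encoder" so the sketch runs end to end.
def embed(text: str) -> np.ndarray:
    v = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            v[ord(ch) - 97] += 1
    return v

index = GenerationIndex(embed)
index.add("The quick brown fox jumps over the lazy dog.")
print(index.is_model_generated("A quick brown fox jumped over a lazy dog!"))
```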
1 code implementation • 8 Mar 2023 • Ali Naseh, Kalpesh Krishna, Mohit Iyyer, Amir Houmansadr
A key component of generating text from modern language models (LM) is the selection and tuning of decoding algorithms.
no code implementations • 22 Feb 2023 • Simeng Sun, Yang Liu, Dan Iter, Chenguang Zhu, Mohit Iyyer
This motivates the use of parameter-efficient adaptation methods such as prompt tuning (PT), which adds a small number of tunable embeddings to an otherwise frozen model, and in-context learning (ICL), in which demonstrations of the task are provided to the model in natural language without any additional training.
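A minimal PyTorch sketch of the prompt-tuning side: a handful of trainable prompt vectors are prepended to the input of an otherwise frozen backbone (here a tiny Transformer standing in for a pretrained LM).

```python
# Prompt-tuning sketch: only `soft_prompt` and the small task head are
# trainable; the backbone and embeddings are frozen.
import torch
import torch.nn as nn

d_model, n_prompt, vocab = 64, 8, 1000

backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
    num_layers=2,
)
embed = nn.Embedding(vocab, d_model)
for p in list(backbone.parameters()) + list(embed.parameters()):
    p.requires_grad = False  # frozen model

soft_prompt = nn.Parameter(torch.randn(n_prompt, d_model) * 0.02)
classifier = nn.Linear(d_model, 2)  # small task head

def forward(token_ids: torch.Tensor) -> torch.Tensor:
    x = embed(token_ids)                             # (B, T, d)
    prompt = soft_prompt.expand(x.size(0), -1, -1)   # (B, P, d)
    h = backbone(torch.cat([prompt, x], dim=1))      # (B, P+T, d)
    return classifier(h[:, 0])                       # read the first slot

logits = forward(torch.randint(0, vocab, (4, 16)))
loss = nn.functional.cross_entropy(logits, torch.tensor([0, 1, 0, 1]))
loss.backward()  # gradients reach only soft_prompt and classifier
```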
1 code implementation • 30 Jan 2023 • Kalpesh Krishna, Erin Bransom, Bailey Kuehl, Mohit Iyyer, Pradeep Dasigi, Arman Cohan, Kyle Lo
Motivated by our survey, we present LongEval, a set of guidelines for human evaluation of faithfulness in long-form summaries that addresses the following challenges: (1) How can we achieve high inter-annotator agreement on faithfulness scores?
1 code implementation • 28 Oct 2022 • Andrew Drozdov, Shufan Wang, Razieh Rahimi, Andrew McCallum, Hamed Zamani, Mohit Iyyer
Retrieval-enhanced language models (LMs), which condition their predictions on text retrieved from large external datastores, have recently shown significant perplexity improvements compared to standard LMs.
Ranked #9 on Language Modelling on WikiText-103
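For readers unfamiliar with the kNN-LM family referenced above, here is a toy sketch of the retrieval distribution: the datastore maps stored context vectors to the tokens that followed them, and retrieved neighbors vote via a softmax over negative distances (random vectors stand in for real context encodings).

```python
# kNN-LM-style retrieval distribution sketch: nearest neighbors in a
# (context vector -> next token) datastore vote for the next token.
import numpy as np

rng = np.random.default_rng(0)
d, n_entries, vocab, k, temp = 32, 5000, 100, 8, 1.0

keys = rng.normal(size=(n_entries, d))      # stored context vectors
values = rng.integers(0, vocab, n_entries)  # token that followed each one

def knn_distribution(query: np.ndarray) -> np.ndarray:
    dists = np.linalg.norm(keys - query, axis=1)
    nn = np.argsort(dists)[:k]
    weights = np.exp(-dists[nn] / temp)
    weights /= weights.sum()
    p = np.zeros(vocab)
    for idx, w in zip(nn, weights):
        p[values[idx]] += w                 # neighbors vote for tokens
    return p

p_knn = knn_distribution(rng.normal(size=d))
print(p_knn.argmax(), p_knn.max())
```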
1 code implementation • 25 Oct 2022 • Marzena Karpinska, Nishant Raj, Katherine Thai, Yixiao Song, Ankita Gupta, Mohit Iyyer
While machine translation evaluation metrics based on string overlap (e.g., BLEU) have their limitations, their computations are transparent: the BLEU score assigned to a particular candidate translation can be traced back to the presence or absence of certain words.
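That transparency is easy to make concrete: clipped n-gram precision, the core of BLEU, exposes exactly which candidate n-grams matched the reference.

```python
# Clipped n-gram precision: every matched n-gram is visible, so the
# score is traceable to specific words in the candidate translation.
from collections import Counter

def clipped_precision(candidate: str, reference: str, n: int) -> float:
    def ngrams(s):
        toks = s.lower().split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = {g: min(c, ref[g]) for g, c in cand.items() if g in ref}
    total = sum(cand.values())
    return sum(overlap.values()) / total if total else 0.0

cand = "the cat sat on the mat"
ref = "the cat is on the mat"
for n in (1, 2):
    print(f"{n}-gram precision = {clipped_precision(cand, ref, n):.2f}")
```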
1 code implementation • 25 Oct 2022 • Katherine Thai, Marzena Karpinska, Kalpesh Krishna, Bill Ray, Moira Inghilleri, John Wieting, Mohit Iyyer
Using Par3, we discover that expert literary translators prefer reference human translations over machine-translated paragraphs at a rate of 84%, while state-of-the-art automatic MT metrics do not correlate with those preferences.
1 code implementation • 21 Oct 2022 • Yixiao Song, Kalpesh Krishna, Rajesh Bhatt, Mohit Iyyer
To understand what kinds of linguistic knowledge are encoded by pretrained Chinese language models (LMs), we introduce the benchmark of Sino LINGuistics (SLING), which consists of 38K minimal sentence pairs in Mandarin Chinese grouped into 9 high-level linguistic phenomena.
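Minimal-pair benchmarks of this kind are typically scored by checking whether the LM assigns higher probability to the acceptable sentence of each pair; here is a sketch using GPT-2 as a stand-in scorer (SLING itself evaluates Chinese LMs on Mandarin pairs).

```python
# Minimal-pair evaluation sketch: the LM "passes" a pair if it assigns
# higher total log-probability to the acceptable sentence.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sentence_logprob(sent: str) -> float:
    ids = tok(sent, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logp = torch.log_softmax(logits[0, :-1], dim=-1)
    return logp.gather(1, ids[0, 1:, None]).sum().item()

good = "The keys to the cabinet are on the table."
bad = "The keys to the cabinet is on the table."
print(sentence_logprob(good) > sentence_logprob(bad))
```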
1 code implementation • 13 Oct 2022 • Ankita Gupta, Marzena Karpinska, Wenlong Zhao, Kalpesh Krishna, Jack Merullo, Luke Yeh, Mohit Iyyer, Brendan O'Connor
Large-scale, high-quality corpora are critical for advancing research in coreference resolution.
1 code implementation • 25 May 2022 • Tu Vu, Aditya Barua, Brian Lester, Daniel Cer, Mohit Iyyer, Noah Constant
In this paper, we explore the challenging problem of performing a generative task in a target language when labeled data is only available in English, using summarization as a case study.
1 code implementation • 19 May 2022 • Kalpesh Krishna, Yapei Chang, John Wieting, Mohit Iyyer
Given an input sequence (or prefix), modern language models often assign high probabilities to output sequences that are repetitive, incoherent, or irrelevant to the prefix; as such, model-generated text also contains such artifacts.
no code implementations • NAACL 2022 • Shufan Wang, Fangyuan Xu, Laure Thompson, Eunsol Choi, Mohit Iyyer
We show that not only do state-of-the-art LFQA models struggle to generate relevant examples, but also that standard evaluation metrics such as ROUGE are insufficient to judge exemplification quality.
2 code implementations • NAACL 2022 • Simeng Sun, Katherine Thai, Mohit Iyyer
While numerous architectures for long-range language models (LRLMs) have recently been proposed, a meaningful evaluation of their discourse-level language understanding capabilities has not yet followed.
1 code implementation • ACL 2022 • Katherine Thai, Yapei Chang, Kalpesh Krishna, Mohit Iyyer
Humanities scholars commonly provide evidence for claims that they make about a work of literature (e.g., a novel) in the form of quotations from the work.
no code implementations • EMNLP 2021 • Simeng Sun, Kalpesh Krishna, Andrew Mattarella-Micke, Mohit Iyyer
Language models are generally trained on short, truncated input sequences, which limits their ability to use discourse-level information present in long-range context to improve their predictions.
no code implementations • EMNLP 2021 • Marzena Karpinska, Nader Akoury, Mohit Iyyer
Recent text generation research has increasingly focused on open-ended domains such as story and poetry generation.
1 code implementation • EMNLP 2021 • Tu Vu, Minh-Thang Luong, Quoc V. Le, Grady Simon, Mohit Iyyer
Despite their recent successes in tackling many NLP tasks, large-scale pre-trained language models do not perform as well in few-shot settings where only a handful of training examples are available.
Ranked #1 on Few-Shot NLI on SNLI (8 training examples per class)
2 code implementations • EMNLP 2021 • Shufan Wang, Laure Thompson, Mohit Iyyer
Phrase representations derived from BERT often do not exhibit complex phrasal compositionality, as the model relies instead on lexical similarity to determine semantic relatedness.
1 code implementation • EMNLP 2021 • Zhiyang Xu, Andrew Drozdov, Jay Yoon Lee, Tim O'Gorman, Subendhu Rongali, Dylan Finkbeiner, Shilpa Suresh, Mohit Iyyer, Andrew McCallum
For over thirty years, researchers have developed and analyzed methods for latent tree induction as an approach for unsupervised syntactic parsing.
2 code implementations • NAACL 2021 • Hiroshi Iida, Dung Thai, Varun Manjunatha, Mohit Iyyer
Existing work on tabular representation learning jointly models tables and associated text using self-supervised objective functions derived from pretrained language models such as BERT.
Ranked #1 on Column Type Annotation on VizNet-Sato-Full (Weighted-F1 metric)
1 code implementation • 14 Apr 2021 • Simeng Sun, Wenlong Zhao, Varun Manjunatha, Rajiv Jain, Vlad Morariu, Franck Dernoncourt, Balaji Vasan Srinivasan, Mohit Iyyer
While large-scale pretrained language models have significantly improved writing assistance functionalities such as autocomplete, more complex and controllable writing assistants have yet to be explored.
1 code implementation • NAACL 2021 • Simeng Sun, Mohit Iyyer
Recent progress in language modeling has been driven not only by advances in neural architectures, but also through hardware and optimization improvements.
Ranked #59 on Language Modelling on WikiText-103
1 code implementation • EACL 2021 • Haw-Shiuan Chang, Jiaming Yuan, Mohit Iyyer, Andrew McCallum
Our framework consists of two components: (1) a method that produces a set of candidate topics by predicting the centers of word clusters in the possible continuations, and (2) a text generation model whose output adheres to the chosen topics.
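A rough sketch of the first component under simplifying assumptions: embed content words observed in sampled continuations and take k-means centers as candidate topic centers (random vectors stand in for real word embeddings and a real sampler).

```python
# Candidate-topic prediction sketch: cluster embeddings of words seen in
# sampled continuations; cluster centers become candidate topics.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
vocab = ["ocean", "wave", "ship", "market", "stock", "trade"]
emb = {w: rng.normal(size=16) for w in vocab}

# Pretend these words were observed in sampled continuations of a prefix.
continuation_words = ["ocean", "wave", "ship", "stock", "trade", "market"]
X = np.stack([emb[w] for w in continuation_words])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for c, center in enumerate(kmeans.cluster_centers_):
    # Label each candidate topic with its nearest vocabulary word.
    nearest = min(vocab, key=lambda w: np.linalg.norm(emb[w] - center))
    members = [w for w, l in zip(continuation_words, kmeans.labels_) if l == c]
    print(f"topic {c} (~'{nearest}'): {members}")
```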
2 code implementations • NAACL 2021 • Kalpesh Krishna, Aurko Roy, Mohit Iyyer
The task of long-form question answering (LFQA) involves retrieving documents relevant to a given question and using them to generate a paragraph-length answer.
Ranked #3 on Question Answering on KILT: ELI5
1 code implementation • 3 Mar 2021 • Chen Qu, Liu Yang, Cen Chen, W. Bruce Croft, Kalpesh Krishna, Mohit Iyyer
Our method is more flexible as it can handle both span answers and freeform answers.
1 code implementation • EMNLP (NLP+CSS) 2020 • Dhruvil Gala, Mohammad Omar Khursheed, Hannah Lerner, Brendan O'Connor, Mohit Iyyer
Popular media reflects and reinforces societal biases through the use of tropes, which are narrative elements, such as archetypal characters and plot arcs, that occur frequently across media.
1 code implementation • EMNLP 2020 • Kalpesh Krishna, John Wieting, Mohit Iyyer
Modern NLP defines the task of style transfer as modifying the style of a given sentence without appreciably changing its semantics, which implies that the outputs of style transfer systems should be paraphrases of their inputs.
1 code implementation • EMNLP 2020 • Nader Akoury, Shufan Wang, Josh Whiting, Stephen Hood, Nanyun Peng, Mohit Iyyer
Systems for story generation are asked to produce plausible and enjoyable stories given an input context.
1 code implementation • ACL 2021 • Sumanta Bhattacharyya, Amirmohammad Rooshenas, Subhajit Naskar, Simeng Sun, Mohit Iyyer, Andrew McCallum
To benefit from this observation, we train an energy-based model to mimic the behavior of the task measure (i.e., the energy-based model assigns lower energy to samples with higher BLEU scores), which results in a re-ranking algorithm based on samples drawn from the NMT model: energy-based re-ranking (EBR).
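The re-ranking step itself is simple once the energy model is trained; the sketch below scores sampled candidate translations with an (untrained, stand-in) energy network and keeps the minimum-energy one.

```python
# Energy-based re-ranking (EBR) sketch: sample candidates from an NMT
# model, score each with a learned energy network (trained so lower
# energy tracks higher BLEU), and keep the minimum-energy candidate.
import torch
import torch.nn as nn

d = 32
energy_net = nn.Sequential(nn.Linear(2 * d, 64), nn.ReLU(), nn.Linear(64, 1))

def rerank(src_vec: torch.Tensor, candidate_vecs: torch.Tensor) -> int:
    """Return the index of the lowest-energy candidate."""
    src = src_vec.expand(candidate_vecs.size(0), -1)
    energies = energy_net(torch.cat([src, candidate_vecs], dim=-1)).squeeze(-1)
    return int(energies.argmin())

# Random vectors stand in for the encoded source / sampled translations.
torch.manual_seed(0)
src = torch.randn(d)
candidates = torch.randn(8, d)  # e.g., 8 samples from the NMT model
print("selected candidate:", rerank(src, candidates))
```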
1 code implementation • 22 May 2020 • Chen Qu, Liu Yang, Cen Chen, Minghui Qiu, W. Bruce Croft, Mohit Iyyer
We build an end-to-end system for ORConvQA, featuring a retriever, a reranker, and a reader that are all based on Transformers.
1 code implementation • ACL 2020 • Weiqiu You, Simeng Sun, Mohit Iyyer
Recent work has questioned the importance of the Transformer's multi-headed attention for achieving high translation quality.
1 code implementation • EMNLP 2020 • Tu Vu, Tong Wang, Tsendsuren Munkhdalai, Alessandro Sordoni, Adam Trischler, Andrew Mattarella-Micke, Subhransu Maji, Mohit Iyyer
We also develop task embeddings that can be used to predict the most transferable source tasks for a given target task, and we validate their effectiveness in experiments controlled for source and target data size.
no code implementations • LREC 2020 • Jordan Boyd-Graber, Fenfei Guo, Leah Findlater, Mohit Iyyer
Text representations are critical for modern natural language processing.
no code implementations • IJCNLP 2019 • Andrew Drozdov, Patrick Verga, Yi-Pei Chen, Mohit Iyyer, Andrew McCallum
Understanding text often requires identifying meaningful constituent spans such as noun phrases and verb phrases.
1 code implementation • ICLR 2020 • Kalpesh Krishna, Gaurav Singh Tomar, Ankur P. Parikh, Nicolas Papernot, Mohit Iyyer
We study the problem of model extraction in natural language processing, in which an adversary with only query access to a victim model attempts to reconstruct a local copy of that model.
1 code implementation • IJCNLP 2019 • Jack Merullo, Luke Yeh, Abram Handler, Alvin Grissom II, Brendan O'Connor, Mohit Iyyer
Sports broadcasters inject drama into play-by-play commentary by building team and player narratives through subjective analyses and anecdotes.
2 code implementations • 26 Aug 2019 • Chen Qu, Liu Yang, Minghui Qiu, Yongfeng Zhang, Cen Chen, W. Bruce Croft, Mohit Iyyer
First, we propose a positional history answer embedding method to encode conversation history with position information using BERT in a natural way.
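A sketch of the idea as described: tokens inside a historical answer receive an extra learned embedding indexed by how many turns ago that answer occurred, added to the token embeddings before the BERT encoder (dimensions and indices here are illustrative).

```python
# Positional history answer embedding sketch: history index 0 marks
# tokens outside any past answer; k marks tokens from the answer given
# k turns ago. The extra embedding is summed into the input embeddings.
import torch
import torch.nn as nn

d_model, max_turns_back = 64, 12
token_emb = nn.Embedding(30522, d_model)           # BERT-sized vocab
history_emb = nn.Embedding(max_turns_back + 1, d_model)

token_ids = torch.randint(0, 30522, (1, 10))
# Tokens 3-5 were the answer one turn ago; tokens 6-7 two turns ago.
history_ids = torch.tensor([[0, 0, 0, 1, 1, 1, 2, 2, 0, 0]])

inputs = token_emb(token_ids) + history_emb(history_ids)
print(inputs.shape)  # torch.Size([1, 10, 64]) -> fed into the encoder
```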
1 code implementation • ACL 2019 • Tu Vu, Mohit Iyyer
While paragraph embedding models are remarkably effective for downstream classification tasks, what they learn and encode into a single vector remains opaque.
2 code implementations • ACL 2019 • Kalpesh Krishna, Mohit Iyyer
The process of knowledge acquisition can be viewed as a question-answer game between a student and a teacher in which the student typically starts by asking broad, open-ended questions before drilling down into specifics (Hintikka, 1981; Hakkarainen and Sintonen, 2002).
1 code implementation • ACL 2019 • Nader Akoury, Kalpesh Krishna, Mohit Iyyer
Standard decoders for neural machine translation autoregressively generate a single target token per time step, which slows inference especially for long outputs.
1 code implementation • NAACL 2019 • Andrew Drozdov, Patrick Verga, Mohit Yadav, Mohit Iyyer, Andrew McCallum
We introduce the deep inside-outside recursive autoencoder (DIORA), a fully-unsupervised method for discovering syntax that simultaneously learns representations for constituents within the induced tree.
1 code implementation • 14 May 2019 • Chen Qu, Liu Yang, Minghui Qiu, W. Bruce Croft, Yongfeng Zhang, Mohit Iyyer
One of the major challenges to multi-turn conversational search is to model the conversation history to answer the current question.
no code implementations • NAACL 2019 • Shufan Wang, Mohit Iyyer
Literary critics often attempt to uncover meaning in a single work of literature through careful reading and analysis.
no code implementations • 9 Apr 2019 • Pedro Rodriguez, Shi Feng, Mohit Iyyer, He He, Jordan Boyd-Graber
Throughout this paper, we show that collaborations with the vibrant trivia community have contributed to the quality of our dataset, spawned new research directions, and doubled as an exciting way to engage the public with research in machine learning and natural language processing.
no code implementations • EMNLP 2018 • Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, Luke Zettlemoyer
We present QuAC, a dataset for Question Answering in Context that contains 14K information-seeking QA dialogs (100K questions in total).
no code implementations • 27 Sep 2018 • Fenfei Guo, Mohit Iyyer, Leah Findlater, Jordan Boyd-Graber
We present a differentiable multi-prototype word representation model that disentangles senses of polysemous words and produces meaningful sense-specific embeddings without external resources.
1 code implementation • EMNLP 2018 • Kalpesh Krishna, Preethi Jyothi, Mohit Iyyer
We analyze the performance of different sentiment classification models on syntactically complex inputs like A-but-B sentences.
no code implementations • 22 Apr 2018 • Fenfei Guo, Mohit Iyyer, Jordan Boyd-Graber
Methods for learning word sense embeddings represent a single word with multiple sense-specific vectors.
no code implementations • EMNLP 2018 • Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, Jordan Boyd-Graber
In existing interpretation methods for NLP, a word's importance is determined by either input perturbation---measuring the decrease in model confidence when that word is removed---or by the gradient with respect to that word.
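Both notions fit in a few lines; the sketch below computes leave-one-out confidence drops and gradient norms for a toy differentiable model (a real setup would use an NLP classifier and its word embeddings).

```python
# The two importance notions in miniature: (1) leave-one-out, the drop
# in model confidence when a word is removed; (2) gradient-based, the
# norm of the gradient of the score w.r.t. that word's embedding.
import torch

torch.manual_seed(0)
words = ["the", "movie", "was", "wonderful"]
emb = {w: torch.randn(8, requires_grad=True) for w in words}
M, v = torch.randn(8, 8), torch.randn(8)

def confidence(ws):
    # Toy model: average of per-word nonlinear features, then a sigmoid.
    h = torch.stack([torch.tanh(M @ emb[w]) for w in ws]).mean(dim=0)
    return torch.sigmoid(v @ h)

base = confidence(words)

# (1) Leave-one-out importance.
for w in words:
    drop = base - confidence([u for u in words if u != w])
    print(f"LOO  {w:9s} {drop.item():+.3f}")

# (2) Gradient importance.
base.backward()
for w in words:
    print(f"grad {w:9s} {emb[w].grad.norm().item():.3f}")
```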
2 code implementations • NAACL 2018 • Mohit Iyyer, John Wieting, Kevin Gimpel, Luke Zettlemoyer
We propose syntactically controlled paraphrase networks (SCPNs) and use them to generate adversarial examples.
1 code implementation • NAACL 2018 • Varun Manjunatha, Mohit Iyyer, Jordan Boyd-Graber, Larry Davis
Automatic colorization is the process of adding color to greyscale images.
46 code implementations • NAACL 2018 • Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer
We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy).
Ranked #3 on Only Connect Walls Dataset Task 1 (Grouping) on OCW (Wasserstein Distance (WD) metric, using extra training data)
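The "deep" part of these representations is operationalized as a learned, task-specific weighted combination of the biLM's layer outputs; a minimal scalar-mix sketch follows (dimensions illustrative).

```python
# ELMo-style "scalar mix": combine all biLM layers with learned softmax
# weights and a scale, letting each task emphasize different layers.
import torch
import torch.nn as nn

n_layers, seq_len, d = 3, 7, 128
layer_reps = [torch.randn(1, seq_len, d) for _ in range(n_layers)]  # biLM outputs

class ScalarMix(nn.Module):
    def __init__(self, n_layers: int):
        super().__init__()
        self.w = nn.Parameter(torch.zeros(n_layers))  # per-layer weights
        self.gamma = nn.Parameter(torch.ones(1))      # task-specific scale

    def forward(self, layers):
        weights = torch.softmax(self.w, dim=0)
        mixed = sum(wi * h for wi, h in zip(weights, layers))
        return self.gamma * mixed

mix = ScalarMix(n_layers)
print(mix(layer_reps).shape)  # torch.Size([1, 7, 128])
```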
no code implementations • ACL 2017 • Mohit Iyyer, Wen-tau Yih, Ming-Wei Chang
Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans.
3 code implementations • CVPR 2017 • Mohit Iyyer, Varun Manjunatha, Anupam Guha, Yogarshi Vyas, Jordan Boyd-Graber, Hal Daumé III, Larry Davis
While computers can now describe what is explicitly depicted in natural images, in this paper we examine whether they can understand the closure-driven narratives conveyed by stylized artwork and dialogue in comic book panels.
11 code implementations • 24 Jun 2015 • Ankit Kumar, Ozan İrsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, Richard Socher
Most tasks in natural language processing can be cast into question answering (QA) problems over language input.
Ranked #67 on Sentiment Analysis on SST-2 Binary classification