Search Results for author: Andrew Drozdov

Found 14 papers, 8 papers with code

Inducing and Using Alignments for Transition-based AMR Parsing

1 code implementation • NAACL 2022 • Andrew Drozdov, Jiawei Zhou, Radu Florian, Andrew McCallum, Tahira Naseem, Yoon Kim, Ramon Fernandez Astudillo

These alignments are learned separately from parser training and require a complex pipeline of rule-based components, pre-processing, and post-processing to satisfy domain-specific constraints.

AMR Parsing

Do latent tree learning models identify meaningful structure in sentences?

1 code implementation • TACL 2018 • Adina Williams, Andrew Drozdov, Samuel R. Bowman

Recent work on the problem of latent tree learning has made it possible to train neural networks that learn to both parse a sentence and use the resulting parse to interpret the sentence, all without exposure to ground-truth parse trees at training time.

Sentence • Sentence Classification

Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Autoencoders

3 code implementations • 3 Apr 2019 • Andrew Drozdov, Pat Verga, Mohit Yadav, Mohit Iyyer, Andrew McCallum

We introduce deep inside-outside recursive autoencoders (DIORA), a fully-unsupervised method for discovering syntax that simultaneously learns representations for constituents within the induced tree.

Constituency Parsing • Sentence
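
The DIORA papers above revolve around an inside pass over a CKY-style chart, where every span's representation is a weighted mixture over the ways of splitting that span into two sub-spans. Below is a minimal NumPy sketch of that inside pass; the composition function, scoring vector, and dimensions are toy placeholders rather than the paper's architecture, and the outside pass and reconstruction objective are omitted.

```python
# Minimal sketch of the inside pass that DIORA-style models build on
# (illustrative only; not the authors' exact architecture or training objective).
import numpy as np

rng = np.random.default_rng(0)
D = 16                                       # toy vector size
W = rng.standard_normal((D, 2 * D)) * 0.1    # toy composition weights
v = rng.standard_normal(D)                   # toy scoring vector

def compose(left, right):
    """Compose two child span vectors into a parent span vector."""
    return np.tanh(W @ np.concatenate([left, right]))

def inside_pass(leaf_vecs):
    """Fill a CKY-style chart: every span's vector is a softmax-weighted
    mixture over all ways of splitting it into two sub-spans."""
    n = len(leaf_vecs)
    chart = {(i, i + 1): (leaf_vecs[i], 0.0) for i in range(n)}  # (vector, score)
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            cands, scores = [], []
            for k in range(i + 1, j):              # every split point
                lv, ls = chart[(i, k)]
                rv, rs = chart[(k, j)]
                p = compose(lv, rv)
                cands.append(p)
                scores.append(v @ p + ls + rs)     # compatibility of this split
            w = np.exp(scores - np.max(scores))
            w /= w.sum()
            vec = sum(wi * ci for wi, ci in zip(w, cands))
            chart[(i, j)] = (vec, float(np.dot(w, scores)))
    return chart

sentence = ["the", "cat", "sat", "down"]
leaves = [rng.standard_normal(D) for _ in sentence]
chart = inside_pass(leaves)
print("root span score:", chart[(0, len(sentence))][1])
```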

Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Auto-Encoders

1 code implementation • NAACL 2019 • Andrew Drozdov, Patrick Verga, Mohit Yadav, Mohit Iyyer, Andrew McCallum

We introduce the deep inside-outside recursive autoencoder (DIORA), a fully-unsupervised method for discovering syntax that simultaneously learns representations for constituents within the induced tree.

Constituency Grammar Induction • Sentence

Emergent Communication in a Multi-Modal, Multi-Step Referential Game

1 code implementation • ICLR 2018 • Katrina Evtimova, Andrew Drozdov, Douwe Kiela, Kyunghyun Cho

Inspired by previous work on emergent communication in referential games, we propose a novel multi-modal, multi-step referential game, where the sender and receiver have access to distinct modalities of an object, and their information exchange is bidirectional and of arbitrary duration.
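
The abstract above describes the game's protocol rather than a model, so the sketch below only mirrors that structure: a sender and receiver that see different views of the target exchange messages back and forth until the receiver decides to guess. Both agents here are random placeholders, not the paper's learned policies.

```python
# Minimal sketch of a multi-step, bidirectional referential game loop
# (structure only; the agents here are stand-ins, not the paper's models).
import random

def play_episode(sender_view, receiver_candidates, target_idx, max_steps=5):
    """Sender sees one modality of the target; receiver sees another modality
    of several candidates. They exchange messages until the receiver decides."""
    receiver_msg = None
    for step in range(max_steps):
        # Sender replies to the receiver's last message (bidirectional exchange).
        sender_msg = hash((sender_view, receiver_msg, step)) % 8    # toy 3-bit message
        # Receiver updates its guess and either asks again or commits.
        guess, done = receiver_policy(sender_msg, receiver_candidates, step)
        if done:
            return guess == target_idx, step + 1
        receiver_msg = hash((guess, step)) % 8
    return guess == target_idx, max_steps

def receiver_policy(msg, candidates, step):
    # Placeholder policy: guess at random, stop with growing probability.
    guess = random.randrange(len(candidates))
    done = random.random() < 0.3 * (step + 1)
    return guess, done

random.seed(0)
correct, steps = play_episode(sender_view="image_0042",
                              receiver_candidates=["descr_a", "descr_b", "descr_c"],
                              target_idx=1)
print(f"correct={correct}, dialogue length={steps}")
```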

You can't pick your neighbors, or can you? When and how to rely on retrieval in the $k$NN-LM

1 code implementation • 28 Oct 2022 • Andrew Drozdov, Shufan Wang, Razieh Rahimi, Andrew McCallum, Hamed Zamani, Mohit Iyyer

Retrieval-enhanced language models (LMs), which condition their predictions on text retrieved from large external datastores, have recently shown significant perplexity improvements compared to standard LMs.

Language Modelling • Retrieval • +2
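
The retrieval-enhanced LMs referenced above (kNN-LM style) look up the current context in a datastore of previously seen contexts and let the nearest neighbors define a next-token distribution. A hedged sketch of that retrieval distribution follows, with synthetic keys and values standing in for a real datastore.

```python
# Hedged sketch of the kNN-LM retrieval distribution: the datastore maps context
# representations to the token that followed them; at test time the k nearest
# contexts vote on the next token. Names and sizes here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
V, D, N, K = 100, 32, 1000, 8          # vocab size, dim, datastore size, neighbors

# Datastore built offline: (context embedding, observed next-token id) pairs.
keys = rng.standard_normal((N, D))
values = rng.integers(0, V, size=N)

def knn_distribution(query, temperature=1.0):
    """Turn distances to the k nearest stored contexts into p_kNN(next token)."""
    d2 = ((keys - query) ** 2).sum(axis=1)          # squared L2 distances
    nn = np.argsort(d2)[:K]
    w = np.exp(-d2[nn] / temperature)
    w /= w.sum()
    p = np.zeros(V)
    for idx, weight in zip(nn, w):
        p[values[idx]] += weight                    # neighbors vote for their token
    return p

query = rng.standard_normal(D)                      # current context representation
p_knn = knn_distribution(query)
print("top retrieved token id:", int(p_knn.argmax()))
```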

The impact of preprint servers in the formation of novel ideas

1 code implementation • EMNLP (sdp) 2020 • Swarup Satish, Zonghai Yao, Andrew Drozdov, Boris Veytsman

We study whether novel ideas in biomedical literature appear first in preprints or traditional journals.

Unsupervised Parsing with S-DIORA: Single Tree Encoding for Deep Inside-Outside Recursive Autoencoders

no code implementations • EMNLP 2020 • Andrew Drozdov, Subendhu Rongali, Yi-Pei Chen, Tim O'Gorman, Mohit Iyyer, Andrew McCallum

The deep inside-outside recursive autoencoder (DIORA; Drozdov et al. 2019) is a self-supervised neural model that learns to induce syntactic tree structures for input sentences *without access to labeled training data*.

Constituency Grammar Induction • Sentence

KNN-LM Does Not Improve Open-ended Text Generation

no code implementations • 24 May 2023 • Shufan Wang, Yixiao Song, Andrew Drozdov, Aparna Garimella, Varun Manjunatha, Mohit Iyyer

Digging deeper, we find that interpolating with a retrieval distribution actually increases perplexity compared to a baseline Transformer LM for the majority of tokens in the WikiText-103 test set, even though the overall perplexity is lower due to a smaller number of tokens for which perplexity dramatically decreases after interpolation.

Retrieval • Text Generation
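
The per-token analysis described in the abstract above can be reproduced in miniature: mix the LM and retrieval distributions, then compare token-level log-probabilities against the base LM and against the corpus-level perplexity. The sketch below uses synthetic distributions, so it only illustrates the bookkeeping, not the paper's finding.

```python
# Hedged sketch of kNN-LM interpolation and the per-token vs. corpus-level
# perplexity comparison. All distributions and targets are synthetic.
import numpy as np

rng = np.random.default_rng(0)
V, T, lam = 100, 500, 0.25                        # vocab, test tokens, kNN weight

def random_dist(n, v):
    x = rng.random((n, v))
    return x / x.sum(axis=1, keepdims=True)

p_lm = random_dist(T, V)                          # base LM next-token distributions
p_knn = random_dist(T, V)                         # retrieval (kNN) distributions
targets = rng.integers(0, V, size=T)              # gold next tokens

p_mix = lam * p_knn + (1 - lam) * p_lm            # kNN-LM interpolation

nll_lm = -np.log(p_lm[np.arange(T), targets])     # per-token negative log-likelihood
nll_mix = -np.log(p_mix[np.arange(T), targets])

helped = (nll_mix < nll_lm).mean()                # fraction of tokens retrieval helps
print(f"tokens improved by retrieval: {helped:.1%}")
print(f"corpus perplexity  LM={np.exp(nll_lm.mean()):.1f}  kNN-LM={np.exp(nll_mix.mean()):.1f}")
```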

PaRaDe: Passage Ranking using Demonstrations with Large Language Models

no code implementations • 22 Oct 2023 • Andrew Drozdov, Honglei Zhuang, Zhuyun Dai, Zhen Qin, Razieh Rahimi, Xuanhui Wang, Dana Alon, Mohit Iyyer, Andrew McCallum, Donald Metzler, Kai Hui

Recent studies show that large language models (LLMs) can be instructed to effectively perform zero-shot passage re-ranking, in which the results of a first stage retrieval method, such as BM25, are rated and reordered to improve relevance.

Passage Ranking • Passage Re-Ranking • +6
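
The two-stage setup mentioned above (first-stage retrieval such as BM25, followed by LLM re-ranking) reduces to rescoring and reordering the first-stage candidates. In the sketch below, `score_with_llm` is a hypothetical stand-in implemented as lexical overlap; a real system would prompt an LLM for the relevance judgment.

```python
# Hedged sketch of second-stage re-ranking: take a first-stage (e.g. BM25) ranking
# and reorder it by a relevance score from a scoring model.
from typing import List, Tuple

def score_with_llm(query: str, passage: str) -> float:
    """Placeholder relevance scorer; a real system would prompt an LLM here."""
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / max(len(q), 1)

def rerank(query: str, first_stage: List[Tuple[str, str]], top_k: int = 10):
    """Rescore the first-stage candidates and reorder them by the new score."""
    rescored = [(pid, score_with_llm(query, text))
                for pid, text in first_stage[:top_k]]
    return sorted(rescored, key=lambda x: x[1], reverse=True)

bm25_results = [("d1", "passage ranking with large language models"),
                ("d2", "a recipe for tomato soup"),
                ("d3", "zero-shot re-ranking of retrieved passages")]
print(rerank("re-ranking passages with language models", bm25_results))
```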

Multistage Collaborative Knowledge Distillation from a Large Language Model for Semi-Supervised Sequence Generation

no code implementations • 15 Nov 2023 • Jiachen Zhao, Wenlong Zhao, Andrew Drozdov, Benjamin Rozonoyer, Md Arafat Sultan, Jay-Yoon Lee, Mohit Iyyer, Andrew McCallum

In this paper, we present the discovery that a student model distilled from a few-shot prompted LLM can commonly generalize better than its teacher to unseen examples on such tasks.

Constituency Parsing • Knowledge Distillation • +3
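
The basic step behind the distillation setup described above is pseudo-labeling: a few-shot prompted teacher labels unlabeled inputs and a smaller student is trained on those labels (the paper's multistage, collaborative setup adds further rounds on top of this step). The sketch below uses trivial placeholder models purely to show the data flow, not the paper's method.

```python
# Hedged sketch of distilling from a few-shot prompted teacher: the teacher
# pseudo-labels unlabeled data, and the student is fit on those pseudo-labels.
# Every function here is a placeholder.
from collections import Counter

def teacher_label(x: str) -> str:
    """Stand-in for a few-shot prompted LLM; here a trivial length rule."""
    return "LONG" if len(x.split()) > 3 else "SHORT"

class Student:
    """Toy student: predicts the majority pseudo-label seen for each word count."""
    def __init__(self):
        self.by_len = {}
    def fit(self, pairs):
        buckets = {}
        for x, y in pairs:
            buckets.setdefault(len(x.split()), Counter())[y] += 1
        self.by_len = {k: c.most_common(1)[0][0] for k, c in buckets.items()}
    def predict(self, x):
        return self.by_len.get(len(x.split()), "SHORT")

unlabeled = ["a b", "a b c d e", "x", "one two three four"]
pseudo = [(x, teacher_label(x)) for x in unlabeled]   # stage 1: teacher labels data
student = Student()
student.fit(pseudo)                                   # stage 2: student distills them
print(student.predict("this sentence has five words"))
```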
