1 code implementation • EMNLP (sdp) 2020 • Swarup Satish, Zonghai Yao, Andrew Drozdov, Boris Veytsman
We study whether novel ideas in biomedical literature appear first in preprints or traditional journals.
no code implementations • EMNLP 2020 • Andrew Drozdov, Subendhu Rongali, Yi-Pei Chen, Tim O'Gorman, Mohit Iyyer, Andrew McCallum
The deep inside-outside recursive autoencoder (DIORA; Drozdov et al. 2019) is a self-supervised neural model that learns to induce syntactic tree structures for input sentences *without access to labeled training data*.
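At the core of DIORA is an inside pass over a CKY-style chart in which each span's vector is a score-weighted mixture over all the ways of splitting that span. The sketch below illustrates that idea only: the averaging composition and dot-product scoring are toy stand-ins for the paper's trained components, and `inside_pass` and its variable names are illustrative, not the released implementation.

```python
# Minimal sketch of a DIORA-style inside pass (toy composition and scoring).
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def inside_pass(leaf_vectors):
    """leaf_vectors: (n, d) array, one vector per input token."""
    n, _ = leaf_vectors.shape
    vec = {}    # span (i, j) -> representation
    score = {}  # span (i, j) -> inside score
    for i in range(n):
        vec[(i, i + 1)] = leaf_vectors[i]
        score[(i, i + 1)] = 0.0

    for length in range(2, n + 1):
        for i in range(0, n - length + 1):
            j = i + length
            splits = list(range(i + 1, j))
            # Toy composition: average the two children.
            cand_vecs = [(vec[(i, k)] + vec[(k, j)]) / 2.0 for k in splits]
            # Toy score: child scores plus a dot-product compatibility term.
            cand_scores = np.array([
                score[(i, k)] + score[(k, j)] + vec[(i, k)] @ vec[(k, j)]
                for k in splits
            ])
            # Each span is a softmax-weighted mixture over its possible splits.
            w = softmax(cand_scores)
            vec[(i, j)] = sum(wk * v for wk, v in zip(w, cand_vecs))
            score[(i, j)] = float(w @ cand_scores)
    return vec, score
```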
1 code implementation • 28 Oct 2022 • Andrew Drozdov, Shufan Wang, Razieh Rahimi, Andrew McCallum, Hamed Zamani, Mohit Iyyer
Retrieval-enhanced language models (LMs), which condition their predictions on text retrieved from large external datastores, have recently shown significant perplexity improvements compared to standard LMs.
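As background for what "retrieval-enhanced" means here, the sketch below shows the standard kNN-LM-style recipe rather than this paper's specific method: look up the current hidden state in a datastore of cached states, turn the retrieved neighbors into a distribution over next tokens, and interpolate it with the base LM's distribution. All names, shapes, and hyperparameters are assumptions for illustration.

```python
# Minimal sketch of kNN-LM-style interpolation (illustrative, not this paper's code).
import numpy as np

def knn_lm_next_token_probs(p_lm, query, datastore_keys, datastore_values,
                            vocab_size, k=8, temperature=1.0, lam=0.25):
    """p_lm            : (vocab_size,) base LM distribution for the next token
       query           : (d,) hidden state at the current position
       datastore_keys  : (n, d) cached hidden states from a reference corpus
       datastore_values: (n,) token id that followed each cached state
    """
    # Brute-force nearest neighbors for clarity; real systems use an
    # approximate index over the datastore.
    dists = np.linalg.norm(datastore_keys - query[None, :], axis=1)
    nn = np.argsort(dists)[:k]

    # Turn negative distances into a distribution over the retrieved tokens.
    weights = np.exp(-dists[nn] / temperature)
    weights /= weights.sum()
    p_knn = np.zeros(vocab_size)
    for w, tok in zip(weights, datastore_values[nn]):
        p_knn[tok] += w

    # Final prediction: fixed interpolation of retrieval and parametric LM.
    return lam * p_knn + (1.0 - lam) * p_lm
```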
no code implementations • 29 Sep 2022 • Andrew Drozdov, Nathanael Schärli, Ekin Akyürek, Nathan Scales, Xinying Song, Xinyun Chen, Olivier Bousquet, Denny Zhou
Humans can reason compositionally when presented with new tasks.
1 code implementation • NAACL 2022 • Andrew Drozdov, Jiawei Zhou, Radu Florian, Andrew McCallum, Tahira Naseem, Yoon Kim, Ramon Fernandez Astudillo
Transition-based AMR parsers rely on node-to-word alignments, which are learned separately from parser training and require a complex pipeline of rule-based components, pre-processing, and post-processing to satisfy domain-specific constraints.
1 code implementation • EMNLP 2021 • Zhiyang Xu, Andrew Drozdov, Jay Yoon Lee, Tim O'Gorman, Subendhu Rongali, Dylan Finkbeiner, Shilpa Suresh, Mohit Iyyer, Andrew McCallum
For over thirty years, researchers have developed and analyzed methods for latent tree induction as an approach for unsupervised syntactic parsing.
no code implementations • IJCNLP 2019 • Andrew Drozdov, Patrick Verga, Yi-Pei Chen, Mohit Iyyer, Andrew McCallum
Understanding text often requires identifying meaningful constituent spans such as noun phrases and verb phrases.
1 code implementation • NAACL 2019 • Andrew Drozdov, Patrick Verga, Mohit Yadav, Mohit Iyyer, Andrew McCallum
We introduce the deep inside-outside recursive autoencoder (DIORA), a fully-unsupervised method for discovering syntax that simultaneously learns representations for constituents within the induced tree.
Ranked #5 on Constituency Grammar Induction on PTB
3 code implementations • 3 Apr 2019 • Andrew Drozdov, Pat Verga, Mohit Yadav, Mohit Iyyer, Andrew McCallum
We introduce deep inside-outside recursive autoencoders (DIORA), a fully-unsupervised method for discovering syntax that simultaneously learns representations for constituents within the induced tree.
1 code implementation • TACL 2018 • Adina Williams, Andrew Drozdov, Samuel R. Bowman
Recent work on the problem of latent tree learning has made it possible to train neural networks that learn to both parse a sentence and use the resulting parse to interpret the sentence, all without exposure to ground-truth parse trees at training time.
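To make this setup concrete, the toy sketch below builds a binary parse and a sentence vector at the same time by greedily merging the most compatible adjacent pair, with no gold trees involved. The greedy dot-product scoring and averaging composition are placeholders, not any of the published latent tree learning models this paper analyzes.

```python
# Toy illustration of latent tree learning: parse and compose jointly,
# without ground-truth trees (placeholder scoring and composition).
import numpy as np

def greedy_parse_and_encode(word_vectors, words):
    """word_vectors: (n, d) array; words: list of n tokens.
    Returns (binary_tree, sentence_vector)."""
    nodes = [(w, v) for w, v in zip(words, word_vectors)]
    while len(nodes) > 1:
        # Score each adjacent pair and merge the most compatible one.
        scores = [nodes[i][1] @ nodes[i + 1][1] for i in range(len(nodes) - 1)]
        i = int(np.argmax(scores))
        (lt, lv), (rt, rv) = nodes[i], nodes[i + 1]
        merged = ((lt, rt), (lv + rv) / 2.0)  # toy composition function
        nodes[i:i + 2] = [merged]
    tree, sentence_vector = nodes[0]
    return tree, sentence_vector
```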
1 code implementation • ICLR 2018 • Katrina Evtimova, Andrew Drozdov, Douwe Kiela, Kyunghyun Cho
Inspired by previous work on emergent communication in referential games, we propose a novel multi-modal, multi-step referential game, where the sender and receiver have access to distinct modalities of an object, and their information exchange is bidirectional and of arbitrary duration.
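A hedged sketch of the game loop described above: the sender sees one modality, the receiver sees candidates in another, and the two exchange messages until the receiver chooses to guess, so the dialogue has arbitrary duration. The `hypothetical_sender` and `hypothetical_receiver` agents are random stand-ins for the trained networks in the paper.

```python
# Minimal sketch of a multi-modal, multi-step referential game loop
# (agents are random placeholders, not the paper's trained models).
import random

def play_episode(sender_obs, receiver_candidates, target_index, max_steps=10):
    receiver_to_sender = None
    for step in range(max_steps):
        # Sender speaks, conditioned on its private observation and the last
        # message it received (the exchange is bidirectional).
        sender_msg = hypothetical_sender(sender_obs, receiver_to_sender)
        # Receiver either replies with a query or terminates with a guess,
        # so the conversation length is not fixed in advance.
        action, payload = hypothetical_receiver(receiver_candidates, sender_msg)
        if action == "guess":
            return payload == target_index, step + 1
        receiver_to_sender = payload
    # Forced guess if the step budget runs out.
    return hypothetical_receiver(receiver_candidates, None)[1] == target_index, max_steps

def hypothetical_sender(obs, incoming):
    return [random.randint(0, 1) for _ in range(8)]  # an 8-bit message

def hypothetical_receiver(candidates, incoming):
    if incoming is None or random.random() < 0.3:
        return "guess", random.randrange(len(candidates))
    return "reply", [random.randint(0, 1) for _ in range(8)]
```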