Search Results for author: Jacob Devlin

Found 29 papers, 11 papers with code

Multi-Vector Attention Models for Deep Re-ranking

no code implementations EMNLP 2021 Giulio Zhou, Jacob Devlin

Large-scale document retrieval systems often utilize two styles of neural network models which live at two different ends of the joint computation vs. accuracy spectrum.

Passage Retrieval · Re-Ranking +1
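As a rough illustration of that spectrum, the sketch below contrasts a cheap dual-encoder retrieval stage with a slower joint query-document scorer used only to re-rank the top candidates. The `encode` and `cross_attention_score` functions are toy placeholders standing in for trained models, not the paper's multi-vector architecture.

```python
import numpy as np

def encode(text: str, dim: int = 8) -> np.ndarray:
    """Toy deterministic embedding; a real system would use a trained encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def cross_attention_score(query: str, doc: str) -> float:
    """Placeholder for a joint query-document model (the accurate, slow end)."""
    return float(np.dot(encode(query + " " + doc), encode(doc)))

def retrieve_then_rerank(query, docs, k=3):
    # Stage 1: fast dual-encoder retrieval over all documents (vectors precomputable offline).
    q = encode(query)
    doc_vecs = np.stack([encode(d) for d in docs])
    candidates = np.argsort(doc_vecs @ q)[::-1][:k]
    # Stage 2: slow, more accurate joint scoring of only the k candidates.
    return sorted(candidates, key=lambda i: cross_attention_score(query, docs[i]), reverse=True)

docs = ["neural ranking", "statistical translation", "entity linking", "program synthesis"]
print(retrieve_then_rerank("rank documents with neural models", docs))
```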

QueryForm: A Simple Zero-shot Form Entity Query Framework

no code implementations 14 Nov 2022 Zifeng Wang, Zizhao Zhang, Jacob Devlin, Chen-Yu Lee, Guolong Su, Hao Zhang, Jennifer Dy, Vincent Perot, Tomas Pfister

Zero-shot transfer learning for document understanding is a crucial yet under-investigated scenario to help reduce the high cost involved in annotating document entities.

Transfer Learning

Efficiently Scaling Transformer Inference

no code implementations 9 Nov 2022 Reiner Pope, Sholto Douglas, Aakanksha Chowdhery, Jacob Devlin, James Bradbury, Anselm Levskaya, Jonathan Heek, Kefan Xiao, Shivani Agrawal, Jeff Dean

We study the problem of efficient generative inference for Transformer models, in one of its most challenging settings: large deep models, with tight latency targets and long sequence lengths.

Quantization
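To make the "large deep models, long sequences, tight latency" setting concrete, here is a back-of-the-envelope sketch of the attention key/value cache that must sit in accelerator memory and be re-read at every decoding step. The model dimensions are illustrative placeholders, not configurations from the paper.

```python
# Back-of-the-envelope KV-cache size for generative Transformer inference.

def kv_cache_bytes(n_layers, n_heads, d_head, seq_len, batch, bytes_per_elem=2):
    # Each layer stores a key and a value vector per head, per position, per sequence.
    return 2 * n_layers * n_heads * d_head * seq_len * batch * bytes_per_elem

# Example: a hypothetical 64-layer model with 64 heads of size 128,
# a batch of 32 sequences, 2048-token context, 16-bit activations.
size = kv_cache_bytes(n_layers=64, n_heads=64, d_head=128, seq_len=2048, batch=32)
print(f"KV cache: {size / 2**30:.1f} GiB")  # 128.0 GiB for this configuration
```

Approaches such as multiquery attention, where keys and values are shared across heads, shrink this cache substantially and are one lever for hitting tight latency targets at long sequence lengths.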

Zero-Shot Entity Linking by Reading Entity Descriptions

3 code implementations ACL 2019 Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, Honglak Lee

First, we show that strong reading comprehension models pre-trained on large unlabeled data can be used to generalize to unseen entities.

Entity Linking · Reading Comprehension
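A minimal sketch of the description-based zero-shot setup: each candidate entity is represented only by its text description, so entities unseen at training time need no entity-specific parameters. The Jaccard-overlap scorer below is a toy stand-in for the pre-trained reading model.

```python
def score(mention_context: str, description: str) -> float:
    """Toy placeholder scorer: token overlap between mention context and description."""
    m, d = set(mention_context.lower().split()), set(description.lower().split())
    return len(m & d) / (len(m | d) or 1)

def link(mention_context: str, entity_descriptions: dict) -> str:
    # Pick the entity whose description best matches the mention in context.
    return max(entity_descriptions, key=lambda e: score(mention_context, entity_descriptions[e]))

entities = {
    "Mercury (planet)": "smallest planet in the solar system, closest to the sun",
    "Mercury (element)": "chemical element, a heavy silvery liquid metal",
}
print(link("the probe entered orbit around mercury , the innermost planet", entities))
```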

Synthetic QA Corpora Generation with Roundtrip Consistency

4 code implementations ACL 2019 Chris Alberti, Daniel Andor, Emily Pitler, Jacob Devlin, Michael Collins

We introduce a novel method of generating synthetic question answering corpora by combining models of question generation and answer extraction, and by filtering the results to ensure roundtrip consistency.

Question Answering · Question Generation +2
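The roundtrip-consistency filter itself is easy to sketch. Here `question_generator` and `answer_extractor` are hypothetical stand-ins for the paper's trained models; only the filtering logic is illustrated.

```python
def roundtrip_filter(passages, question_generator, answer_extractor):
    synthetic = []
    for passage, answer in passages:
        question = question_generator(passage, answer)   # generate q from (passage, answer)
        predicted = answer_extractor(passage, question)  # answer the generated question
        if predicted == answer:                          # keep only roundtrip-consistent triples
            synthetic.append((passage, question, answer))
    return synthetic

# Toy usage with trivial stand-in models:
gen = lambda passage, answer: f"What is mentioned in: {passage}?"
ext = lambda passage, question: passage.split()[-1]
data = [("BERT was introduced by Devlin", "Devlin"), ("the sky is blue", "green")]
print(roundtrip_filter(data, gen, ext))  # keeps only the first, consistent example
```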

Language Model Pre-training for Hierarchical Document Representations

no code implementations ICLR 2019 Ming-Wei Chang, Kristina Toutanova, Kenton Lee, Jacob Devlin

Hierarchical neural architectures are often used to capture long-distance dependencies and have been applied to many document-level tasks such as summarization, document segmentation, and sentiment analysis.

Document Summarization · Extractive Document Summarization +4

Universal Neural Machine Translation for Extremely Low Resource Languages

no code implementations NAACL 2018 Jiatao Gu, Hany Hassan, Jacob Devlin, Victor O. K. Li

Our proposed approach uses transfer learning to share lexical and sentence-level representations across multiple source languages into one target language.

Machine Translation · Transfer Learning +1

Semantic Code Repair using Neuro-Symbolic Transformation Networks

no code implementations ICLR 2018 Jacob Devlin, Jonathan Uesato, Rishabh Singh, Pushmeet Kohli

We study the problem of semantic code repair, which can be broadly defined as automatically fixing non-syntactic bugs in source code.

Code Repair

Neural Program Meta-Induction

no code implementations NeurIPS 2017 Jacob Devlin, Rudy Bunel, Rishabh Singh, Matthew Hausknecht, Pushmeet Kohli

In our first proposal, portfolio adaptation, a set of induction models is pretrained on a set of related tasks, and the best model is adapted towards the new task using transfer learning.

Program induction · Transfer Learning
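A schematic sketch of portfolio adaptation as described above: several pretrained induction models compete on the new task's few examples, and only the best one is adapted. The model, evaluation, and fine-tuning functions are hypothetical placeholders.

```python
def portfolio_adapt(pretrained_models, new_task_examples, finetune, evaluate):
    # Select the pretrained model with the lowest loss on the new task's examples.
    best = min(pretrained_models, key=lambda m: evaluate(m, new_task_examples))
    # Adapt (fine-tune) only the selected model on the small number of examples.
    return finetune(best, new_task_examples)

# Toy usage: "models" are just offsets, and the new task is y = x + 3.
examples = [(1, 4), (2, 5)]
models = [1, 2, 3]
evaluate = lambda m, ex: sum(abs((x + m) - y) for x, y in ex)
finetune = lambda m, ex: m  # no-op adaptation in this toy example
print(portfolio_adapt(models, examples, finetune, evaluate))  # -> 3
```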

Sharp Models on Dull Hardware: Fast and Accurate Neural Machine Translation Decoding on the CPU

1 code implementation EMNLP 2017 Jacob Devlin

By combining these techniques, our best system achieves a very competitive accuracy of 38.3 BLEU on WMT English-French NewsTest2014, while decoding at 100 words/sec on single-threaded CPU.

Machine Translation · NMT +1

RobustFill: Neural Program Learning under Noisy I/O

3 code implementations ICML 2017 Jacob Devlin, Jonathan Uesato, Surya Bhupatiraju, Rishabh Singh, Abdel-rahman Mohamed, Pushmeet Kohli

Recently, two competing approaches for automatic program learning have received significant attention: (1) neural program synthesis, where a neural network is conditioned on input/output (I/O) examples and learns to generate a program, and (2) neural program induction, where a neural network generates new outputs directly using a latent program representation.

Program induction · Program Synthesis
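The contrast between the two paradigms is easiest to see in their interfaces: synthesis produces an explicit, reusable program from I/O examples, while induction maps a new input directly to an output via a latent program. The toy "models" below only illustrate those contracts, not RobustFill's architecture.

```python
def synthesize(examples):
    # Toy synthesizer: assumes the underlying program appends a fixed suffix.
    inp, out = examples[0]
    suffix = out[len(inp):]
    return lambda x: x + suffix          # an explicit, reusable "program"

def induce(examples, new_input):
    # Toy induction model: produces the new output directly, conditioning on
    # the examples without ever emitting a program.
    inp, out = examples[0]
    return new_input + out[len(inp):]

examples = [("Jacob", "Jacob Devlin")]
program = synthesize(examples)
print(program("Ming-Wei"))               # synthesis: run the recovered program on new input
print(induce(examples, "Ming-Wei"))      # induction: output generated directly
```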

Generating Natural Questions About an Image

2 code implementations ACL 2016 Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Margaret Mitchell, Xiaodong He, Lucy Vanderwende

There has been an explosion of work in the vision & language community during the past few years from image captioning to video transcription, and answering questions about images.

Image Captioning · Natural Questions +3

Detecting Interrogative Utterances with Recurrent Neural Networks

no code implementations 3 Nov 2015 Junyoung Chung, Jacob Devlin, Hany Hassan Awadalla

In this paper, we explore different neural network architectures that can predict if a speaker of a given utterance is asking a question or making a statement.

General Classification
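A minimal sketch of this kind of recurrent classifier, written in PyTorch (which post-dates the paper): read the utterance token by token and predict question vs. statement from the final hidden state. Dimensions and vocabulary size are arbitrary placeholders.

```python
import torch
import torch.nn as nn

class UtteranceClassifier(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)   # 2 classes: question / statement

    def forward(self, token_ids):         # token_ids: (batch, seq_len)
        _, h = self.rnn(self.embed(token_ids))
        return self.out(h[-1])            # logits from the last hidden state

model = UtteranceClassifier()
logits = model(torch.randint(0, 1000, (4, 12)))  # batch of 4 dummy utterances
print(logits.shape)  # torch.Size([4, 2])
```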

Statistical Machine Translation Features with Multitask Tensor Networks

no code implementations IJCNLP 2015 Hendra Setiawan, Zhongqiang Huang, Jacob Devlin, Thomas Lamar, Rabih Zbib, Richard Schwartz, John Makhoul

We present a three-pronged approach to improving Statistical Machine Translation (SMT), building on recent success in the application of neural networks to SMT.

Machine Translation · Tensor Networks +1
