no code implementations • 10 May 2022 • Qiujing Lu, Weiqiao Han, Jeffrey Ling, Minfa Wang, Haoyu Chen, Balakrishnan Varadarajan, Paul Covington
Predicting future trajectories of road agents is a critical task for autonomous driving.
no code implementations • ICLR 2022 • Jiquan Ngiam, Vijay Vasudevan, Benjamin Caine, Zhengdong Zhang, Hao-Tien Lewis Chiang, Jeffrey Ling, Rebecca Roelofs, Alex Bewley, Chenxi Liu, Ashish Venugopal, David J Weiss, Ben Sapp, Zhifeng Chen, Jonathon Shlens
In this work, we formulate a model for predicting the behavior of all agents jointly, producing consistent futures that account for interactions between agents.
4 code implementations • 15 Jun 2021 • Jiquan Ngiam, Benjamin Caine, Vijay Vasudevan, Zhengdong Zhang, Hao-Tien Lewis Chiang, Jeffrey Ling, Rebecca Roelofs, Alex Bewley, Chenxi Liu, Ashish Venugopal, David Weiss, Ben Sapp, Zhifeng Chen, Jonathon Shlens
In this work, we formulate a model for predicting the behavior of all agents jointly, producing consistent futures that account for interactions between agents.
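The joint formulation above contrasts with per-agent (marginal) prediction: futures for all agents are decoded at once, so interactions are baked into a single consistent scene. Below is a minimal sketch of that idea using attention across agents; the module names, shapes, and toy two-layer encoder are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of joint multi-agent trajectory prediction with attention.
# Hypothetical shapes and module names; not the paper's actual model.
import torch
import torch.nn as nn

class JointAgentPredictor(nn.Module):
    def __init__(self, feat_dim=32, d_model=64, horizon=12):
        super().__init__()
        self.embed = nn.Linear(feat_dim, d_model)
        # Self-attention over all agents lets each predicted future
        # condition on every other agent's history (joint modeling).
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, horizon * 2)  # (x, y) per future step
        self.horizon = horizon

    def forward(self, agent_feats):  # [batch, num_agents, feat_dim]
        h = self.encoder(self.embed(agent_feats))
        out = self.head(h)           # one consistent future per agent
        return out.view(*agent_feats.shape[:2], self.horizon, 2)

preds = JointAgentPredictor()(torch.randn(1, 8, 32))
print(preds.shape)  # torch.Size([1, 8, 12, 2])
```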
no code implementations • 11 Jan 2020 • Jeffrey Ling, Nicholas FitzGerald, Zifei Shan, Livio Baldini Soares, Thibault Févry, David Weiss, Tom Kwiatkowski
Language modeling tasks, in which words, or word-pieces, are predicted on the basis of a local context, have been very effective for learning word embeddings and context dependent representations of phrases.
Ranked #1 on Entity Linking on CoNLL-Aida
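The entry above learns entity representations by predicting entities, rather than words, from context. A hedged sketch of that fill-in-the-entity scoring, assuming a learned entity-embedding table and a stand-in context encoder (the real system uses a BERT-style encoder; all names here are illustrative):

```python
# Sketch: score a context encoding against a table of learned entity
# embeddings; the highest-scoring entity links the mention.
import torch
import torch.nn as nn

num_entities, d = 10_000, 128
entity_table = nn.Embedding(num_entities, d)      # one vector per entity
context_encoder = nn.Sequential(                  # stand-in for a real encoder
    nn.Linear(300, d), nn.ReLU(), nn.Linear(d, d))

context_vec = context_encoder(torch.randn(1, 300))    # encode the mention's context
scores = context_vec @ entity_table.weight.T          # dot product with every entity
predicted = scores.argmax(dim=-1)                     # predicted entity id
print(predicted.shape)  # torch.Size([1])
```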
1 code implementation • IJCNLP 2019 • Chris Alberti, Jeffrey Ling, Michael Collins, David Reitter
To advance models of multimodal context, we introduce a simple yet powerful neural architecture for data that combines vision and natural language.
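One common way to realize such a vision-plus-language architecture, offered here only as a rough assumed sketch (not necessarily the paper's exact design), is to project detected-object features into the text token space and run a single encoder over the joint sequence:

```python
# Illustrative fusion of detected-object features into a text token sequence.
import torch
import torch.nn as nn

d_model = 64
visual_proj = nn.Linear(2048, d_model)   # e.g., CNN region features -> token space
text_embed = nn.Embedding(1000, d_model)

tokens = text_embed(torch.randint(0, 1000, (1, 16)))  # [1, 16, d] text tokens
regions = visual_proj(torch.randn(1, 4, 2048))        # [1, 4, d] detected objects
sequence = torch.cat([tokens, regions], dim=1)        # one joint sequence

layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
fused = nn.TransformerEncoder(layer, num_layers=2)(sequence)
print(fused.shape)  # torch.Size([1, 20, 64]) -- text and vision share one context
```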
13 code implementations • ACL 2019 • Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, Tom Kwiatkowski
General-purpose relation extractors, which can model arbitrary relations, are a core aspiration in information extraction.
Ranked #9 on Relation Classification on TACRED
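A widely used recipe from this line of work, sketched below under assumptions (toy encoder, hypothetical marker token ids, hand-picked marker positions), is to wrap the two entity spans in marker tokens and take the encoder's hidden states at those markers as the relation representation:

```python
# Sketch: entity-marker relation representation. Toy encoder, not the
# paper's BERT setup; E1/E2 marker ids and positions are illustrative.
import torch
import torch.nn as nn

vocab, d = 1000, 64
E1, E2 = 998, 999                    # reserved entity-start marker ids
embed = nn.Embedding(vocab, d)
layer = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)

ids = torch.tensor([[5, E1, 17, 23, E2, 42, 7]])  # "... [E1] ent1 ... [E2] ent2 ..."
h = encoder(embed(ids))                           # [1, 7, d]
e1_pos, e2_pos = 1, 4                             # positions of the markers
relation_rep = torch.cat([h[:, e1_pos], h[:, e2_pos]], dim=-1)
print(relation_rep.shape)  # torch.Size([1, 128])
```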
no code implementations • ICLR Workshop LLD 2019 • Jeffrey Ling, Nicholas FitzGerald, Livio Baldini Soares, David Weiss, Tom Kwiatkowski
Language modeling tasks, in which words are predicted on the basis of a local context, have been very effective for learning word embeddings and context dependent representations of phrases.
no code implementations • WS 2017 • Jeffrey Ling, Alexander Rush
Sequence-to-sequence models with attention have been successful for a variety of NLP problems, but their speed does not scale well for tasks with long source sequences such as document summarization.
Ranked #25 on Document Summarization on CNN / Daily Mail
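The scaling problem named above comes from the attention read itself: every decoder step scores every source position, so per-step work grows linearly with source length, and total decoding cost with source length times target length. A minimal NumPy illustration (not the paper's model):

```python
# Why attention cost grows with source length: one score per source token
# at every decoder step.
import numpy as np

def attention_step(query, source_states):
    scores = source_states @ query          # [source_len] -> O(source_len) work
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ source_states          # context vector

d = 64
for source_len in (100, 10_000):            # a sentence vs. a document
    src = np.random.randn(source_len, d)
    ctx = attention_step(np.random.randn(d), src)
    print(source_len, ctx.shape)             # per-step work scales with source_len
```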
14 code implementations • ICML 2017 • Yuntian Deng, Anssi Kanervisto, Jeffrey Ling, Alexander M. Rush
We present a neural encoder-decoder model to convert images into presentational markup based on a scalable coarse-to-fine attention mechanism.
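Coarse-to-fine attention cuts that cost by attending in two stages: score pooled summaries of coarse regions first, then attend only within the selected region's fine cells. The sketch below is one simplified variant with hard region selection; the shapes, pooling, and selection rule are assumptions rather than the paper's exact mechanism:

```python
# Two-stage (coarse-to-fine) attention over a grid of image features.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d = 32
fine = rng.standard_normal((4, 4, d))   # 4 coarse regions x 4 fine cells each
coarse = fine.mean(axis=1)              # pooled summary per region
query = rng.standard_normal(d)

region_w = softmax(coarse @ query)      # coarse attention: 4 scores
r = int(region_w.argmax())              # hard-select one region (one variant)
cell_w = softmax(fine[r] @ query)       # fine attention: only 4 more scores
context = cell_w @ fine[r]
print(r, context.shape)                 # scored 8 positions instead of all 16
```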