no code implementations • 21 Feb 2024 • Jianqiang Shen, Yuchin Juan, Shaobo Zhang, Ping Liu, Wen Pu, Sriram Vasudevan, Qingquan Song, Fedor Borisyuk, Kay Qianqi Shen, Haichao Wei, Yunxiang Ren, Yeou S. Chiou, Sicong Kuang, Yuan Yin, Ben Zheng, Muchen Wu, Shaghayegh Gharghabi, Xiaoqing Wang, Huichao Xue, Qi Guo, Daniel Hewlett, Luke Simon, Liangjie Hong, Wenjing Zhang
Web-scale search systems typically tackle the scalability challenge with a two-step paradigm: retrieval and ranking.
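The retrieve-then-rank split described in this abstract can be sketched as a toy pipeline: a cheap first stage scans the whole corpus, and an expensive second stage re-scores only the survivors. The scoring functions and corpus below are illustrative assumptions, not the paper's actual system.

```python
# Minimal sketch of the two-step retrieval-and-ranking paradigm.
# Both scoring rules are toy assumptions for illustration only.

def retrieve(query, corpus, k=100):
    """Cheap first stage: keyword-overlap score over the full corpus."""
    q_terms = set(query.lower().split())
    scored = [(len(q_terms & set(doc.lower().split())), doc) for doc in corpus]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def rank(query, candidates):
    """Expensive second stage: a finer-grained score on few candidates."""
    q_terms = query.lower().split()
    def score(doc):
        words = doc.lower().split()
        return sum(words.count(t) for t in q_terms) / (len(words) or 1)
    return sorted(candidates, key=score, reverse=True)

corpus = ["machine learning engineer",
          "sales manager",
          "learning to rank for search"]
results = rank("machine learning", retrieve("machine learning", corpus))
```

The point of the split is that the second stage only ever sees the small candidate set, so its per-document cost never multiplies against the full corpus size.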
no code implementations • 20 Feb 2024 • Ping Liu, Haichao Wei, Xiaochen Hou, Jianqiang Shen, Shihai He, Kay Qianqi Shen, Zhujun Chen, Fedor Borisyuk, Daniel Hewlett, Liang Wu, Srikant Veeraraghavan, Alex Tsun, Chengming Jiang, Wenjing Zhang
This methodology decouples the training of the GNN model from that of existing deep neural network (DNN) models, eliminating the need for frequent GNN retraining while keeping graph signals up to date in near real time, and allowing the effective integration of GNN insights through transfer learning.
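The decoupling described above can be sketched as precomputing GNN node embeddings offline and feeding them to the downstream DNN as frozen input features, so that the DNN can be retrained on fresh data without touching the GNN. The entity IDs, embedding values, and dimensions below are assumptions for illustration.

```python
# Sketch of decoupled GNN-to-DNN transfer: GNN embeddings are produced by a
# separately trained model (refreshed in near real time) and consumed here
# as frozen features. All IDs and values are illustrative assumptions.

# Pretend these came from the separately trained GNN.
gnn_embeddings = {
    "member_1": [0.1, 0.4],
    "job_9": [0.3, 0.2],
}

def build_features(member_id, job_id, base_features):
    """Concatenate the existing DNN features with frozen GNN embeddings
    for both sides of the (member, job) pair."""
    return base_features + gnn_embeddings[member_id] + gnn_embeddings[job_id]

x = build_features("member_1", "job_9", [1.0, 0.0, 0.5])
```

Because the embedding table is just an input feature store, refreshing it updates the graph signal without retraining either model.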
no code implementations • EMNLP 2017 • Daniel Hewlett, Llion Jones, Alexandre Lacoste, Izzeddin Gur
We also evaluate the model in a semi-supervised setting by downsampling the WikiReading training set to create progressively smaller amounts of supervision, while leaving the full unlabeled document corpus available to train a sequence autoencoder on document windows.
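The semi-supervised setup described in this snippet can be sketched in two pieces: subsampling the labeled set to a given fraction, and slicing the full unlabeled corpus into document windows for autoencoder pretraining. The window size, stride, and data below are assumptions, not the paper's actual configuration.

```python
# Sketch of the semi-supervised evaluation setup: shrink the labeled set
# while keeping all unlabeled text as pretraining windows.
import random

def downsample(labeled, fraction, seed=0):
    """Sample a fixed fraction of the labeled examples without replacement."""
    rng = random.Random(seed)
    k = max(1, int(len(labeled) * fraction))
    return rng.sample(labeled, k)

def document_windows(tokens, size=3, stride=1):
    """Sliding windows over a document, usable as unlabeled
    sequence-autoencoder training examples."""
    return [tokens[i:i + size] for i in range(0, len(tokens) - size + 1, stride)]

labeled = [("doc%d" % i, "label") for i in range(100)]
small = downsample(labeled, 0.1)                      # 10% supervision
windows = document_windows("a b c d e".split(), size=3)
```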
no code implementations • ACL 2017 • Eunsol Choi, Daniel Hewlett, Jakob Uszkoreit, Illia Polosukhin, Alexandre Lacoste, Jonathan Berant
We present a framework for question answering that can efficiently scale to longer documents while maintaining or even improving performance of state-of-the-art models.
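One common way to scale question answering to long documents, consistent with the framework this abstract describes, is a coarse-to-fine approach: cheaply select a few relevant sentences, then run the expensive answering model only on that excerpt. The overlap heuristic below is an illustrative stand-in for the paper's learned selection model.

```python
# Illustrative coarse-to-fine selection for long-document QA.
# The token-overlap heuristic is an assumption, not the paper's method.

def _tokens(text):
    """Lowercased word set with basic punctuation stripped."""
    return set(text.lower().replace(".", " ").replace("?", " ").split())

def select_sentences(question, sentences, k=2):
    """Keep only the k sentences most lexically similar to the question;
    the downstream answer model then reads this short excerpt instead of
    the whole document."""
    q = _tokens(question)
    return sorted(sentences, key=lambda s: len(q & _tokens(s)), reverse=True)[:k]

doc = ["The capital of France is Paris.",
       "France borders Spain.",
       "Paris hosts the Louvre."]
excerpt = select_sentences("What is the capital of France?", doc)
```

The answering model's cost then depends on the excerpt length, not the document length, which is what makes the framework scale.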
no code implementations • 6 Nov 2016 • Eunsol Choi, Daniel Hewlett, Alexandre Lacoste, Illia Polosukhin, Jakob Uszkoreit, Jonathan Berant
We present a framework for question answering that can efficiently scale to longer documents while maintaining or even improving performance of state-of-the-art models.
2 code implementations • ACL 2016 • Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, David Berthelot
The task contains a rich variety of challenging classification and extraction sub-tasks, making it well-suited for end-to-end models such as deep neural networks (DNNs).
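The classification/extraction distinction mentioned above can be illustrated with a toy routine that routes an instance by where its answer comes from: a small closed vocabulary (classification) or a span copied out of the document (extraction). The category names and routing logic are assumptions for illustration, not the dataset's actual labeling scheme.

```python
# Toy illustration of WikiReading-style sub-task types. The routing
# heuristic below is an assumption made for this sketch.

def subtask(document, answer, closed_set):
    """Classification if the answer belongs to a small closed set of values;
    extraction if it must instead be copied from the document text."""
    if answer in closed_set:
        return "classification"
    return "extraction" if answer in document else "out-of-document"

closed = {"human", "male", "female"}
a = subtask("Barack Obama is an American politician.", "human", closed)
b = subtask("Obama was born in Honolulu.", "Honolulu", closed)
```

An end-to-end model such as a DNN is attractive here precisely because one architecture can handle both regimes without a hand-built switch like this one.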