Query auto-completion (QAC) is the task of predicting a search engine user's final query from their intermediate, incomplete query.
We propose a logistic Bradley-Terry probe that predicts LLMs' word-pair preferences from the words' hidden vectors.
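A minimal sketch of such a probe, assuming hidden vectors of dimension 768 (both the dimension and the choice of layer they come from are illustrative, not from the paper):

```python
import torch
import torch.nn as nn

class BradleyTerryProbe(nn.Module):
    """Logistic Bradley-Terry probe: P(a preferred over b) = sigmoid(w . (h_a - h_b)).
    Scoring the difference keeps the model antisymmetric, so P(a>b) = 1 - P(b>a)."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.w = nn.Linear(hidden_dim, 1, bias=False)  # a bias term would break antisymmetry

    def forward(self, h_a: torch.Tensor, h_b: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.w(h_a - h_b)).squeeze(-1)

# Toy usage with random stand-ins for the words' hidden vectors; in practice,
# train with binary cross-entropy against observed pairwise preferences.
probe = BradleyTerryProbe(hidden_dim=768)
h_a, h_b = torch.randn(4, 768), torch.randn(4, 768)
print(probe(h_a, h_b))  # preference probabilities in (0, 1)
```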
Large language models (LLMs) exhibit positional bias in how they use context, which especially complicates listwise ranking.
Our method also performs particularly well in few-shot settings, where labeled data are too scarce for DNNs to reach satisfactory accuracy.
In this paper, we explore training and deploying an ASR system in the label-scarce, compute-limited setting.
Large-scale diffusion neural networks represent a substantial milestone in text-to-image generation, but they remain poorly understood, lacking interpretability analyses.
There exists a wide variety of efficiency methods for natural language processing (NLP) tasks, such as pruning, distillation, dynamic inference, and quantization.
To fill this void in the literature, this paper studies selective prediction for NLP, comparing different models and confidence estimators.
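As one illustration, maximum softmax probability is a standard baseline confidence estimator for selective prediction; the threshold below is a hypothetical value, not one from the paper:

```python
import torch
import torch.nn.functional as F

def selective_predict(logits: torch.Tensor, threshold: float = 0.9):
    """Answer only when the max softmax probability clears the threshold;
    otherwise abstain (marked with -1)."""
    probs = F.softmax(logits, dim=-1)
    conf, preds = probs.max(dim=-1)
    preds = torch.where(conf >= threshold, preds, torch.full_like(preds, -1))
    return preds, conf

logits = torch.tensor([[4.0, 0.1, 0.2],    # confident -> answered
                       [0.5, 0.6, 0.55]])  # uncertain -> abstains
print(selective_predict(logits))
```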
The slow speed of BERT has motivated much research on accelerating its inference, and early exiting has been proposed as a way to trade off model quality against efficiency.
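A stripped-down sketch of the early-exiting idea, assuming a classifier ("off-ramp") after every layer and a batch size of 1; the dimensions and threshold are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitEncoder(nn.Module):
    """Each encoder layer feeds its own classifier; inference stops at the
    first layer whose prediction is confident enough, saving the compute of
    the remaining layers."""
    def __init__(self, num_layers=12, dim=768, num_classes=2):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, nhead=12, batch_first=True)
            for _ in range(num_layers))
        self.ramps = nn.ModuleList(
            nn.Linear(dim, num_classes) for _ in range(num_layers))

    @torch.no_grad()
    def forward(self, x, threshold=0.95):  # x: (1, seq_len, dim)
        for i, (layer, ramp) in enumerate(zip(self.layers, self.ramps)):
            x = layer(x)
            probs = F.softmax(ramp(x[:, 0]), dim=-1)  # classify the first token
            if probs.max().item() >= threshold:
                return probs, i + 1  # early exit
        return probs, len(self.layers)  # fell through: used the full model

model = EarlyExitEncoder()
probs, layers_used = model(torch.randn(1, 16, 768))
```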
We describe Howl, an open-source wake word detection toolkit with native support for open speech datasets, like Mozilla Common Voice and Google Speech Commands.
We present Covidex, a search engine that exploits the latest neural ranking models to provide information access to the COVID-19 Open Research Dataset curated by the Allen Institute for AI.
Fine-tuned variants of BERT achieve state-of-the-art accuracy on many natural language processing tasks, albeit at significant computational cost.
We present CovidQA, the beginnings of a question answering dataset specifically designed for COVID-19, built by hand from Kaggle's COVID-19 Open Research Dataset Challenge.
We show that fine-tuning only the final fourth of the layers achieves 90% of the original quality.
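For concreteness, a sketch of this kind of partial fine-tuning with the Hugging Face transformers API, freezing everything except the last quarter of encoder layers (the checkpoint name and label count are placeholders, not the paper's setup):

```python
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

n_layers = model.config.num_hidden_layers   # 12 for bert-base
first_trainable = n_layers - n_layers // 4  # keep the final quarter trainable

for param in model.bert.parameters():
    param.requires_grad = False              # freeze embeddings + early layers
for layer in model.bert.encoder.layer[first_trainable:]:
    for param in layer.parameters():
        param.requires_grad = True           # unfreeze the last quarter
# model.classifier (the task head) remains trainable by default
```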
In this paper, we hypothesize that introducing an explicit, constrained pairwise word interaction mechanism to pretrained language models improves their effectiveness on semantic similarity tasks.
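One way to make the hypothesis concrete: build an explicit interaction matrix of cosine similarities between every pair of contextual token embeddings and pool it into features. This max-pooled variant is an illustrative simplification, not the paper's exact mechanism:

```python
import torch
import torch.nn.functional as F

def pairwise_interaction_features(h1: torch.Tensor, h2: torch.Tensor) -> torch.Tensor:
    """h1: (n1, d), h2: (n2, d) contextual embeddings of two sentences.
    Returns each token's best cross-sentence match as a feature vector."""
    sim = F.normalize(h1, dim=-1) @ F.normalize(h2, dim=-1).T  # (n1, n2) cosines
    return torch.cat([sim.max(dim=1).values, sim.max(dim=0).values])

feats = pairwise_interaction_features(torch.randn(7, 768), torch.randn(5, 768))
print(feats.shape)  # torch.Size([12]); feed these to a similarity classifier
```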
Semantic similarity modeling is central to many NLP problems such as natural language inference and question answering.
Knowledge distillation can effectively transfer knowledge from BERT, a deep language representation model, to traditional, shallow word embedding-based neural networks, helping them approach or exceed the quality of other heavyweight language representation models.
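One standard formulation of the distillation objective blends a soft-target term with cross-entropy on gold labels; the temperature-scaled KL below is a common choice (MSE directly on logits is another), and T and alpha are illustrative hyperparameters:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend a soft-target term (teacher -> student at temperature T) with
    ordinary cross-entropy on the gold labels."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)  # rescale soft-target gradients
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

loss = distillation_loss(torch.randn(8, 2), torch.randn(8, 2),
                         torch.randint(0, 2, (8,)))
```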
Neural network models for many NLP tasks have grown increasingly complex in recent years, making training and deployment more difficult.
We present, to our knowledge, the first application of BERT to document classification.
In the natural language processing literature, neural networks are becoming increasingly deep and complex.
Voice-enabled commercial products are ubiquitous, typically enabled by lightweight on-device keyword spotting (KWS) and full automatic speech recognition (ASR) in the cloud.
There exists a plethora of techniques for inducing structured sparsity in parametric models during the optimization process, with the final goal of resource-efficient inference.
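One representative technique: a group-lasso regularizer added to the training loss, which pushes entire filters (rather than scattered individual weights) to zero; the layer shape and coefficient below are illustrative:

```python
import torch

def group_lasso_penalty(weight: torch.Tensor) -> torch.Tensor:
    """Sum of per-filter L2 norms (groups = output channels). Minimizing it
    zeroes out whole filters, i.e., structured rather than unstructured sparsity."""
    return weight.flatten(1).norm(dim=1).sum()

w = torch.randn(64, 3, 3, 3, requires_grad=True)  # a conv layer's weight
loss = 1e-4 * group_lasso_penalty(w)              # add this to the task loss
loss.backward()
```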
Overall, our robust, cross-device implementation for keyword spotting realizes a new paradigm for serving neural network applications, and one of our slim models reduces latency by 66% with a minimal accuracy decrease of four points, from 94% to 90%.
We explore the application of deep residual learning and dilated convolutions to the keyword spotting task, using the recently-released Google Speech Commands Dataset as our benchmark.
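A minimal residual block with a dilated convolution in that spirit; the channel count, dilation, and input shape are illustrative choices, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class DilatedResBlock(nn.Module):
    """3x3 dilated conv + batch norm wrapped in an identity shortcut; dilation
    widens the receptive field over time/frequency at no extra parameter cost."""
    def __init__(self, channels=45, dilation=2):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3,
                              padding=dilation, dilation=dilation, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        return torch.relu(x + self.bn(self.conv(x)))

block = DilatedResBlock()
out = block(torch.randn(1, 45, 101, 40))  # (batch, channels, time, mel bins)
```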
We describe Honk, an open-source PyTorch reimplementation of convolutional neural networks for keyword spotting that are included as examples in TensorFlow.