The main challenge for quantization algorithms is maintaining model accuracy at low bit-widths.
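As a hedged illustration of why low bit-widths hurt accuracy (a generic uniform min-max quantizer, not the paper's algorithm; the function name and the numpy setup are assumptions), the sketch below quantizes a tensor at several bit-widths and reports the reconstruction error, which grows as the bit-width shrinks:

```python
import numpy as np

def uniform_quantize(x, bits):
    """De/quantize x with a uniform min-max quantizer at the given bit-width.

    A generic scheme for illustration only, not the paper's method.
    """
    levels = 2 ** bits - 1
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / levels
    codes = np.round((x - lo) / scale)   # integer codes in [0, levels]
    return codes * scale + lo            # de-quantized approximation of x

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000).astype(np.float32)
for bits in (8, 4, 2):
    err = np.abs(uniform_quantize(x, bits) - x).mean()
    print(f"{bits}-bit mean abs error: {err:.4f}")
```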
We propose a single model that addresses both temporal ordering, sorting given events into the order in which they occurred, and event infilling, predicting new events that fit into an existing temporally ordered sequence.
We present ReadOnce Transformers, an approach to convert a transformer-based model into one that can build an information-capturing, task-independent, and compressed representation of text.
A principal barrier to training temporal relation extraction models in new domains is the lack of varied, high-quality examples and the challenge of collecting more.
Current methods in open-domain question answering (QA) usually employ a pipeline that first retrieves relevant documents and then applies strong reading comprehension (RC) models to the retrieved text.
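As a minimal sketch of such a retrieve-then-read pipeline (a toy with hypothetical helper names, where simple lexical overlap stands in for a real retriever such as BM25 or dense retrieval, and for a learned RC model):

```python
import re
from collections import Counter

def tokens(text):
    """Lowercase word tokens with punctuation stripped."""
    return re.findall(r"[a-z0-9]+", text.lower())

def retrieve(question, corpus, k=2):
    """Toy retriever: rank documents by lexical overlap with the question.
    A real pipeline would use BM25 or a dense retriever here."""
    q = Counter(tokens(question))
    return sorted(corpus, key=lambda d: -sum(q[t] for t in tokens(d)))[:k]

def read(question, docs):
    """Toy reader: pick the retrieved sentence sharing the most question words.
    A real pipeline would apply a strong RC model here."""
    q = set(tokens(question))
    sents = [s.strip() for d in docs for s in d.split(".") if s.strip()]
    return max(sents, key=lambda s: len(q & set(tokens(s))))

corpus = [
    "The Eiffel Tower is in Paris. It was completed in 1889.",
    "Mount Everest is the highest mountain above sea level.",
]
question = "Which mountain is the highest above sea level?"
print(read(question, retrieve(question, corpus)))
# -> "Mount Everest is the highest mountain above sea level"
```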
Our analysis identifies the properties of chains that are crucial for high performance: in particular, modeling extraction sequentially is important, as is handling each candidate sentence in a context-aware way.
In this paper, we present the DATC Robust Design Flow (RDF), which spans logic synthesis to detailed routing.