We find that entailment-based models outperform supervised text classifiers based on XLM-RoBERTa, and that we can reach 80% of the accuracy of previous approaches with less than 50% of the training data on average.
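As a concrete illustration of the entailment approach, the minimal sketch below scores each candidate label as an NLI hypothesis using the Hugging Face zero-shot pipeline; the pipeline and checkpoint name are illustrative of the general technique, not the exact models compared above.

```python
# Zero-shot classification via textual entailment: each label is turned into
# a hypothesis ("This example is about <label>.") and an NLI model scores
# whether the input text entails it. Checkpoint choice is illustrative.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

result = classifier(
    "I need to dispute a charge on my credit card statement.",
    candidate_labels=["billing", "technical support", "sales"],
)
print(result["labels"][0], result["scores"][0])  # best label and its score
```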
Recent advances in natural language processing (NLP) have led to strong text classification models for many tasks.
We study the problem of building text classifiers with little or no training data, commonly known as zero- and few-shot text classification.
Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate.
We present an efficient method for joint optimization of topology, materials and lighting from multi-view image observations.
However, more than 20% of relational tables on the web have 20 or more rows (Cafarella et al., 2008), and these large tables present a challenge for current Transformer models, which are typically limited to 512 tokens.
Since pretraining neural networks to handle novel, dynamic scenes is a formidable generalization challenge, we do away with pretraining and instead achieve generalization via adaptation, i.e., we opt for training the radiance cache while rendering.
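The following is a minimal sketch of this adapt-while-rendering idea in PyTorch; the network size, the feature layout, and the `trace_batch` callback are all hypothetical stand-ins for the renderer's actual interfaces.

```python
# Online adaptation sketch: instead of pretraining, a small MLP radiance
# cache is optimized on the fly from samples produced while rendering.
# All shapes and names here are illustrative, not the paper's implementation.
import torch

cache = torch.nn.Sequential(
    torch.nn.Linear(9, 64), torch.nn.ReLU(),   # e.g. position + direction features
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 3),                    # RGB radiance estimate
)
opt = torch.optim.Adam(cache.parameters(), lr=1e-3)

def render_frame(trace_batch):
    """trace_batch() yields (features, radiance_target) pairs from short paths."""
    for feats, target in trace_batch():        # samples gathered during this frame
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(cache(feats), target)
        loss.backward()
        opt.step()                             # the cache adapts while rendering
```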
To improve efficiency while maintaining high accuracy, we propose a new architecture, DoT, a double transformer model that decomposes the problem into two sub-tasks: a shallow pruning transformer that selects the top-K tokens, followed by a deep task-specific transformer that takes those K tokens as input.
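A rough PyTorch sketch of this two-stage design follows; layer counts, dimensions, and the per-token scoring head are assumptions for illustration rather than the published configuration.

```python
# DoT-style two-stage sketch: a shallow transformer scores tokens, the top-K
# survive, and only those are passed to the deep task-specific transformer.
import torch

class DoTSketch(torch.nn.Module):
    def __init__(self, d_model=256, k=256, vocab_size=30522):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, d_model)
        def encoder(num_layers):
            layer = torch.nn.TransformerEncoderLayer(d_model, nhead=4,
                                                     batch_first=True)
            return torch.nn.TransformerEncoder(layer, num_layers)
        self.pruner = encoder(2)                   # shallow pruning transformer
        self.task_model = encoder(12)              # deep task-specific transformer
        self.score = torch.nn.Linear(d_model, 1)   # per-token relevance score
        self.k = k

    def forward(self, token_ids):                  # assumes seq_len >= k
        emb = self.embed(token_ids)
        scores = self.score(self.pruner(emb)).squeeze(-1)  # (batch, seq_len)
        top = scores.topk(self.k, dim=1).indices           # keep top-K tokens
        kept = torch.gather(emb, 1,
                            top.unsqueeze(-1).expand(-1, -1, emb.size(-1)))
        return self.task_model(kept)               # deep model sees only K tokens
```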
Recent advances in open-domain QA have led to strong models based on dense retrieval, but these have focused only on retrieving textual passages.
Social insect colonies routinely face large vertebrate predators, against which they need to mount a collective defense.
To be able to use long examples as input to BERT models, we evaluate table pruning techniques as a pre-processing step that drastically improves training and prediction efficiency at the cost of a moderate drop in accuracy.
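One possible pruning heuristic is sketched below, under the assumption that columns are ranked by word overlap with the question and kept greedily until a token budget is filled; the function and its heuristic are illustrative, as the paper evaluates several such techniques.

```python
# Heuristic table pruning before tokenization: rank columns by word overlap
# with the question and keep them until a (whitespace-token) budget is met.
# The overlap heuristic and budget accounting are assumptions for illustration.
def prune_table(question, header, rows, budget=512):
    q_words = set(question.lower().split())

    def overlap(col):                  # column relevance = word overlap
        cells = {header[col].lower()} | {row[col].lower() for row in rows}
        return len(q_words & set(" ".join(cells).split()))

    ranked = sorted(range(len(header)), key=overlap, reverse=True)
    kept, used = [], 0
    for col in ranked:                 # greedily keep the most relevant columns
        cost = len(header[col].split()) + sum(len(r[col].split()) for r in rows)
        if used + cost > budget:
            continue
        kept.append(col)
        used += cost
    kept.sort()                        # restore original column order
    return [header[c] for c in kept], [[r[c] for c in kept] for r in rows]
```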
In this paper, we present TAPAS, an approach to question answering over tables without generating logical forms.
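A brief usage sketch, assuming the Hugging Face port of TAPAS and one of its published fine-tuned checkpoints: answers are decoded directly from the cell-selection and aggregation logits, with no logical form in between.

```python
# Hedged usage sketch of TAPAS via Hugging Face transformers.
import pandas as pd
from transformers import TapasTokenizer, TapasForQuestionAnswering

name = "google/tapas-base-finetuned-wtq"
tokenizer = TapasTokenizer.from_pretrained(name)
model = TapasForQuestionAnswering.from_pretrained(name)

table = pd.DataFrame({"City": ["Paris", "Berlin"],
                      "Population": ["2.1M", "3.6M"]})   # cells must be strings
inputs = tokenizer(table=table, queries=["Which city is larger?"],
                   padding="max_length", return_tensors="pt")
outputs = model(**inputs)

# Decode cell coordinates (and an aggregation operator) straight from logits.
coords, agg = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits.detach(), outputs.logits_aggregation.detach())
print([[table.iat[row, col] for row, col in cells] for cells in coords])
```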
Collective behavior, and swarm formation in particular, has been studied from several perspectives within a large variety of fields, ranging from biology to physics.
According to a mainstream position in contemporary cognitive science and philosophy, the use of abstract compositional concepts is both a necessary and a sufficient condition for the presence of genuine thought.
We present a novel approach to answering sequential questions based on structured objects such as knowledge bases or tables without using a logical form as an intermediate representation.
Collective motion is an intriguing phenomenon, especially considering that it arises from a set of simple rules governing local interactions between individuals.
To render a new scene, we sample visible points of the cloud and, for each, extract a hierarchical 3D descriptor of the cloud geometry with respect to the shading location and the light source.
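The sketch below conveys the flavor of such a descriptor (it is not the paper's construction): per-point statistics of the cloud gathered over a coarse-to-fine set of radii and expressed relative to the shading point and the light position.

```python
# Illustrative hierarchical point descriptor (an assumption, not the paper's):
# neighborhood occupancy and mean offsets at several radii, measured in a
# frame anchored at the shading point and oriented toward the light.
import numpy as np

def hierarchical_descriptor(cloud, shading_pt, light_pos,
                            radii=(0.05, 0.1, 0.2, 0.4)):
    to_light = light_pos - shading_pt
    to_light = to_light / np.linalg.norm(to_light)
    rel = cloud - shading_pt                   # cloud in the local frame
    dist = np.linalg.norm(rel, axis=1)
    feats = []
    for r in radii:                            # coarse-to-fine shells
        near = rel[dist < r]
        if len(near) == 0:
            feats += [0.0, 0.0, 0.0]
            continue
        mean_off = near.mean(axis=0)
        feats += [len(near) / len(cloud),              # local occupancy
                  float(mean_off @ to_light),          # offset toward the light
                  float(np.linalg.norm(mean_off))]     # offset magnitude
    return np.asarray(feats, dtype=np.float32)
```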