1 code implementation • Findings (ACL) 2021 • Syrine Krichene, Thomas Müller, Julian Martin Eisenschlos
To improve efficiency while maintaining high accuracy, we propose DoT, a double-transformer architecture that decomposes the problem into two sub-tasks: a shallow pruning transformer selects the top-K tokens, and a deep task-specific transformer then takes those K tokens as input.
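A minimal PyTorch sketch of the data flow described above, assuming a generic token-scoring head on the shallow transformer; the layer counts, dimensions, and selection mechanism are illustrative assumptions rather than the paper's implementation, and the training signal for the pruning scores is not shown.

```python
# Sketch of the DoT idea: a shallow transformer scores tokens, the top-K tokens
# survive, and only they are fed to a deeper task-specific transformer.
# Sizes, layer counts, and the scoring head are illustrative assumptions.
import torch
import torch.nn as nn


class DoubleTransformerSketch(nn.Module):
    def __init__(self, vocab_size=30522, d_model=256, k=256):
        super().__init__()
        self.k = k
        self.embed = nn.Embedding(vocab_size, d_model)
        # Shallow "pruning" encoder: only has to rank tokens, so few layers.
        self.pruner = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.score = nn.Linear(d_model, 1)  # per-token keep score
        # Deep task-specific encoder: only ever sees the K selected tokens.
        self.task_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=12,
        )
        self.classifier = nn.Linear(d_model, 2)  # e.g. a binary entailment head

    def forward(self, input_ids):
        x = self.embed(input_ids)                         # (B, L, d)
        scores = self.score(self.pruner(x)).squeeze(-1)   # (B, L)
        topk = scores.topk(self.k, dim=1).indices
        topk = topk.sort(dim=1).values                    # keep original token order
        idx = topk.unsqueeze(-1).expand(-1, -1, x.size(-1))
        pruned = torch.gather(x, 1, idx)                  # (B, K, d)
        h = self.task_encoder(pruned)
        return self.classifier(h[:, 0])                   # pooled prediction


model = DoubleTransformerSketch()
logits = model(torch.randint(0, 30522, (2, 1024)))  # long input, pruned to K=256
```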
no code implementations • SEMEVAL 2021 • Thomas Müller, Julian Martin Eisenschlos, Syrine Krichene
We adapt the binary TAPAS model of Eisenschlos et al. (2020) to this task.
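For illustration, a sketch of how such a binary table-entailment TAPAS model can be run with the Hugging Face transformers API; the TabFact checkpoint, toy table, and statement are assumptions for illustration, not the actual SemEval-2021 system.

```python
# Sketch: running a binary (entailed / refuted) TAPAS classifier on a table
# statement with Hugging Face transformers. The checkpoint name and the toy
# table/statement are assumptions, not the exact SemEval setup.
import pandas as pd
from transformers import TapasForSequenceClassification, TapasTokenizer

model_name = "google/tapas-base-finetuned-tabfact"  # TabFact binary entailment model
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForSequenceClassification.from_pretrained(model_name)

# TAPAS expects all table cells as strings.
table = pd.DataFrame({"Player": ["Ann", "Bob"], "Goals": ["3", "1"]})
statement = "Ann scored more goals than Bob."

inputs = tokenizer(table=table, queries=[statement],
                   padding="max_length", return_tensors="pt")
logits = model(**inputs).logits      # (1, num_labels)
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])   # label names depend on the checkpoint config
```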
1 code implementation • NAACL 2021 • Jonathan Herzig, Thomas Müller, Syrine Krichene, Julian Martin Eisenschlos
Recent advances in open-domain QA have led to strong models based on dense retrieval, but these have focused only on retrieving textual passages.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Julian Martin Eisenschlos, Syrine Krichene, Thomas Müller
To be able to use long examples as input to BERT models, we evaluate table pruning techniques as a pre-processing step that drastically improves training and prediction efficiency at the cost of a moderate drop in accuracy (a sketch of one such heuristic follows after this entry).
Ranked #4 on Table-based Fact Verification on TabFact
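A minimal sketch of one such pruning heuristic, assuming column-level word overlap with the question as the relevance score and a fixed token budget; the scoring function and budget are illustrative assumptions and do not reproduce the specific techniques evaluated in the paper.

```python
# Sketch of a simple table pruning heuristic: rank columns by word overlap with
# the question and keep them until a token budget is reached. The scoring and
# budget here are illustrative assumptions only.
def prune_table(question, table, token_budget=256):
    """table: dict mapping column name -> list of cell strings."""
    q_tokens = set(question.lower().split())

    def overlap(column, cells):
        words = set(column.lower().split())
        for cell in cells:
            words |= set(cell.lower().split())
        return len(words & q_tokens)

    # Highest-overlap columns first.
    ranked = sorted(table.items(), key=lambda kv: overlap(*kv), reverse=True)

    kept, used = {}, 0
    for column, cells in ranked:
        cost = len(column.split()) + sum(len(c.split()) for c in cells)
        if used + cost > token_budget:
            continue  # skip columns that would exceed the budget
        kept[column] = cells
        used += cost
    return kept


table = {
    "Player": ["Ann", "Bob", "Cleo"],
    "Goals": ["3", "1", "2"],
    "Notes": ["long free-text field " * 40] * 3,  # expensive, low-overlap column
}
print(prune_table("How many goals did Ann score?", table))
```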
no code implementations • 21 Jun 2019 • Syrine Krichene, Mike Gartrell, Clement Calauzenes
For example, applying constraints a posteriori can result in incomplete recommendations or low-quality results for the tail of the distribution (i.e., less popular items).
1 code implementation • NeurIPS 2019 • Mike Gartrell, Victor-Emmanuel Brunel, Elvis Dohmatob, Syrine Krichene
Our method imposes a particular decomposition of the nonsymmetric kernel that enables tractable learning algorithms, which we analyze both theoretically and experimentally.
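A minimal NumPy sketch of the general idea, assuming the kernel is split into a low-rank symmetric PSD part plus a skew-symmetric part; the factor shapes and this particular split are illustrative assumptions, and the paper's exact decomposition and learning algorithm differ in detail.

```python
# Sketch (NumPy): a nonsymmetric DPP kernel built from a symmetric PSD part plus
# a skew-symmetric part, and the subset log-likelihood it induces. Factor shapes
# and this particular split are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 3                          # ground set size, low-rank dimension

V = rng.normal(size=(n, d))          # symmetric PSD part: V @ V.T
B = rng.normal(size=(n, d))
C = rng.normal(size=(n, d))
L = V @ V.T + (B @ C.T - C @ B.T)    # skew-symmetric part breaks symmetry,
                                     # allowing positive item correlations

def log_likelihood(L, subset):
    """log P(subset) = log det(L_subset) - log det(L + I)."""
    sub = L[np.ix_(subset, subset)]
    _, logdet_sub = np.linalg.slogdet(sub)   # det >= 0 since the symmetric part of L is PSD
    _, logdet_norm = np.linalg.slogdet(L + np.eye(len(L)))
    return logdet_sub - logdet_norm

print(log_likelihood(L, [0, 2, 5]))
```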