1 code implementation • 20 Dec 2022 • Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier, Yasemin Altun
Compared with a SOTA model finetuned on more than 28k data points, DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over the finetuned SOTA on human-written queries from the task of chart QA.
Chart Question Answering • Factual Inconsistency Detection in Chart Captioning
1 code implementation • 19 Dec 2022 • Fangyu Liu, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Yasemin Altun, Nigel Collier, Julian Martin Eisenschlos
Visual language data such as plots, charts, and infographics are ubiquitous in the human world.
Ranked #1 on Visual Question Answering on PlotQA-D2
no code implementations • 17 Oct 2022 • Ewa Andrejczuk, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Yasemin Altun
Encoder-only transformer models have been successfully applied to different table understanding tasks, as in TAPAS (Herzig et al., 2020).
1 code implementation • Findings (ACL) 2021 • Syrine Krichene, Thomas Müller, Julian Martin Eisenschlos
To improve efficiency while maintaining high accuracy, we propose a new architecture, DoT, a double transformer model that decomposes the problem into two sub-tasks: a shallow pruning transformer that selects the top-K tokens, followed by a deep task-specific transformer that takes those K tokens as input.
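The two-stage design described above can be sketched as follows. This is a toy illustration only: in DoT both stages are transformers, whereas here a simple linear scorer stands in for the shallow pruning stage and a mean-pool stands in for the deep task-specific model.

```python
import numpy as np

def prune_top_k(token_embeddings, score_weights, k):
    """Shallow stage: score every token and keep only the top-k.

    In DoT this is a small transformer; a linear scorer is used here
    purely for illustration.
    """
    scores = token_embeddings @ score_weights      # one relevance score per token
    top_k = np.argsort(scores)[-k:][::-1]          # indices of the k best tokens
    return top_k, token_embeddings[top_k]

def deep_task_model(pruned_embeddings):
    """Deep stage stand-in: the task-specific transformer would run here."""
    return pruned_embeddings.mean(axis=0)          # toy pooled representation

rng = np.random.default_rng(0)
tokens = rng.normal(size=(512, 16))                # e.g. a long table example
w = rng.normal(size=16)

idx, kept = prune_top_k(tokens, w, k=64)
output = deep_task_model(kept)                     # only K tokens reach the expensive model
```

The point of the decomposition is that the expensive deep model runs on K tokens instead of the full sequence, so its cost no longer grows with the original input length.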
no code implementations • SEMEVAL 2021 • Thomas Müller, Julian Martin Eisenschlos, Syrine Krichene
We adapt the binary TAPAS model of Eisenschlos et al. (2020) to this task.
1 code implementation • NAACL 2021 • Jonathan Herzig, Thomas Müller, Syrine Krichene, Julian Martin Eisenschlos
Recent advances in open-domain QA have led to strong models based on dense retrieval, but only focused on retrieving textual passages.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Julian Martin Eisenschlos, Syrine Krichene, Thomas Müller
To be able to use long examples as input to BERT models, we evaluate table pruning techniques as a pre-processing step that drastically improves training and prediction efficiency at a moderate drop in accuracy.
Ranked #9 on Table-based Fact Verification on TabFact
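A minimal sketch of table pruning as a pre-processing step, assuming a simple token-overlap heuristic between the question and each cell; the scoring function here is hypothetical and stands in for the pruning techniques the paper actually evaluates.

```python
def prune_table(question, table, max_cells):
    """Keep the cells most relevant to the question, up to a cell budget.

    Relevance is approximated by token overlap with the question
    (an illustrative heuristic, not the paper's exact method).
    """
    q_tokens = set(question.lower().split())

    def overlap(cell):
        return len(q_tokens & set(str(cell).lower().split()))

    # Flatten to (row, col, value) triples, then rank by relevance.
    cells = [(r, c, v) for r, row in enumerate(table) for c, v in enumerate(row)]
    cells.sort(key=lambda rcv: overlap(rcv[2]), reverse=True)  # stable sort
    return cells[:max_cells]

pruned = prune_table(
    "which country has the highest population",
    [["Country", "Population"], ["China", "1.4B"], ["Chad", "17M"]],
    max_cells=4,
)
```

Because the sort is stable, cells with equal relevance keep their original table order, so the pruned input stays close to the table's layout while fitting within the model's length limit.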
no code implementations • 21 Jun 2019 • Syrine Krichene, Mike Gartrell, Clement Calauzenes
For example, applying constraints a posteriori can result in incomplete recommendations or low-quality results for the tail of the distribution (i.e., less popular items).
1 code implementation • NeurIPS 2019 • Mike Gartrell, Victor-Emmanuel Brunel, Elvis Dohmatob, Syrine Krichene
Our method imposes a particular decomposition of the nonsymmetric kernel that enables such tractable learning algorithms, which we analyze both theoretically and experimentally.
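One way to picture the kernel decomposition described above: split the nonsymmetric kernel into a symmetric PSD part plus a low-rank skew-symmetric part, which keeps every subset's determinant (the unnormalized DPP weight) nonnegative. The exact parameterization below is a hedged sketch, not necessarily the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 6, 3
V = rng.normal(size=(n, d))
B = rng.normal(size=(n, d))
C = rng.normal(size=(n, d))

S = V @ V.T            # symmetric positive-semidefinite part (low rank)
A = B @ C.T - C @ B.T  # skew-symmetric part (low rank by construction)
L = S + A              # nonsymmetric kernel; symmetric part of L is PSD

# Unnormalized probability of a subset Y is det(L_Y); because the
# symmetric part of L is PSD, every principal minor is nonnegative.
subset = [0, 2, 5]
weight = np.linalg.det(L[np.ix_(subset, subset)])
```

The low-rank factors V, B, C are what make learning tractable: they bound the number of parameters and determinant computations by the rank d rather than the ground-set size n.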