Search Results for author: Julian Martin Eisenschlos

Found 25 papers, 10 papers with code

TableRAG: Million-Token Table Understanding with Language Models

no code implementations • 7 Oct 2024 • Si-An Chen, Lesly Miculicich, Julian Martin Eisenschlos, Zifeng Wang, Zilong Wang, Yanfei Chen, Yasuhisa Fujii, Hsuan-Tien Lin, Chen-Yu Lee, Tomas Pfister

Recent advancements in language models (LMs) have notably enhanced their ability to reason with tabular data, primarily through program-aided mechanisms that manipulate and analyze tables.

RAG Retrieval

Selectively Answering Visual Questions

no code implementations • 3 Jun 2024 • Julian Martin Eisenschlos, Hernán Maina, Guido Ivetta, Luciana Benotti

We perform the first in-depth analysis of calibration methods and metrics for VQA with in-context learning LMMs.

Avg In-Context Learning +2

Faithful Chart Summarization with ChaTS-Pi

no code implementations • 29 May 2024 • Syrine Krichene, Francesco Piccinno, Fangyu Liu, Julian Martin Eisenschlos

Chart-to-summary generation can help explore data, communicate insights, and assist visually impaired people.

Image to text Sentence

TANQ: An open domain dataset of table answered questions

no code implementations • 13 May 2024 • Mubashara Akhtar, Chenxi Pang, Andreea Marzoca, Yasemin Altun, Julian Martin Eisenschlos

Language models, potentially augmented with tool usage such as retrieval, are becoming the go-to means of answering questions.

Math Open-Domain Question Answering

Universal Self-Adaptive Prompting

no code implementations • 24 May 2023 • Xingchen Wan, Ruoxi Sun, Hootan Nakhost, Hanjun Dai, Julian Martin Eisenschlos, Sercan O. Arik, Tomas Pfister

A hallmark of modern large language models (LLMs) is their impressive general zero-shot and few-shot abilities, often elicited through in-context learning (ICL) via prompting.

In-Context Learning Natural Language Understanding +2

Selectively Answering Ambiguous Questions

no code implementations • 24 May 2023 • Jeremy R. Cole, Michael J. Q. Zhang, Daniel Gillick, Julian Martin Eisenschlos, Bhuwan Dhingra, Jacob Eisenstein

We investigate question answering from this perspective, focusing on answering a subset of questions with high accuracy, drawn from a larger set in which many questions are inherently ambiguous.

Question Answering

DIFFQG: Generating Questions to Summarize Factual Changes

no code implementations • 1 Mar 2023 • Jeremy R. Cole, Palak Jain, Julian Martin Eisenschlos, Michael J. Q. Zhang, Eunsol Choi, Bhuwan Dhingra

We propose representing factual changes between paired documents as question-answer pairs, where the answer to the same question differs between two versions.

Change Detection Question Generation +1
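
For illustration, such a question-answer representation of a factual change could look like the sketch below (field names and values are hypothetical, not taken from the DIFFQG data):

```python
# Hypothetical example of the paired-document representation described above:
# the same question is answered differently by the old and new document
# versions, which localizes the factual change. Field names are invented.
diffqg_style_example = {
    "question": "How many employees does the company report?",
    "answer_in_old_version": "2,500",
    "answer_in_new_version": "3,100",
}
```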

DePlot: One-shot visual language reasoning by plot-to-table translation

1 code implementation • 20 Dec 2022 • Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier, Yasemin Altun

Compared with a SOTA model finetuned on more than 28k data points, DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over the finetuned SOTA on human-written queries from the task of chart QA.

Chart Question Answering Factual Inconsistency Detection in Chart Captioning +3
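
A minimal usage sketch of the plot-to-table step, assuming the publicly released google/deplot checkpoint on Hugging Face (the model id, prompt string, and generation settings are assumptions and should be checked against the model card):

```python
from PIL import Image
from transformers import Pix2StructProcessor, Pix2StructForConditionalGeneration

# Assumed checkpoint name; verify against the Hugging Face model card.
processor = Pix2StructProcessor.from_pretrained("google/deplot")
model = Pix2StructForConditionalGeneration.from_pretrained("google/deplot")

image = Image.open("chart.png")  # placeholder path to a chart image
inputs = processor(
    images=image,
    text="Generate underlying data table of the figure below:",
    return_tensors="pt",
)
out = model.generate(**inputs, max_new_tokens=512)
table = processor.decode(out[0], skip_special_tokens=True)
print(table)  # linearized table, ready to be placed in a one-shot LLM prompt
```

The decoded table, rather than the raw chart pixels, is what gets handed to the LLM for downstream reasoning.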

Table-To-Text generation and pre-training with TabT5

no code implementations • 17 Oct 2022 • Ewa Andrejczuk, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Yasemin Altun

Encoder-only transformer models have been successfully applied to different table understanding tasks, as in TAPAS (Herzig et al., 2020).

Data-to-Text Generation Decoder +1

MiQA: A Benchmark for Inference on Metaphorical Questions

no code implementations • 14 Oct 2022 • Iulia-Maria Comsa, Julian Martin Eisenschlos, Srini Narayanan

We propose a benchmark to assess the capability of large language models to reason with conventional metaphors.

WinoDict: Probing language models for in-context word acquisition

no code implementations • 25 Sep 2022 • Julian Martin Eisenschlos, Jeremy R. Cole, Fangyu Liu, William W. Cohen

We introduce a new in-context learning paradigm to measure the ability of Large Language Models (LLMs) to learn novel words during inference.

In-Context Learning Probing Language Models

MATE: Multi-view Attention for Table Transformer Efficiency

1 code implementation • EMNLP 2021 • Julian Martin Eisenschlos, Maharshi Gor, Thomas Müller, William W. Cohen

However, more than 20% of relational tables on the web have 20 or more rows (Cafarella et al., 2008), and these large tables present a challenge for current Transformer models, which are typically limited to 512 tokens.

Inductive Bias Question Answering
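
A simplified sketch of row- and column-restricted attention masks for a linearized table, the kind of sparsity that keeps attention tractable for large tables (this illustrates the general idea only, not MATE's exact head layout):

```python
import torch

def table_attention_masks(row_ids: torch.Tensor, col_ids: torch.Tensor):
    """Build boolean masks so that "row heads" attend only within a row and
    "column heads" only within a column of a linearized table. A simplified
    illustration of sparse table attention, not the paper's exact scheme."""
    same_row = row_ids.unsqueeze(0) == row_ids.unsqueeze(1)  # (seq, seq)
    same_col = col_ids.unsqueeze(0) == col_ids.unsqueeze(1)  # (seq, seq)
    return same_row, same_col

# Toy 2x2 table linearized into four cell tokens.
row_ids = torch.tensor([0, 0, 1, 1])
col_ids = torch.tensor([0, 1, 0, 1])
row_mask, col_mask = table_attention_masks(row_ids, col_ids)
# The (negated) masks can be passed as `attn_mask` to attention layers so that
# each group of heads ignores positions outside its row or column.
```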

Time-Aware Language Models as Temporal Knowledge Bases

no code implementations • 29 Jun 2021 • Bhuwan Dhingra, Jeremy R. Cole, Julian Martin Eisenschlos, Daniel Gillick, Jacob Eisenstein, William W. Cohen

We introduce a diagnostic dataset aimed at probing LMs for factual knowledge that changes over time and highlight problems with LMs at either end of the spectrum -- those trained on specific slices of temporal data, as well as those trained on a wide range of temporal data.

Memorization

DoT: An efficient Double Transformer for NLP tasks with tables

1 code implementation • Findings (ACL) 2021 • Syrine Krichene, Thomas Müller, Julian Martin Eisenschlos

To improve efficiency while maintaining a high accuracy, we propose a new architecture, DoT, a double transformer model, that decomposes the problem into two sub-tasks: a shallow pruning transformer that selects the top-K tokens, followed by a deep task-specific transformer that takes as input those K tokens.

Question Answering
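
A minimal sketch of that two-stage layout in PyTorch (layer counts, dimensions, and the scoring head below are illustrative choices, not the paper's configuration):

```python
import torch
import torch.nn as nn

class DoubleTransformer(nn.Module):
    """Sketch of a two-stage model: a shallow "pruning" encoder scores every
    token, only the top-K tokens are kept, and those are fed to a deeper
    task-specific encoder. Hyperparameters are illustrative, not DoT's."""

    def __init__(self, vocab_size=30522, d_model=256, k=256):
        super().__init__()
        self.k = k
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pruner = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2)                       # shallow pruning transformer
        self.score = nn.Linear(d_model, 1)      # per-token relevance score
        self.task = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=12)                      # deep task-specific transformer
        self.head = nn.Linear(d_model, 2)       # placeholder task head

    def forward(self, input_ids):               # (batch, seq_len), seq_len >= k
        x = self.embed(input_ids)
        scores = self.score(self.pruner(x)).squeeze(-1)           # (batch, seq_len)
        topk = scores.topk(self.k, dim=-1).indices
        topk = topk.sort(dim=-1).values                           # keep original order
        idx = topk.unsqueeze(-1).expand(-1, -1, x.size(-1))
        pruned = torch.gather(x, 1, idx)                          # (batch, K, d_model)
        return self.head(self.task(pruned))

logits = DoubleTransformer()(torch.randint(0, 30522, (2, 1024)))  # toy forward pass
```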

Fool Me Twice: Entailment from Wikipedia Gamification

1 code implementation • NAACL 2021 • Julian Martin Eisenschlos, Bhuwan Dhingra, Jannis Bulian, Benjamin Börschinger, Jordan Boyd-Graber

We release FoolMeTwice (FM2 for short), a large dataset of challenging entailment pairs collected through a fun multi-player game.

Retrieval

Understanding tables with intermediate pre-training

1 code implementation • Findings of the Association for Computational Linguistics 2020 • Julian Martin Eisenschlos, Syrine Krichene, Thomas Müller

To be able to use long examples as input of BERT models, we evaluate table pruning techniques as a pre-processing step to drastically improve the training and prediction efficiency at a moderate drop in accuracy.

Binary Classification Data Augmentation +3
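
As a toy illustration of table pruning as a pre-processing step (the cell and token budgets below are arbitrary; the paper evaluates several concrete pruning strategies):

```python
def prune_table(table, max_cell_tokens=8, max_total_tokens=512):
    """Naive table pruning sketch: truncate every cell to a small token budget,
    then drop trailing rows until the linearized table fits the model's input
    limit. Parameter names and the strategy are illustrative, not the paper's."""
    truncated = [[" ".join(cell.split()[:max_cell_tokens]) for cell in row]
                 for row in table]
    kept, total = [], 0
    for row in truncated:
        row_len = sum(len(cell.split()) for cell in row)
        if total + row_len > max_total_tokens:
            break                                   # budget exhausted
        kept.append(row)
        total += row_len
    return kept

table = [["Player", "Goals scored in the 2014 regular season"],
         ["A. Example", "12"]]
print(prune_table(table, max_cell_tokens=4))
```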

SoftSort: A Continuous Relaxation for the argsort Operator

1 code implementation • 29 Jun 2020 • Sebastian Prillo, Julian Martin Eisenschlos

While sorting is an important procedure in computer science, the argsort operator - which takes as input a vector and returns its sorting permutation - has a discrete image and thus zero gradients almost everywhere.
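
The relaxation can be sketched in a few lines: compare the sorted values against the original values and apply a row-wise softmax, giving a differentiable, row-stochastic approximation of the argsort permutation matrix (the temperature and the absolute-difference comparison below follow the general recipe; consult the paper and released code for the exact operator):

```python
import torch

def soft_sort(scores: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Continuous relaxation of the argsort permutation matrix (sketch).

    Each row compares one position of the descending-sorted scores against all
    original scores; the row-wise softmax over negative absolute differences
    peaks at the index that would be selected by a hard argsort."""
    sorted_scores, _ = torch.sort(scores, descending=True)                 # (n,)
    pairwise = (sorted_scores.unsqueeze(-1) - scores.unsqueeze(0)).abs()   # (n, n)
    return torch.softmax(-pairwise / tau, dim=-1)

scores = torch.tensor([0.3, 2.0, -1.5], requires_grad=True)
P_hat = soft_sort(scores, tau=0.1)
print(P_hat)            # close to the hard permutation matrix for small tau
print(P_hat @ scores)   # approximately sorted scores, with usable gradients
```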
