Search Results for author: Elias Stengel-Eskin

Found 23 papers, 16 papers with code

Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training

no code implementations • 4 Mar 2024 • David Wan, Jaemin Cho, Elias Stengel-Eskin, Mohit Bansal

Highlighting particularly relevant regions of an image can improve the performance of vision-language models (VLMs) on various vision-language (VL) tasks by guiding the model to attend more closely to these regions of interest.

Math • Phrase Grounding +2
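The guidance step itself needs no training: two forward passes are contrasted, one on the full image and one with the region of interest masked out. A minimal sketch of that contrast, assuming a classifier-free-guidance-style combination of next-token logits (the `alpha` weight and tensor shapes are illustrative assumptions, not the paper's exact formulation):

```python
import torch

def contrastive_region_guidance(logits_full: torch.Tensor,
                                logits_region_masked: torch.Tensor,
                                alpha: float = 1.0) -> torch.Tensor:
    """Contrast next-token logits from the full image against logits from
    the same image with the region of interest masked out; amplifying the
    difference steers generation toward content grounded in that region."""
    return logits_full + alpha * (logits_full - logits_region_masked)

# Dummy logits over a 5-token vocabulary: token 1 depends on the region,
# so its score rises under guidance.
full = torch.tensor([1.0, 2.0, 0.5, 0.1, 0.0])
masked = torch.tensor([1.0, 1.0, 0.5, 0.1, 0.0])
print(contrastive_region_guidance(full, masked))
```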

Soft Self-Consistency Improves Language Model Agents

1 code implementation • 20 Feb 2024 • Han Wang, Archiki Prasad, Elias Stengel-Eskin, Mohit Bansal

Current "sample and select" methods such as self-consistency (SC) rely on majority voting to score answers.

Language Modelling • valid
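A minimal sketch of how a soft score could replace the hard vote, assuming each sample records its answer string and token log-probabilities (the field names and the mean-log-prob aggregation are our assumptions, not necessarily the paper's exact scoring):

```python
from collections import Counter, defaultdict

def majority_vote(samples):
    """Hard self-consistency: return the most frequent answer."""
    return Counter(s["answer"] for s in samples).most_common(1)[0][0]

def soft_self_consistency(samples):
    """Soft variant: rate each sample by its mean token log-probability,
    then pick the answer with the best aggregate score."""
    scores = defaultdict(list)
    for s in samples:
        scores[s["answer"]].append(sum(s["token_logprobs"]) / len(s["token_logprobs"]))
    return max(scores, key=lambda a: sum(scores[a]) / len(scores[a]))

samples = [
    {"answer": "42", "token_logprobs": [-0.1, -0.2]},   # one confident sample
    {"answer": "41", "token_logprobs": [-2.0, -1.5]},   # two shaky samples
    {"answer": "41", "token_logprobs": [-1.8, -1.9]},
]
print(majority_vote(samples))          # "41" wins the raw vote
print(soft_self_consistency(samples))  # "42" wins on confidence
```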

GTBench: Uncovering the Strategic Reasoning Limitations of LLMs via Game-Theoretic Evaluations

1 code implementation • 19 Feb 2024 • Jinhao Duan, Renming Zhang, James Diffenderfer, Bhavya Kailkhura, Lichao Sun, Elias Stengel-Eskin, Mohit Bansal, Tianlong Chen, Kaidi Xu

As Large Language Models (LLMs) are integrated into critical real-world applications, their strategic and logical reasoning abilities are increasingly crucial.

Card Games • Logical Reasoning

MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models

1 code implementation • 2 Feb 2024 • Justin Chih-Yao Chen, Swarnadeep Saha, Elias Stengel-Eskin, Mohit Bansal

Experiments on seven widely-used commonsense and math reasoning benchmarks show that MAGDi improves the reasoning capabilities of smaller models, outperforming several methods that distill from a single teacher and multiple teachers.

Language Modelling • Large Language Model +1
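As a rough sketch of the distillation target, a multi-agent interaction can be recorded as a graph of agent turns with correctness labels and response edges (the fields below are illustrative assumptions; the paper defines its own graph encoding and objectives):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InteractionNode:
    """One agent turn in a multi-agent discussion, kept for distillation."""
    agent: str                    # which teacher model produced the turn
    rationale: str                # the reasoning text
    answer: str                   # the answer committed to in this turn
    correct: bool                 # supervision signal for the student
    responds_to: List[int] = field(default_factory=list)  # edges to earlier turns

# Toy two-round graph: node 2 revises after seeing both earlier turns.
graph = [
    InteractionNode("teacher-A", "18 / 2 = 9", "9", True),
    InteractionNode("teacher-B", "18 - 2 = 16", "16", False),
    InteractionNode("teacher-A", "B misread the operator; 18 / 2 = 9", "9", True,
                    responds_to=[0, 1]),
]
```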

ReGAL: Refactoring Programs to Discover Generalizable Abstractions

1 code implementation • 29 Jan 2024 • Elias Stengel-Eskin, Archiki Prasad, Mohit Bansal

While large language models (LLMs) are increasingly being used for program synthesis, they lack the global view needed to develop useful abstractions; they generally predict programs one at a time, often repeating the same functionality.

Date Understanding • Program Synthesis
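A toy before/after of the failure mode and the refactor (the date-arithmetic helpers are our own illustration, not abstractions discovered by ReGAL):

```python
# Before: two independently generated programs repeat the same weekday logic.
def days_until_friday(today: int) -> int:   # 0 = Mon .. 6 = Sun
    return (4 - today) % 7

def days_until_sunday(today: int) -> int:
    return (6 - today) % 7

# After a ReGAL-style refactor: the shared pattern is lifted into one
# reusable abstraction that future programs can call instead of re-deriving.
def days_until(target: int, today: int) -> int:
    return (target - today) % 7

assert days_until(4, 0) == days_until_friday(0) == 4
```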

Rephrase, Augment, Reason: Visual Grounding of Questions for Vision-Language Models

1 code implementation • 9 Oct 2023 • Archiki Prasad, Elias Stengel-Eskin, Mohit Bansal

An increasing number of vision-language tasks can be handled with little to no training, i.e., in a zero- and few-shot manner, by marrying large language models (LLMs) to vision encoders, resulting in large vision-language models (LVLMs).

Language Modelling • Question Answering +2

Zero and Few-shot Semantic Parsing with Ambiguous Inputs

1 code implementation • 1 Jun 2023 • Elias Stengel-Eskin, Kyle Rawlins, Benjamin Van Durme

We attempt to address this shortcoming by introducing AmP, a framework, dataset, and challenge for translating ambiguous natural language to formal representations like logic and code.

Semantic Parsing
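A worked example of the kind of ambiguity at stake (our illustration: quantifier scope in "Every student read a book" admits two logical forms that really do diverge):

```python
# Toy model for "Every student read a book".
students = {"ann", "bo"}
books = {"b1", "b2"}
read = {("ann", "b1"), ("bo", "b2")}

# Surface scope: forall x. student(x) -> exists y. book(y) & read(x, y)
surface = all(any((s, b) in read for b in books) for s in students)
# Inverse scope: exists y. book(y) & forall x. student(x) -> read(x, y)
inverse = any(all((s, b) in read for s in students) for b in books)

print(surface, inverse)  # True False: the two readings disagree here
```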

Did You Mean...? Confidence-based Trade-offs in Semantic Parsing

no code implementations • 29 Mar 2023 • Elias Stengel-Eskin, Benjamin Van Durme

We then examine how confidence scores can help optimize the trade-off between usability and safety.

Semantic Parsing
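A minimal sketch of that trade-off, assuming a parser that returns a parse with a confidence score (the threshold value and the confirmation interface are illustrative):

```python
def act_on_parse(parse: str, confidence: float, threshold: float = 0.8) -> str:
    """Execute confident parses; otherwise fall back to a confirmation
    question. Raising the threshold favors safety (fewer wrong executions)
    at the cost of usability (more interruptions)."""
    if confidence >= threshold:
        return f"EXECUTE: {parse}"
    return f"CONFIRM: did you mean {parse!r}?"

print(act_on_parse("book_flight(NYC, SEA)", 0.95))  # runs immediately
print(act_on_parse("book_flight(NYC, SEA)", 0.40))  # asks the user first
```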

Calibrated Interpretation: Confidence Estimation in Semantic Parsing

2 code implementations • 14 Nov 2022 • Elias Stengel-Eskin, Benjamin Van Durme

Sequence generation models are increasingly being used to translate natural language into programs, i.e., to perform executable semantic parsing.

Semantic Parsing
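Two common baseline estimators for sequence-level confidence, sketched from token log-probabilities (the paper compares several estimators; these two are standard baselines, not its full method):

```python
import math

def product_confidence(token_logprobs):
    """Sequence probability: exp of the summed token log-probabilities."""
    return math.exp(sum(token_logprobs))

def min_token_confidence(token_logprobs):
    """A stricter score: the least confident token decides."""
    return math.exp(min(token_logprobs))

logprobs = [-0.05, -0.10, -2.30, -0.02]  # one shaky token in the program
print(product_confidence(logprobs))      # ~0.085
print(min_token_confidence(logprobs))    # ~0.100
```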

When More Data Hurts: A Troubling Quirk in Developing Broad-Coverage Natural Language Understanding Systems

1 code implementation • 24 May 2022 • Elias Stengel-Eskin, Emmanouil Antonios Platanios, Adam Pauls, Sam Thomson, Hao Fang, Benjamin Van Durme, Jason Eisner, Yu Su

Rejecting class imbalance as the sole culprit, we reveal that the trend is closely associated with an effect we call source signal dilution, where strong lexical cues for the new symbol become diluted as the training dataset grows.

Intent Recognition • Natural Language Understanding +1
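A toy numeric illustration of dilution (numbers invented): a lexical cue that initially predicts the new symbol strongly loses predictive power as the rest of the training set grows around it.

```python
def cue_strength(new_symbol_with_cue: int, other_with_cue: int) -> float:
    """P(new symbol | cue) under simple relative-frequency estimation."""
    return new_symbol_with_cue / (new_symbol_with_cue + other_with_cue)

# 50 utterances of the new intent contain the cue; the second argument is
# how many other training utterances also contain it.
print(cue_strength(50, 50))    # small dataset: 0.50
print(cue_strength(50, 950))   # much larger dataset, same 50 examples: 0.05
```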

The Curious Case of Control

1 code implementation • 24 May 2022 • Elias Stengel-Eskin, Benjamin Van Durme

Given the advanced fluency of large generative language models, we ask whether model outputs are consistent with these heuristics, and to what degree different models are consistent with each other.

Visual Commonsense in Pretrained Unimodal and Multimodal Models

1 code implementation • NAACL 2022 • Chenyu Zhang, Benjamin Van Durme, Zhuowan Li, Elias Stengel-Eskin

Our commonsense knowledge about objects includes their typical visual attributes; we know that bananas are typically yellow or green, and not purple.

Attribute • Visual Commonsense Tests +1

Guiding Multi-Step Rearrangement Tasks with Natural Language Instructions

2 code implementations • Conference on Robot Learning (CoRL) 2021 • Elias Stengel-Eskin, Andrew Hundt, Zhuohong He, Aditya Murali, Nakul Gopalan, Matthew Gombolay, Gregory Hager

Our model completes block manipulation tasks with synthetic commands 530% more often than a UNet-based baseline, and learns to localize actions correctly while creating a mapping of symbols to perceptual input that supports compositional reasoning.

Instruction Following

Joint Universal Syntactic and Semantic Parsing

1 code implementation • 12 Apr 2021 • Elias Stengel-Eskin, Kenton Murray, Sheng Zhang, Aaron Steven White, Benjamin Van Durme

While numerous attempts have been made to jointly parse syntax and semantics, high performance in one domain typically comes at the price of performance in the other.

Semantic Parsing

Iterative Paraphrastic Augmentation with Discriminative Span Alignment

no code implementations • 1 Jul 2020 • Ryan Culkin, J. Edward Hu, Elias Stengel-Eskin, Guanghui Qin, Benjamin Van Durme

We introduce a novel paraphrastic augmentation strategy based on sentence-level lexically constrained paraphrasing and discriminative span alignment.

Sentence

Universal Decompositional Semantic Parsing

no code implementations • ACL 2020 • Elias Stengel-Eskin, Aaron Steven White, Sheng Zhang, Benjamin Van Durme

We introduce a transductive model for parsing into Universal Decompositional Semantics (UDS) representations, which jointly learns to map natural language utterances into UDS graph structures and annotate the graph with decompositional semantic attribute scores.

Attribute • Semantic Parsing
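A toy sketch of the output structure: a graph whose nodes and edges carry real-valued attribute scores (the attribute names and values below are illustrative, not verbatim from the UDS annotation scheme):

```python
# "The cat ate." -> predicate/argument nodes plus scalar attribute scores.
uds_graph = {
    "nodes": {
        "pred-ate": {"factuality": 0.92, "genericity-dynamic": 0.61},
        "arg-cat":  {"wordsense-animal": 0.98},
    },
    "edges": {
        ("pred-ate", "arg-cat"): {"protoroles-volition": 0.70},
    },
}
```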

A Discriminative Neural Model for Cross-Lingual Word Alignment

no code implementations • IJCNLP 2019 • Elias Stengel-Eskin, Tzu-Ray Su, Matt Post, Benjamin Van Durme

We introduce a novel discriminative word alignment model, which we integrate into a Transformer-based machine translation model.

Machine Translation • NER +2
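A minimal sketch of a discriminative alignment head over encoder/decoder states (the bilinear scoring and shapes are our assumptions, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class AlignmentHead(nn.Module):
    """Score every (target, source) token pair from Transformer states;
    training would supervise these logits with gold word alignments."""
    def __init__(self, d_model: int):
        super().__init__()
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, src_states: torch.Tensor, tgt_states: torch.Tensor):
        # src_states: (T_src, d), tgt_states: (T_tgt, d)
        return tgt_states @ self.proj(src_states).t()  # (T_tgt, T_src) logits

head = AlignmentHead(8)
logits = head(torch.randn(5, 8), torch.randn(4, 8))
print(logits.shape)                # torch.Size([4, 5])
alignment = logits.argmax(dim=-1)  # predicted source index per target token
```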
