Search Results for author: Andreas Opedal

Found 7 papers, 5 papers with code

Do Language Models Exhibit the Same Cognitive Biases in Problem Solving as Human Learners?

no code implementations · 31 Jan 2024 · Andreas Opedal, Alessandro Stolfo, Haruki Shirakami, Ying Jiao, Ryan Cotterell, Bernhard Schölkopf, Abulhair Saparov, Mrinmaya Sachan

We find evidence that LLMs, with and without instruction-tuning, exhibit human-like biases in both the text-comprehension and the solution-planning steps of the solving process, but not in the final step, which relies on the problem's arithmetic expressions (solution execution).

Reading Comprehension

An Exploration of Left-Corner Transformations

no code implementations · 27 Nov 2023 · Andreas Opedal, Eleftheria Tsipidi, Tiago Pimentel, Ryan Cotterell, Tim Vieira

The left-corner transformation (Rosenkrantz and Lewis, 1970) is used to remove left recursion from context-free grammars, which is an important step towards making the grammar parsable top-down with simple techniques.
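The left-corner transformation itself is more general than plain left-recursion removal, but a minimal sketch can show why removing left recursion matters for top-down parsing. The example below uses the textbook right-recursive rewrite (not the paper's transformation) of the hypothetical left-recursive grammar `E -> E '+' 'n' | 'n'`: a naive recursive-descent parser for the original grammar would recurse on `E` forever, while the rewritten grammar parses with a simple loop.

```python
def parse_expr(tokens):
    """Recursive-descent recognizer for the right-recursive grammar
        E  -> 'n' E'
        E' -> '+' 'n' E' | epsilon
    obtained by removing left recursion from E -> E '+' 'n' | 'n'.
    Returns True iff the whole token list derives E."""
    pos = 0

    def expect(tok):
        nonlocal pos
        if pos < len(tokens) and tokens[pos] == tok:
            pos += 1
            return True
        return False

    if not expect('n'):        # E must start with 'n'
        return False
    while expect('+'):         # E' loop: each iteration consumes '+ n'
        if not expect('n'):
            return False
    return pos == len(tokens)  # accept only if all input is consumed
```

The tail-recursive `E'` becomes an ordinary loop, so the parser always consumes a token before iterating and cannot loop forever.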

Efficient Semiring-Weighted Earley Parsing

1 code implementation · 6 Jul 2023 · Andreas Opedal, Ran Zmigrod, Tim Vieira, Ryan Cotterell, Jason Eisner

This paper provides a reference description, in the form of a deduction system, of Earley's (1970) context-free parsing algorithm with various speed-ups.

Sentence
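The deduction-system view makes Earley's three inference rules — predict, scan, and complete — explicit. As a rough sketch, here is an unweighted recognizer (the boolean-semiring special case of what the paper generalizes, without the paper's speed-ups), for epsilon-free grammars:

```python
def earley_recognize(grammar, start, words):
    """Earley recognizer for an epsilon-free CFG.
    grammar: dict mapping nonterminal -> list of RHS tuples;
    any symbol not a key of `grammar` is treated as a terminal.
    Items are (lhs, rhs, dot, origin)."""
    n = len(words)
    chart = [set() for _ in range(n + 1)]
    for rhs in grammar[start]:
        chart[0].add((start, rhs, 0, 0))
    for i in range(n + 1):
        changed = True
        while changed:                       # iterate to a fixed point
            changed = False
            for lhs, rhs, dot, origin in list(chart[i]):
                if dot < len(rhs):
                    sym = rhs[dot]
                    if sym in grammar:       # PREDICT: expand nonterminal
                        for r in grammar[sym]:
                            item = (sym, r, 0, i)
                            if item not in chart[i]:
                                chart[i].add(item)
                                changed = True
                    elif i < n and words[i] == sym:   # SCAN: match terminal
                        chart[i + 1].add((lhs, rhs, dot + 1, origin))
                else:                        # COMPLETE: advance waiting items
                    for l2, r2, d2, o2 in list(chart[origin]):
                        if d2 < len(r2) and r2[d2] == lhs:
                            item = (l2, r2, d2 + 1, o2)
                            if item not in chart[i]:
                                chart[i].add(item)
                                changed = True
    return any(lhs == start and dot == len(rhs) and origin == 0
               for lhs, rhs, dot, origin in chart[n])
```

For example, with the grammar `{'S': [('a', 'S', 'b'), ('a', 'b')]}` for the language a^n b^n, the recognizer accepts `"aabb"` and rejects `"aab"`. The semiring-weighted version replaces set membership with weight accumulation under a semiring's ⊕ and ⊗.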

World Models for Math Story Problems

1 code implementation · 7 Jun 2023 · Andreas Opedal, Niklas Stoehr, Abulhair Saparov, Mrinmaya Sachan

In this paper, we consolidate previous work on categorizing and representing math story problems and develop MathWorld, a graph-based semantic formalism specific to the domain of math story problems.

Math
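To give a flavor of what a world model for a story problem can buy, here is a toy sketch: a story is encoded as a sequence of possession and transfer events (a hypothetical schema invented for illustration — not MathWorld's actual formalism), and the answer is read off by executing the events against a state.

```python
def solve(events, query_entity):
    """Execute a toy event-based world model for a math story problem.
    events: list of tuples — ("has", entity, qty) asserts a possession,
    ("transfer", giver, recipient, qty) moves qty between entities.
    Returns the final count held by query_entity."""
    state = {}
    for kind, *args in events:
        if kind == "has":
            entity, qty = args
            state[entity] = qty
        elif kind == "transfer":
            giver, recipient, qty = args
            state[giver] = state.get(giver, 0) - qty
            state[recipient] = state.get(recipient, 0) + qty
    return state[query_entity]

# "Alice has 3 apples. Bob has 4 apples. Bob gives Alice 2 apples.
#  How many apples does Alice have?"
story = [("has", "Alice", 3), ("has", "Bob", 4),
         ("transfer", "Bob", "Alice", 2)]
```

Separating the semantic representation from its execution is what lets a graph-based formalism answer different questions about the same story.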

On the Intersection of Context-Free and Regular Languages

1 code implementation · 14 Sep 2022 · Clemente Pasti, Andreas Opedal, Tiago Pimentel, Tim Vieira, Jason Eisner, Ryan Cotterell

It shows, by a simple construction, that the intersection of a context-free language and a regular language is itself context-free.
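The classical product construction can be sketched directly. Below is an illustrative implementation (my own toy version, not the paper's code) for a CFG in Chomsky normal form and a DFA: nonterminals of the intersection grammar are triples (p, A, q) meaning "A derives a string that takes the DFA from state p to state q", and a small CYK recognizer checks membership in the resulting grammar.

```python
from itertools import product

def bar_hillel(binary, lexical, start, states, delta, q0, finals):
    """Intersect a CNF grammar with a DFA via the triple construction.
    binary: {A: [(B, C), ...]}, lexical: {A: [a, ...]},
    delta: {(state, symbol): state}. Returns the product grammar."""
    new_bin, new_lex = {}, {}
    for A, rhss in binary.items():
        for B, C in rhss:
            for p, r, q in product(states, repeat=3):
                new_bin.setdefault((p, A, q), []).append(((p, B, r), (r, C, q)))
    for A, terms in lexical.items():
        for a in terms:
            for p in states:
                q = delta.get((p, a))
                if q is not None:
                    new_lex.setdefault((p, A, q), []).append(a)
    starts = {(q0, start, f) for f in finals}
    return new_bin, new_lex, starts

def cyk(binary, lexical, starts, w):
    """CYK recognizer for an epsilon-free CNF grammar."""
    n = len(w)
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, a in enumerate(w):
        for A, terms in lexical.items():
            if a in terms:
                chart[i][i + 1].add(A)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            k = i + span
            for j in range(i + 1, k):
                for A, rhss in binary.items():
                    for B, C in rhss:
                        if B in chart[i][j] and C in chart[j][k]:
                            chart[i][k].add(A)
    return bool(starts & chart[0][n])

# CFG for {a^n b^n : n >= 1} in CNF; DFA accepting lengths divisible by 4.
binary = {"S": [("A", "B"), ("A", "X")], "X": [("S", "B")]}
lexical = {"A": ["a"], "B": ["b"]}
states = [0, 1, 2, 3]
delta = {(s, c): (s + 1) % 4 for s in states for c in "ab"}
nb, nl, starts = bar_hillel(binary, lexical, "S", states, delta, 0, {0})
# The intersection is {a^n b^n : n even, n >= 1}, still context-free.
```

The product grammar has O(|G| · |Q|³) rules, which is the price of the naive construction; the paper's contribution concerns doing better than this.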

Recovering Barabási-Albert Parameters of Graphs through Disentanglement

1 code implementation · ICLR Workshop GTRL 2021 · Cristina Guzman, Daphna Keidar, Tristan Meynier, Andreas Opedal, Niklas Stoehr

We first learn the generative BA parameters in a supervised fashion using a Graph Neural Network (GNN) and a Random Forest Regressor, by minimizing the squared loss between the true generative parameters and the latent variables.

Disentanglement · Graph Generation
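As a much simpler illustration of the underlying idea — that the generative parameters of a Barabási-Albert graph leave recoverable statistical traces — here is a toy sketch (not the paper's GNN/Random-Forest approach): generate a BA graph, then recover the attachment parameter m from the mean degree, since the BA process adds roughly m edges per node.

```python
import random

def barabasi_albert(n, m, seed=0):
    """Generate a Barabási-Albert graph: start from m isolated seed nodes,
    then attach each new node to m distinct existing nodes chosen with
    probability proportional to degree (uniformly on the first step).
    Returns (n, edge_set)."""
    rng = random.Random(seed)
    edges = set()
    targets = list(range(m))   # first new node connects to all seeds
    repeated = []              # node multiset weighted by current degree
    for new in range(m, n):
        for t in targets:
            edges.add((min(new, t), max(new, t)))
        repeated.extend(targets)       # each target gains one degree
        repeated.extend([new] * m)     # new node enters with degree m
        picked = set()
        while len(picked) < m:         # degree-proportional sampling
            picked.add(rng.choice(repeated))
        targets = list(picked)
    return n, edges

def estimate_m(n, edges):
    """Recover the attachment parameter from the mean degree:
    the BA process yields mean degree approximately 2m."""
    mean_degree = 2 * len(edges) / n
    return round(mean_degree / 2)
```

The paper's disentanglement setup goes much further — learning latent variables that align with the generative parameters — but the point is the same: BA parameters are identifiable from graph statistics.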
