Search Results for author: Javier Turek

Found 7 papers, 4 papers with code

Humans and language models diverge when predicting repeating text

1 code implementation • 10 Oct 2023 • Aditya R. Vaidya, Javier Turek, Alexander G. Huth

In contrast with these findings, we present a scenario in which the performance of humans and LMs diverges.

In-Context Learning
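
As a rough illustration of the kind of comparison this paper describes, the sketch below measures a causal language model's per-token surprisal on a passage read once versus the same passage repeated; repetition typically makes the model far more confident, which is the behaviour contrasted with human predictions. The choice of gpt2 and the sample sentence are illustrative assumptions, not the authors' setup.

# Minimal sketch, assuming the Hugging Face transformers and torch packages.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(text):
    """Per-token surprisal (negative log-probability, in nats) under the LM."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # position t predicts token t+1
    targets = ids[0, 1:]
    return -log_probs[torch.arange(targets.size(0)), targets]

sentence = "The quick brown fox jumps over the lazy dog. "
once = token_surprisals(sentence)
twice = token_surprisals(sentence * 2)

print("mean surprisal, first pass:   ", once.mean().item())
print("mean surprisal, repeated text:", twice[once.size(0):].mean().item())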

Large Language Models Based Automatic Synthesis of Software Specifications

no code implementations • 18 Apr 2023 • Shantanu Mandal, Adhrik Chethan, Vahid Janfaza, S M Farabi Mahmud, Todd A Anderson, Javier Turek, Jesmin Jahan Tithi, Abdullah Muzahid

As software systems grow in complexity and scale, the number of configurations and associated specifications required to ensure correct operation can become large and prohibitively difficult to manage manually.

Language Modelling • Large Language Model

Synthesizing Programs with Continuous Optimization

no code implementations • 2 Nov 2022 • Shantanu Mandal, Todd A. Anderson, Javier Turek, Justin Gottschlich, Abdullah Muzahid

In this paper, we present a novel formulation of program synthesis as a continuous optimization problem and use a state-of-the-art evolutionary approach, Covariance Matrix Adaptation Evolution Strategy (CMA-ES), to solve it.

Program Synthesis
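
A minimal sketch of the general idea, not the paper's formulation: a fixed-length program is encoded as a continuous score vector, decoded by argmax into operations from a toy DSL, and the scores are optimized with CMA-ES (via the third-party cma package) against input-output examples. The DSL, the argmax decoding, and the target examples are all illustrative assumptions.

# Sketch only: toy DSL and encoding are assumptions; requires `pip install cma`.
import numpy as np
import cma

OPS = [lambda x: x + 1, lambda x: x * 2, lambda x: x ** 2, lambda x: x - 3]
PROGRAM_LEN = 3                          # compose three unary operations
EXAMPLES = [(1, 8), (2, 18), (3, 32)]    # realizable by: x+1, then x**2, then x*2

def decode(theta):
    """Map a continuous vector to a program: argmax over op scores per slot."""
    scores = np.asarray(theta).reshape(PROGRAM_LEN, len(OPS))
    return [OPS[i] for i in scores.argmax(axis=1)]

def loss(theta):
    """Squared error of the decoded program on the I/O examples."""
    program = decode(theta)
    total = 0.0
    for x, y in EXAMPLES:
        out = x
        for op in program:
            out = op(out)
        total += (out - y) ** 2
    return total

es = cma.CMAEvolutionStrategy(PROGRAM_LEN * len(OPS) * [0.0], 0.5)
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [loss(c) for c in candidates])
    if es.result.fbest == 0.0:
        break
print("best loss:", es.result.fbest)

Because the argmax decoding makes the objective piecewise constant, a smoother relaxation of the discrete choices would likely behave better; the sketch keeps it simple for illustration.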

Selecting Informative Contexts Improves Language Model Fine-tuning

no code implementations • ACL 2021 • Richard Antonello, Nicole Beckage, Javier Turek, Alexander Huth

Here we present a general fine-tuning method that we call information gain filtration for improving the overall training efficiency and final performance of language model fine-tuning.

Language Modelling
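
The sketch below illustrates one way to realize the idea of filtering fine-tuning contexts by estimated usefulness: each candidate context is scored by the drop in loss on a small held-out batch after a single gradient step on that context. This is a stand-in proxy rather than the paper's exact information-gain estimator, and the model name, learning rate, texts, and keep-threshold are illustrative assumptions.

# Sketch only: one-step held-out loss reduction as a proxy "information gain".
import copy
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
base_model = GPT2LMHeadModel.from_pretrained("gpt2")

def lm_loss(model, text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    return model(ids, labels=ids).loss

def information_gain(context, heldout_text, lr=1e-4):
    """Held-out loss reduction after one fine-tuning step on `context`."""
    model = copy.deepcopy(base_model)
    with torch.no_grad():
        before = lm_loss(model, heldout_text).item()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    lm_loss(model, context).backward()
    opt.step()
    with torch.no_grad():
        after = lm_loss(model, heldout_text).item()
    return before - after

contexts = ["First candidate fine-tuning context.", "Second candidate fine-tuning context."]
heldout = "A small held-out sample from the target domain."
scores = [information_gain(c, heldout) for c in contexts]
filtered = [c for c, s in zip(contexts, scores) if s > 0]  # keep only high-gain contexts
print(scores, len(filtered))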

Low-Dimensional Structure in the Space of Language Representations is Reflected in Brain Responses

1 code implementation • NeurIPS 2021 • Richard Antonello, Javier Turek, Vy Vo, Alexander Huth

We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI.

Transfer Learning • Translation • +1
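
To make the encoding-model setup concrete, here is a small sketch of fitting a ridge regression from a language feature space to voxel responses and scoring it by held-out prediction correlation, which is the sense in which a feature space "maps to" brain responses. The random arrays stand in for real stimulus features and fMRI data, and the dimensions and regularization strength are illustrative assumptions, not the paper's pipeline.

# Sketch only: synthetic data stands in for LM features and fMRI responses.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_timepoints, n_features, n_voxels = 500, 64, 200

features = rng.standard_normal((n_timepoints, n_features))          # e.g. LM hidden states per stimulus segment
responses = features @ rng.standard_normal((n_features, n_voxels)) \
            + 0.5 * rng.standard_normal((n_timepoints, n_voxels))   # synthetic "fMRI" responses

train, test = slice(0, 400), slice(400, 500)
model = Ridge(alpha=10.0).fit(features[train], responses[train])
pred = model.predict(features[test])

# Per-voxel correlation between predicted and held-out responses: the better a
# feature space predicts these, the better it maps to brain activity.
corr = [np.corrcoef(pred[:, v], responses[test][:, v])[0, 1] for v in range(n_voxels)]
print("mean held-out correlation:", float(np.mean(corr)))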

Selecting Informative Contexts Improves Language Model Finetuning

1 code implementation • 1 May 2020 • Richard Antonello, Nicole Beckage, Javier Turek, Alexander Huth

Here we present a general fine-tuning method that we call information gain filtration for improving the overall training efficiency and final performance of language model fine-tuning.

Language Modelling
