Search Results for author: Sean Welleck

Found 47 papers, 35 papers with code

Data for Mathematical Copilots: Better Ways of Presenting Proofs for Machine Learning

no code implementations19 Dec 2024 Simon Frieder, Jonas Bayer, Katherine M. Collins, Julius Berner, Jacob Loader, András Juhász, Fabian Ruehle, Sean Welleck, Gabriel Poesia, Ryan-Rhys Griffiths, Adrian Weller, Anirudh Goyal, Thomas Lukasiewicz, Timothy Gowers

The suite of datasets commonly used to train and evaluate the mathematical capabilities of AI-based mathematical copilots (primarily large language models) exhibits several shortcomings.

Math

AlphaVerus: Bootstrapping Formally Verified Code Generation through Self-Improving Translation and Treefinement

no code implementations9 Dec 2024 Pranjal Aggarwal, Bryan Parno, Sean Welleck

Automated code generation with large language models has gained significant traction, but there remains no guarantee on the correctness of generated code.

Code Generation HumanEval

Evaluating Language Models as Synthetic Data Generators

1 code implementation4 Dec 2024 Seungone Kim, Juyoung Suk, Xiang Yue, Vijay Viswanathan, Seongyun Lee, Yizhong Wang, Kiril Gashteovski, Carolin Lawrence, Sean Welleck, Graham Neubig

Given the increasing use of synthetic data in language model (LM) post-training, an LM's ability to generate high-quality data has become nearly as crucial as its ability to solve problems directly.

Language Modeling Language Modelling +1

ImProver: Agent-Based Automated Proof Optimization

2 code implementations7 Oct 2024 Riyaz Ahuja, Jeremy Avigad, Prasad Tetali, Sean Welleck

To this end, we study a new problem of automated proof optimization: rewriting a proof so that it is correct and optimizes for an arbitrary criterion, such as length or readability.

Language Modelling Large Language Model
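
The abstract frames proof optimization as rewriting a proof so it stays correct while improving a criterion such as length. A minimal sketch of such a propose-verify-score loop is below; `llm_rewrite` and `check_proof` are hypothetical stand-ins for an LLM call and a Lean proof checker, not ImProver's actual interface.

```python
# Hypothetical propose-verify-score loop for proof optimization.
# `llm_rewrite` and `check_proof` are placeholders, not ImProver's API.
from typing import Callable, List

def optimize_proof(
    theorem: str,
    proof: str,
    llm_rewrite: Callable[[str, str], List[str]],  # proposes candidate rewrites
    check_proof: Callable[[str, str], bool],       # verifies a candidate in the proof assistant
    score: Callable[[str], float] = len,           # optimization criterion, e.g. proof length
    rounds: int = 3,
) -> str:
    best = proof
    for _ in range(rounds):
        for candidate in llm_rewrite(theorem, best):
            # keep only candidates that still prove the theorem
            # and improve the chosen criterion
            if check_proof(theorem, candidate) and score(candidate) < score(best):
                best = candidate
    return best
```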

miniCTX: Neural Theorem Proving with (Long-)Contexts

3 code implementations5 Aug 2024 Jiewen Hu, Thomas Zhu, Sean Welleck

We introduce miniCTX, which tests a model's ability to prove formal mathematical theorems that depend on new context that is not seen during training.

Automated Theorem Proving
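
To illustrate what "depends on new context" means, here is a made-up, miniCTX-style Lean 4 item (not taken from the benchmark, and assuming a Mathlib-enabled toolchain): the target theorem refers to a definition introduced in the same file, so a prover must use this in-file context rather than a memorized library lemma.

```lean
import Mathlib.Tactic

-- Hypothetical in-file context: a new definition the model has not seen in training.
def myDouble (n : Nat) : Nat := n + n

-- The target theorem only makes sense given the definition above.
theorem myDouble_add (m n : Nat) : myDouble (m + n) = myDouble m + myDouble n := by
  simp only [myDouble]
  omega
```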

Inference Scaling Laws: An Empirical Analysis of Compute-Optimal Inference for Problem-Solving with Language Models

no code implementations1 Aug 2024 Yangzhen Wu, Zhiqing Sun, Shanda Li, Sean Welleck, Yiming Yang

As a first step towards understanding and designing compute-optimal inference methods, we studied cost-performance trade-offs for inference strategies such as greedy search, majority voting, best-of-$n$, weighted voting, and two different tree search algorithms, using different model sizes and compute budgets.

Math
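
A minimal sketch of two of the inference strategies named in the abstract, best-of-$n$ under a reward model and (weighted) majority voting; the candidate answers and the `reward` callable below are toy placeholders for model samples and a learned verifier.

```python
# Toy sketch of best-of-n and weighted majority voting over sampled answers.
import random
from collections import defaultdict
from typing import Callable, List, Tuple

def best_of_n(candidates: List[str], reward: Callable[[str], float]) -> str:
    """Return the single highest-reward sample."""
    return max(candidates, key=reward)

def weighted_vote(candidates: List[Tuple[str, float]]) -> str:
    """Sum weights per distinct answer and return the heaviest one.
    With unit weights this reduces to plain majority voting."""
    totals = defaultdict(float)
    for answer, weight in candidates:
        totals[answer] += weight
    return max(totals, key=totals.get)

# More samples = more inference compute spent on the same model.
answers = [random.choice(["42", "41", "42", "40"]) for _ in range(16)]
print(best_of_n(answers, reward=lambda a: float(a == "42")))
print(weighted_vote([(a, 1.0) for a in answers]))
```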

Lean-STaR: Learning to Interleave Thinking and Proving

no code implementations14 Jul 2024 Haohan Lin, Zhiqing Sun, Yiming Yang, Sean Welleck

We present Lean-STaR, a framework for training language models to produce informal thoughts prior to each step of a proof, thereby boosting the model's theorem-proving capabilities.

Automated Theorem Proving Language Modeling +1
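
A hypothetical illustration of the interleaved format the abstract describes, an informal thought preceding each formal proof step; the field names and layout are invented for exposition, not Lean-STaR's actual training schema.

```python
# Invented example of a thought-then-tactic training record.
proof_step = {
    "goal":    "n : Nat ⊢ n + 0 = n",
    "thought": "Adding zero on the right should simplify away; try simp.",
    "tactic":  "simp",
}
prompt = proof_step["goal"]
# The model is trained to emit the informal thought first, then the tactic.
target = f"-- {proof_step['thought']}\n{proof_step['tactic']}"
```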

From Decoding to Meta-Generation: Inference-time Algorithms for Large Language Models

no code implementations24 Jun 2024 Sean Welleck, Amanda Bertsch, Matthew Finlayson, Hailey Schoelkopf, Alex Xie, Graham Neubig, Ilia Kulikov, Zaid Harchaoui

One of the most striking findings in modern research on large language models (LLMs) is that scaling up compute during training leads to better results.

Survey

miniCodeProps: a Minimal Benchmark for Proving Code Properties

no code implementations16 Jun 2024 Evan Lohn, Sean Welleck

We publicly release miniCodeProps as a benchmark for furthering automated theorem proving in the context of formally verified code.

AI Agent Automated Theorem Proving

Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision

1 code implementation14 Mar 2024 Zhiqing Sun, Longhui Yu, Yikang Shen, Weiyang Liu, Yiming Yang, Sean Welleck, Chuang Gan

This paper answers this question in the context of tackling hard reasoning tasks (e.g., level 4-5 MATH problems) via learning from human annotations on easier tasks (e.g., level 1-3 MATH problems), which we term easy-to-hard generalization.

Math Reinforcement Learning (RL) +1

STEER: Unified Style Transfer with Expert Reinforcement

1 code implementation13 Nov 2023 Skyler Hallinan, Faeze Brahman, Ximing Lu, JaeHun Jung, Sean Welleck, Yejin Choi

We propose STEER: Unified Style Transfer with Expert Reinforcement, a unified framework developed to overcome the challenge of limited parallel data for style transfer.

Style Transfer Text Style Transfer

LLMSTEP: LLM proofstep suggestions in Lean

1 code implementation27 Oct 2023 Sean Welleck, Rahul Saha

LLMSTEP is a Lean 4 tactic that sends a user's proof state to a server hosting a language model.

Language Modeling Language Modelling
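
A sketch of the client/server split the abstract describes: the editor-side tactic ships the current proof state to a local model server and displays the returned suggestions. The URL and JSON field names below are invented placeholders, not LLMSTEP's actual wire format.

```python
# Minimal client sketch: POST a proof state, read back tactic suggestions.
import json
from urllib import request

def suggest(tactic_state: str, url: str = "http://localhost:6000/suggest") -> list:
    payload = json.dumps({"tactic_state": tactic_state}).encode("utf-8")
    req = request.Request(url, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["suggestions"]

if __name__ == "__main__":
    print(suggest("n : Nat\n⊢ n + 0 = n"))
```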

Faith and Fate: Limits of Transformers on Compositionality

1 code implementation NeurIPS 2023 Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D. Hwang, Soumya Sanyal, Sean Welleck, Xiang Ren, Allyson Ettinger, Zaid Harchaoui, Yejin Choi

We formulate compositional tasks as computation graphs to systematically quantify the level of complexity, and break down reasoning steps into intermediate sub-procedures.
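
A toy illustration (not the paper's code) of casting a compositional task as a computation graph: the nodes below are the primitive operations of grade-school multiplication, and graph size and depth serve as the kind of complexity measures the abstract mentions.

```python
# Build a computation DAG for multi-digit multiplication and measure its depth.
import networkx as nx

def multiplication_graph(a: int, b: int) -> nx.DiGraph:
    g = nx.DiGraph()
    a_digits = [int(d) for d in str(a)][::-1]
    b_digits = [int(d) for d in str(b)][::-1]
    partials = []
    for i, da in enumerate(a_digits):
        for j, db in enumerate(b_digits):
            node = f"mul[{i},{j}]"          # single-digit product
            g.add_node(node, value=da * db)
            partials.append(node)
    prev = None
    for k, node in enumerate(partials):     # accumulate partial products one by one
        acc = f"add[{k}]"
        g.add_edge(node, acc)
        if prev is not None:
            g.add_edge(prev, acc)
        prev = acc
    return g

g = multiplication_graph(37, 584)
print("nodes:", g.number_of_nodes(), "depth:", nx.dag_longest_path_length(g))
```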

Inference-Time Policy Adapters (IPA): Tailoring Extreme-Scale LMs without Fine-tuning

1 code implementation24 May 2023 Ximing Lu, Faeze Brahman, Peter West, Jaehun Jang, Khyathi Chandu, Abhilasha Ravichander, Lianhui Qin, Prithviraj Ammanabrolu, Liwei Jiang, Sahana Ramnath, Nouha Dziri, Jillian Fisher, Bill Yuchen Lin, Skyler Hallinan, Xiang Ren, Sean Welleck, Yejin Choi

While extreme-scale language models have demonstrated exceptional performance on a variety of language tasks, the degree of control over these language models through pure prompting can often be limited.

Language Modeling Language Modelling +2

MAUVE Scores for Generative Models: Theory and Practice

1 code implementation30 Dec 2022 Krishna Pillutla, Lang Liu, John Thickstun, Sean Welleck, Swabha Swayamdipta, Rowan Zellers, Sewoong Oh, Yejin Choi, Zaid Harchaoui

We present MAUVE, a family of comparison measures between pairs of distributions such as those encountered in the generative modeling of text or images.

Quantization
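
A from-scratch numeric sketch of the divergence-frontier idea behind MAUVE, applied to two already-quantized (strictly positive) histograms; this illustrates the measure only and is not the released `mauve` package.

```python
# Area under the divergence frontier traced by mixtures r = w*p + (1-w)*q.
import numpy as np

def kl(p: np.ndarray, q: np.ndarray) -> float:
    """KL(P || Q) for strictly positive histograms."""
    return float(np.sum(p * np.log(p / q)))

def mauve_sketch(p: np.ndarray, q: np.ndarray, c: float = 5.0, grid: int = 99) -> float:
    pts = [(0.0, 1.0), (1.0, 0.0)]                 # corner points anchoring the frontier
    for w in np.linspace(0.0, 1.0, grid):
        r = w * p + (1.0 - w) * q
        pts.append((np.exp(-c * kl(q, r)), np.exp(-c * kl(p, r))))
    pts.sort(key=lambda t: (t[0], -t[1]))          # frontier ordered by increasing x
    return float(sum((x1 - x0) * (y0 + y1) / 2.0   # trapezoidal area under the frontier
                     for (x0, y0), (x1, y1) in zip(pts, pts[1:])))

human = np.array([0.25, 0.25, 0.25, 0.25])         # toy, already-quantized histograms
model = np.array([0.40, 0.30, 0.20, 0.10])
print(mauve_sketch(human, model))                   # 1.0 only when the histograms coincide
```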

A Survey of Deep Learning for Mathematical Reasoning

1 code implementation20 Dec 2022 Pan Lu, Liang Qiu, Wenhao Yu, Sean Welleck, Kai-Wei Chang

Mathematical reasoning is a fundamental aspect of human intelligence and is applicable in various fields, including science, engineering, finance, and everyday life.

Deep Learning Math +2

Generating Sequences by Learning to Self-Correct

no code implementations31 Oct 2022 Sean Welleck, Ximing Lu, Peter West, Faeze Brahman, Tianxiao Shen, Daniel Khashabi, Yejin Choi

Sequence generation applications require satisfying semantic constraints, such as ensuring that programs are correct, using certain keywords, or avoiding undesirable content.

Language Modeling Language Modelling +1

Draft, Sketch, and Prove: Guiding Formal Theorem Provers with Informal Proofs

3 code implementations21 Oct 2022 Albert Q. Jiang, Sean Welleck, Jin Peng Zhou, Wenda Li, Jiacheng Liu, Mateja Jamnik, Timothée Lacroix, Yuhuai Wu, Guillaume Lample

In this work, we introduce Draft, Sketch, and Prove (DSP), a method that maps informal proofs to formal proof sketches, and uses the sketches to guide an automated prover by directing its search to easier sub-problems.

Ranked #3 on Automated Theorem Proving on miniF2F-valid (Pass@100 metric)

Automated Theorem Proving Language Modeling +1
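
A schematic sketch of the pipeline the abstract describes; `draft`, `to_formal_sketch`, `open_goals`, and `close_gap` are hypothetical stand-ins for an informal-proof LLM, an autoformalizer, a sketch parser, and an automated prover, not the paper's code.

```python
# Draft an informal proof, map it to a formal sketch, then close each gap.
from typing import Callable, List, Optional

def draft_sketch_prove(
    statement: str,
    draft: Callable[[str], str],                  # informal proof in natural language
    to_formal_sketch: Callable[[str, str], str],  # formal sketch with open sub-goals
    open_goals: Callable[[str], List[str]],       # extract the sketch's remaining holes
    close_gap: Callable[[str], Optional[str]],    # automated prover for one sub-goal
) -> Optional[str]:
    informal = draft(statement)
    sketch = to_formal_sketch(statement, informal)
    for goal in open_goals(sketch):
        step = close_gap(goal)                    # each hole is an easier sub-problem
        if step is None:
            return None                           # the attempt fails if any gap stays open
        sketch = sketch.replace(goal, step, 1)
    return sketch
```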

Rainier: Reinforced Knowledge Introspector for Commonsense Question Answering

1 code implementation6 Oct 2022 Jiacheng Liu, Skyler Hallinan, Ximing Lu, Pengfei He, Sean Welleck, Hannaneh Hajishirzi, Yejin Choi

Our work is the first to report that knowledge generated by models that are orders of magnitude smaller than GPT-3, even without direct supervision on the knowledge itself, can exceed the quality of commonsense knowledge elicited from GPT-3.

Question Answering Reinforcement Learning (RL)

NaturalProver: Grounded Mathematical Proof Generation with Language Models

1 code implementation25 May 2022 Sean Welleck, Jiacheng Liu, Ximing Lu, Hannaneh Hajishirzi, Yejin Choi

Theorem proving in natural mathematical language - the mixture of symbolic and natural language used by humans - plays a central role in mathematical advances and education, and tests aspects of reasoning that are core to intelligence.

Automated Theorem Proving Language Modeling +1

Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations

no code implementations24 May 2022 JaeHun Jung, Lianhui Qin, Sean Welleck, Faeze Brahman, Chandra Bhagavatula, Ronan Le Bras, Yejin Choi

Despite their impressive capabilities, large pre-trained language models (LMs) struggle with consistent reasoning; recently, prompting LMs to generate explanations that self-guide the inference has emerged as a promising direction to amend this.

COLD Decoding: Energy-based Constrained Text Generation with Langevin Dynamics

2 code implementations23 Feb 2022 Lianhui Qin, Sean Welleck, Daniel Khashabi, Yejin Choi

Many applications of text generation require incorporating different constraints to control the semantics or style of generated text.

counterfactual Counterfactual Reasoning +1

NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics

1 code implementation NAACL 2022 Ximing Lu, Sean Welleck, Peter West, Liwei Jiang, Jungo Kasai, Daniel Khashabi, Ronan Le Bras, Lianhui Qin, Youngjae Yu, Rowan Zellers, Noah A. Smith, Yejin Choi

To enable constrained generation, we build on NeuroLogic decoding (Lu et al., 2021), combining its flexibility in incorporating logical constraints with A*esque estimates of future constraint satisfaction.

Machine Translation Table-to-Text Generation
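
A simplified sketch (not the paper's implementation) of lookahead-scored constrained decoding: each candidate token is scored by its log-probability plus an estimate of future constraint satisfaction from a short greedy rollout. `step_logprobs` and `greedy_rollout` are hypothetical model hooks.

```python
# Score candidate tokens by likelihood plus an A*-style lookahead heuristic.
from typing import Callable, Dict, List

def pick_next_token(
    prefix: List[str],
    constraints: List[str],                                # keywords that must appear
    step_logprobs: Callable[[List[str]], Dict[str, float]],
    greedy_rollout: Callable[[List[str], int], List[str]],
    lookahead: int = 5,
    alpha: float = 1.0,
) -> str:
    scores = {}
    for token, logp in step_logprobs(prefix).items():
        future = greedy_rollout(prefix + [token], lookahead)
        satisfied = sum(c in prefix + [token] + future for c in constraints)
        # current likelihood + heuristic estimate of constraints the
        # continuation will end up satisfying
        scores[token] = logp + alpha * satisfied
    return max(scores, key=scores.get)
```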

Generated Knowledge Prompting for Commonsense Reasoning

1 code implementation ACL 2022 Jiacheng Liu, Alisa Liu, Ximing Lu, Sean Welleck, Peter West, Ronan Le Bras, Yejin Choi, Hannaneh Hajishirzi

It remains an open question whether incorporating external knowledge benefits commonsense reasoning while maintaining the flexibility of pretrained sequence models.

Language Modeling Language Modelling +1

Symbolic Brittleness in Sequence Models: on Systematic Generalization in Symbolic Mathematics

1 code implementation28 Sep 2021 Sean Welleck, Peter West, Jize Cao, Yejin Choi

Neural sequence models trained with maximum likelihood estimation have led to breakthroughs in many tasks, where success is defined by the gap between training and test performance.

Out-of-Distribution Generalization Systematic Generalization

Mode recovery in neural autoregressive sequence modeling

1 code implementation ACL (spnlp) 2021 Ilia Kulikov, Sean Welleck, Kyunghyun Cho

We propose to study these phenomena by investigating how the modes, or local maxima, of a distribution are maintained throughout the full learning chain of the ground-truth, empirical, learned and decoding-induced distributions, via the newly proposed mode recovery cost.

NaturalProofs: Mathematical Theorem Proving in Natural Language

1 code implementation24 Mar 2021 Sean Welleck, Jiacheng Liu, Ronan Le Bras, Hannaneh Hajishirzi, Yejin Choi, Kyunghyun Cho

Understanding and creating mathematics using natural mathematical language - the mixture of symbolic and natural language used by humans - is a challenging and important problem for driving progress in machine learning.

Automated Theorem Proving Domain Generalization +3

MAUVE: Measuring the Gap Between Neural Text and Human Text using Divergence Frontiers

5 code implementations NeurIPS 2021 Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, Zaid Harchaoui

As major progress is made in open-ended text generation, measuring how close machine-generated text is to human language remains a critical open problem.

Text Generation

MLE-guided parameter search for task loss minimization in neural sequence modeling

1 code implementation4 Jun 2020 Sean Welleck, Kyunghyun Cho

Typical approaches to directly optimizing the task loss such as policy gradient and minimum risk training are based around sampling in the sequence space to obtain candidate update directions that are scored based on the loss of a single sequence.

Machine Translation

Consistency of a Recurrent Language Model With Respect to Incomplete Decoding

1 code implementation EMNLP 2020 Sean Welleck, Ilia Kulikov, Jaedeok Kim, Richard Yuanzhe Pang, Kyunghyun Cho

Despite strong performance on a variety of tasks, neural sequence models trained with maximum likelihood have been shown to exhibit issues such as length bias and degenerate repetition.

Language Modeling Language Modelling

Don't Say That! Making Inconsistent Dialogue Unlikely with Unlikelihood Training

1 code implementation ACL 2020 Margaret Li, Stephen Roller, Ilia Kulikov, Sean Welleck, Y-Lan Boureau, Kyunghyun Cho, Jason Weston

Generative dialogue models currently suffer from a number of problems which standard maximum likelihood training does not address.

Neural Text Generation with Unlikelihood Training

6 code implementations ICLR 2020 Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, Jason Weston

Neural text generation is a key tool in natural language applications, but it is well known there are major problems at its core.

Blocking Text Generation

A Generalized Framework of Sequence Generation with Application to Undirected Sequence Models

1 code implementation29 May 2019 Elman Mansimov, Alex Wang, Sean Welleck, Kyunghyun Cho

We investigate this problem by proposing a generalized model of sequence generation that unifies decoding in directed and undirected models.

Machine Translation Natural Language Inference +3

Sequential Graph Dependency Parser

no code implementations RANLP 2019 Sean Welleck, Kyunghyun Cho

We propose a method for non-projective dependency parsing by incrementally predicting a set of edges.

Dependency Parsing

Non-Monotonic Sequential Text Generation

1 code implementation WS 2019 Sean Welleck, Kianté Brantley, Hal Daumé III, Kyunghyun Cho

Standard sequential generation methods assume a pre-specified generation order, such as text generation methods which generate words from left to right.

Imitation Learning Position +1

Loss Functions for Multiset Prediction

no code implementations ICLR 2018 Sean Welleck, Zixin Yao, Yu Gai, Jialin Mao, Zheng Zhang, Kyunghyun Cho

In this paper, we propose a novel multiset loss function by viewing this problem from the perspective of sequential decision making.

Decision Making Prediction +3
