Search Results for author: Hunter Lang

Found 17 papers, 7 papers with code

Learning to Decode Collaboratively with Multiple Language Models

1 code implementation • 6 Mar 2024 • Shannon Zejiang Shen, Hunter Lang, Bailin Wang, Yoon Kim, David Sontag

We propose a method to teach multiple large language models (LLMs) to collaborate by interleaving their generations at the token level.

Instruction Following
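The token-level interleaving can be pictured with a toy sketch. Everything here is a hypothetical stand-in: the two "models" are stub functions, and the switching rule is hand-coded, whereas the paper learns when to hand off between models.

```python
# Toy sketch of decoding by interleaving two models' generations at the
# token level. The "models" are stub functions and the switching rule is
# hand-coded; this only illustrates the shape of collaborative decoding.

def base_model(prefix):
    return "the"                 # stub general-purpose model

def expert_model(prefix):
    return f"tok{len(prefix)}"   # stub domain "expert"

def collaborative_decode(n_tokens):
    prefix = []
    for _ in range(n_tokens):
        # Hand-coded switch: defer to the expert on every other step.
        model = expert_model if len(prefix) % 2 == 1 else base_model
        prefix.append(model(prefix))
    return prefix

print(collaborative_decode(4))  # → ['the', 'tok1', 'the', 'tok3']
```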

Who Should Predict? Exact Algorithms For Learning to Defer to Humans

1 code implementation • 15 Jan 2023 • Hussein Mozannar, Hunter Lang, Dennis Wei, Prasanna Sattigeri, Subhro Das, David Sontag

We show that prior approaches can fail to find a human-AI system with low misclassification error even when there exists a linear classifier and rejector that have zero error (the realizable setting).

Training Subset Selection for Weak Supervision

1 code implementation • 6 Jun 2022 • Hunter Lang, Aravindan Vijayaraghavan, David Sontag

Subset selection applies to any label model and classifier and is very simple to plug in to existing weak supervision pipelines, requiring just a few lines of code.
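To illustrate how a subset-selection step can be dropped into an existing weak-supervision pipeline, here is a minimal confidence-cutoff variant; this criterion is a simplified stand-in, not the paper's exact selection rule.

```python
# Minimal sketch of plugging subset selection into a weak-supervision
# pipeline: keep only the examples the label model is most confident about,
# then train the end classifier on that subset. The confidence cutoff is a
# simplified stand-in for the paper's selection criterion.

def select_subset(examples, label_model_probs, keep_frac=0.5):
    """Keep the keep_frac most confident examples under the label model."""
    scored = sorted(zip(examples, label_model_probs),
                    key=lambda pair: max(pair[1]), reverse=True)
    k = int(len(scored) * keep_frac)
    return [(x, max(p)) for x, p in scored[:k]]

examples = ["a", "b", "c", "d"]
probs = [(0.55, 0.45), (0.99, 0.01), (0.7, 0.3), (0.9, 0.1)]
print(select_subset(examples, probs))  # → [('b', 0.99), ('d', 0.9)]
```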

Large Language Models are Few-Shot Clinical Information Extractors

no code implementations • 25 May 2022 • Monica Agrawal, Stefan Hegselmann, Hunter Lang, Yoon Kim, David Sontag

A long-running goal of the clinical NLP community is the extraction of important variables trapped in clinical notes.

Benchmarking • Coreference Resolution +4

Co-training Improves Prompt-based Learning for Large Language Models

1 code implementation • 2 Feb 2022 • Hunter Lang, Monica Agrawal, Yoon Kim, David Sontag

We demonstrate that co-training (Blum & Mitchell, 1998) can improve the performance of prompt-based learning by using unlabeled data.

Zero-Shot Learning
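A skeletal version of the co-training loop (Blum & Mitchell, 1998) looks like the following. The threshold "learners" and the tiny dataset are purely illustrative; the paper's two views come from prompted models rather than anything this simple.

```python
# Skeletal co-training loop in the spirit of Blum & Mitchell (1998): two
# learners trained on different "views" of the data exchange pseudo-labels
# on unlabeled examples. The threshold "learners" are purely illustrative.

def train(labeled):
    """Toy learner: predict 1 when the feature reaches the mean positive value."""
    pos = [x for x, y in labeled if y == 1]
    thresh = sum(pos) / len(pos) if pos else float("inf")
    return lambda x: 1 if x >= thresh else 0

def co_train(view_a, view_b, labels, unlabeled_idx, rounds=2):
    labeled_a = [(view_a[i], y) for i, y in enumerate(labels)]
    labeled_b = [(view_b[i], y) for i, y in enumerate(labels)]
    for _ in range(rounds):
        model_a, model_b = train(labeled_a), train(labeled_b)
        for i in unlabeled_idx:
            # Each model supplies pseudo-labels for the *other* view.
            labeled_b.append((view_b[i], model_a(view_a[i])))
            labeled_a.append((view_a[i], model_b(view_b[i])))
    return train(labeled_a), train(labeled_b)

model_a, model_b = co_train([0.1, 0.9, 0.8], [0.2, 0.8, 0.7], [0, 1], [2])
print(model_a(0.95), model_a(0.5))  # → 1 0
```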

Leveraging Time Irreversibility with Order-Contrastive Pre-training

no code implementations • 4 Nov 2021 • Monica Agrawal, Hunter Lang, Michael Offin, Lior Gazit, David Sontag

Label-scarce, high-dimensional domains such as healthcare present a challenge for modern machine learning techniques.

Self-Supervised Learning
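The pretraining idea can be illustrated with a toy pair-construction step: sample two consecutive windows from a timeline and ask whether they appear in their original or swapped order. The timeline and window size below are invented for illustration.

```python
import random

# Toy construction of an order-contrastive pretraining task: take two
# consecutive windows from a timeline and label whether they appear in
# original (1) or swapped (0) order. A model pretrained to predict this
# label must learn features sensitive to time's direction.

def make_pairs(sequence, window=2, seed=0):
    rng = random.Random(seed)
    pairs = []
    for i in range(0, len(sequence) - 2 * window + 1, window):
        a = sequence[i:i + window]
        b = sequence[i + window:i + 2 * window]
        if rng.random() < 0.5:
            pairs.append(((a, b), 1))  # original order
        else:
            pairs.append(((b, a), 0))  # swapped order
    return pairs

timeline = ["visit1", "visit2", "visit3", "visit4"]
for pair, label in make_pairs(timeline):
    print(pair, "label:", label)
```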

Combining Probabilistic Logic and Deep Learning for Self-Supervised Learning

no code implementations • 27 Jul 2021 • Hoifung Poon, Hai Wang, Hunter Lang

We first present deep probabilistic logic (DPL), which offers a unifying framework for task-specific self-supervision by composing probabilistic logic with deep learning.

Active Learning • Language Modelling +5

Beyond Perturbation Stability: LP Recovery Guarantees for MAP Inference on Noisy Stable Instances

no code implementations • 26 Feb 2021 • Hunter Lang, Aravind Reddy, David Sontag, Aravindan Vijayaraghavan

Several works have shown that perturbation stable instances of the MAP inference problem in Potts models can be solved exactly using a natural linear programming (LP) relaxation.

Graph cuts always find a global optimum for Potts models (with a catch)

no code implementations • 7 Nov 2020 • Hunter Lang, David Sontag, Aravindan Vijayaraghavan

On "real-world" instances, MAP assignments of small perturbations of the problem should be very similar to the MAP assignment(s) of the original problem instance.

Statistical Adaptive Stochastic Gradient Methods

1 code implementation • 25 Feb 2020 • Pengchuan Zhang, Hunter Lang, Qiang Liu, Lin Xiao

We propose a statistical adaptive procedure called SALSA for automatically scheduling the learning rate (step size) in stochastic gradient methods.

Scheduling
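The flavor of statistics-driven learning-rate scheduling can be sketched as follows. Comparing window means against a fixed tolerance is a simplified stand-in for SALSA's actual statistical test; the window size and tolerance are invented for illustration.

```python
from statistics import mean

# Sketch of statistics-driven learning-rate scheduling: keep the current
# step size while recent losses are still improving, and cut it once they
# plateau. The fixed-tolerance window comparison is a simplified stand-in
# for SALSA's actual statistical test.

def maybe_drop_lr(lr, losses, window=5, factor=0.1, tol=0.01):
    if len(losses) < 2 * window:
        return lr
    recent = mean(losses[-window:])
    previous = mean(losses[-2 * window:-window])
    if previous - recent < tol:  # no meaningful improvement: drop the rate
        return lr * factor
    return lr

declining = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]
plateaued = [0.56, 0.55, 0.56, 0.55, 0.56, 0.55, 0.56, 0.55, 0.56, 0.55]
print(maybe_drop_lr(1.0, declining))  # → 1.0 (still improving)
print(maybe_drop_lr(1.0, plateaued))  # → 0.1 (plateau detected)
```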

Statistical Adaptive Stochastic Optimization

no code implementations • 25 Sep 2019 • Pengchuan Zhang, Hunter Lang, Qiang Liu, Lin Xiao

We investigate statistical methods for automatically scheduling the learning rate (step size) in stochastic optimization.

Scheduling • Stochastic Optimization

Using Statistics to Automate Stochastic Optimization

no code implementations • NeurIPS 2019 • Hunter Lang, Pengchuan Zhang, Lin Xiao

Despite the development of numerous adaptive optimizers, tuning the learning rate of stochastic gradient methods remains a major roadblock to obtaining good practical performance in machine learning.

Stochastic Optimization

Block Stability for MAP Inference

no code implementations • 12 Oct 2018 • Hunter Lang, David Sontag, Aravindan Vijayaraghavan

The simplest stability condition assumes that the MAP solution does not change at all when some of the pairwise potentials are (adversarially) perturbed.

Optimality of Approximate Inference Algorithms on Stable Instances

no code implementations • 6 Nov 2017 • Hunter Lang, David Sontag, Aravindan Vijayaraghavan

Approximate algorithms for structured prediction problems, such as LP relaxations and the popular alpha-expansion algorithm (Boykov et al., 2001), typically far exceed their theoretical performance guarantees on real-world instances.

Structured Prediction
