Search Results for author: Jordan T. Ash

Found 14 papers, 7 papers with code

The Truth is in There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction

1 code implementation • 21 Dec 2023 • Pratyusha Sharma, Jordan T. Ash, Dipendra Misra

Transformer-based Large Language Models (LLMs) have become a fixture in modern machine learning.
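The paper's intervention, layer-selective rank reduction, replaces selected weight matrices with low-rank approximations. A minimal sketch of that operation via truncated SVD (the tensor shape, rank fraction, and function name are illustrative assumptions, not the authors' exact configuration):

    import torch

    def rank_reduce(weight: torch.Tensor, keep_frac: float = 0.05) -> torch.Tensor:
        # Replace `weight` with its best rank-k approximation via truncated SVD;
        # keep_frac is the fraction of singular values retained.
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
        k = max(1, int(keep_frac * S.numel()))
        return U[:, :k] @ torch.diag(S[:k]) @ Vh[:k, :]

    W = torch.randn(256, 512)   # stand-in for one transformer weight matrix
    W_low = rank_reduce(W)      # same shape, rank <= 0.05 * min(dims)

Applied selectively to carefully chosen layers, this kind of reduction is what the title refers to.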

Neural Active Learning on Heteroskedastic Distributions

1 code implementation • 2 Nov 2022 • Savya Khosla, Chew Kin Whye, Jordan T. Ash, Cyril Zhang, Kenji Kawaguchi, Alex Lamb

We demonstrate the catastrophic failure of standard neural active learning algorithms on heteroskedastic distributions and propose a fine-tuning-based approach to mitigate these failures.

Active Learning
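To make "heteroskedastic" concrete: the label-noise level varies across the input space, so uncertainty-seeking acquisition rules can fixate on irreducibly noisy regions. A toy construction of such a distribution (purely illustrative, not the paper's benchmark):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=(1000, 1))

    # Noise level depends on x: the right half of the domain is far noisier,
    # so extra labels there carry little signal despite high model uncertainty.
    noise_std = np.where(x[:, 0] > 0, 1.0, 0.05)
    y = np.sin(3 * x[:, 0]) + rng.normal(0, noise_std)

An acquisition rule that greedily queries the highest-uncertainty points would keep sampling x > 0, which illustrates the kind of failure the paper demonstrates and then mitigates.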

Eigen Memory Trees

1 code implementation • 25 Oct 2022 • Mark Rucker, Jordan T. Ash, John Langford, Paul Mineiro, Ida Momennejad

This work introduces the Eigen Memory Tree (EMT), a novel online memory model for sequential learning scenarios.
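A toy sketch of the eigenvector-routing idea suggested by the name: internal nodes split their stored memories along the leading principal component, and a query is routed down to a small leaf where its nearest memory is returned. This is a simplified illustration of the concept only, not a reimplementation of the paper's online EMT algorithm:

    import numpy as np

    def top_eigvec(X):
        # Leading principal component of the memories at this node.
        Xc = X - X.mean(axis=0)
        return np.linalg.svd(Xc, full_matrices=False)[2][0]

    def build(X, vals, leaf_size=8):
        if len(X) <= leaf_size:
            return {"memories": X, "values": vals}
        v = top_eigvec(X)
        proj = X @ v
        mask = proj <= np.median(proj)
        if mask.all() or not mask.any():   # degenerate split: stop here
            return {"memories": X, "values": vals}
        return {"v": v, "thresh": np.median(proj),
                "left": build(X[mask], vals[mask]),
                "right": build(X[~mask], vals[~mask])}

    def query(node, x):
        # Route x down the tree, then return the nearest memory's value.
        if "memories" in node:
            X, vals = node["memories"], node["values"]
            return vals[np.argmin(((X - x) ** 2).sum(axis=1))]
        side = "left" if x @ node["v"] <= node["thresh"] else "right"
        return query(node[side], x)

    rng = np.random.default_rng(0)
    X, vals = rng.normal(size=(200, 4)), rng.normal(size=200)
    tree = build(X, vals)
    print(query(tree, X[0]))   # the query's own stored value: it is its nearest memory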

Transformers Learn Shortcuts to Automata

no code implementations • 19 Oct 2022 • Bingbin Liu, Jordan T. Ash, Surbhi Goel, Akshay Krishnamurthy, Cyril Zhang

Algorithmic reasoning requires capabilities which are most naturally understood through recurrent models of computation, like the Turing machine.
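The "shortcut" in the title can be illustrated without a transformer: simulating T steps of an automaton looks inherently sequential, but because transition functions compose associatively, all T states can be obtained by a divide-and-conquer recursion whose composition depth is O(log T) rather than T. A small sketch of that idea, using a parity automaton as an illustrative choice:

    import numpy as np

    # Parity automaton: two states; symbol 0 keeps the state, symbol 1 flips it.
    TRANSITIONS = {0: np.array([0, 1]), 1: np.array([1, 0])}

    def compose(f, g):
        # (g after f), with transition maps as arrays state -> state: h[s] = g[f[s]].
        return g[f]

    def prefix_states(symbols, start=0):
        # Divide-and-conquer prefix scan over transition maps: O(log T)
        # composition depth (run serially here), versus a depth-T recurrence.
        def scan(ms):
            if len(ms) == 1:
                return ms
            mid = len(ms) // 2
            left, right = scan(ms[:mid]), scan(ms[mid:])
            return left + [compose(left[-1], r) for r in right]
        maps = scan([TRANSITIONS[s] for s in symbols])
        return [start] + [m[start] for m in maps]

    print(prefix_states([1, 0, 1, 1]))   # [0, 1, 1, 0, 1]

Shallow non-recurrent models like transformers can learn exactly this kind of low-depth shortcut in place of step-by-step recurrent simulation.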

Anti-Concentrated Confidence Bonuses for Scalable Exploration

no code implementations • ICLR 2022 • Jordan T. Ash, Cyril Zhang, Surbhi Goel, Akshay Krishnamurthy, Sham Kakade

Intrinsic rewards play a central role in handling the exploration-exploitation trade-off when designing sequential decision-making algorithms, in both foundational theory and state-of-the-art deep reinforcement learning.

Decision Making • Reinforcement Learning +1
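The confidence bonus the paper seeks to scale up is the classical elliptical bonus from linear bandits, b(x) = sqrt(x^T (X^T X + λI)^{-1} x), which is large for directions of feature space visited rarely. A minimal sketch of that baseline quantity (the anti-concentration machinery that makes it scalable is the paper's contribution and is not reproduced here):

    import numpy as np

    def elliptical_bonus(X_seen, x, lam=1.0):
        # Large when x points in a direction the visited features rarely cover.
        cov = X_seen.T @ X_seen + lam * np.eye(x.shape[0])
        return float(np.sqrt(x @ np.linalg.solve(cov, x)))

    rng = np.random.default_rng(0)
    X_seen = rng.normal(size=(500, 2)) * np.array([1.0, 0.01])  # dim 0 well explored
    print(elliptical_bonus(X_seen, np.array([1.0, 0.0])))       # small bonus
    print(elliptical_bonus(X_seen, np.array([0.0, 1.0])))       # large bonus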

Gone Fishing: Neural Active Learning with Fisher Embeddings

1 code implementation • NeurIPS 2021 • Jordan T. Ash, Surbhi Goel, Akshay Krishnamurthy, Sham Kakade

There is an increasing need for effective active learning algorithms that are compatible with deep neural networks.

Active Learning
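The Fisher embeddings in the title come from last-layer gradients of a softmax classifier. A hedged sketch of the per-example Fisher information under the model's own predictive distribution (the batch-selection objective built on top of these matrices is the paper's contribution and omitted here):

    import numpy as np

    def fisher_embedding(h, p):
        # Per-example last-layer Fisher: I(x) = E_{y~p}[g_y g_y^T], where
        # g_y = (p - e_y) ⊗ h is the flattened gradient of the log-likelihood
        # with respect to the final-layer weights; h is the penultimate feature.
        K, d = len(p), len(h)
        I = np.zeros((K * d, K * d))
        for y in range(K):
            g = np.kron(p - np.eye(K)[y], h)
            I += p[y] * np.outer(g, g)
        return I

    h = np.array([0.5, -1.2, 0.3])       # made-up penultimate features
    p = np.array([0.7, 0.2, 0.1])        # made-up predicted probabilities
    print(fisher_embedding(h, p).shape)  # (9, 9)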

On Warm-Starting Neural Network Training

1 code implementation • NeurIPS 2020 • Jordan T. Ash, Ryan P. Adams

When a model is retrained each time new data arrive, we would like each model in the resulting sequence to be performant and to take advantage of all the data available up to that point.

Experimental Design
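The remedy this paper proposes for the warm-starting generalization gap is the shrink-and-perturb trick: before resuming training on the grown dataset, shrink every weight toward zero and add a little noise. A minimal sketch (the shrink factor and noise scale below are illustrative, not tuned values from the paper):

    import torch

    @torch.no_grad()
    def shrink_perturb(model, shrink=0.5, sigma=0.01):
        # theta <- shrink * theta + noise, applied once before continuing
        # training on old + new data.
        for p in model.parameters():
            p.mul_(shrink).add_(torch.randn_like(p) * sigma)

    model = torch.nn.Linear(10, 2)   # stand-in for a warm-started network
    shrink_perturb(model)
    # ...then train on the full dataset as usual.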

On The Difficulty of Warm-Starting Neural Network Training

no code implementations • 25 Sep 2019 • Jordan T. Ash, Ryan P. Adams

When a model is retrained each time new data arrive, we would like each model in the resulting sequence to be performant and to take advantage of all the data available up to that point.

Experimental Design

Unsupervised Domain Adaptation Using Approximate Label Matching

no code implementations • 16 Feb 2016 • Jordan T. Ash, Robert E. Schapire, Barbara E. Engelhardt

Domain adaptation addresses the problem that arises when training data are generated by a so-called source distribution, but test data are generated by a significantly different target distribution.

Unsupervised Domain Adaptation
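The approximate-label-matching idea assigns labels to target-domain points and trains against them. The sketch below is a generic confidence-thresholded self-training loop in that spirit; it is a plain pseudo-labeling baseline, not the paper's exact ALM procedure:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def self_train(Xs, ys, Xt, conf_thresh=0.9, rounds=5):
        # Fit on source, adopt confident target predictions as approximate
        # labels, and refit on the union of real and approximate labels.
        clf = LogisticRegression().fit(Xs, ys)
        for _ in range(rounds):
            probs = clf.predict_proba(Xt)
            keep = probs.max(axis=1) >= conf_thresh
            if not keep.any():
                break
            X_aug = np.vstack([Xs, Xt[keep]])
            y_aug = np.concatenate([ys, probs[keep].argmax(axis=1)])
            clf = LogisticRegression().fit(X_aug, y_aug)
        return clf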
