Search Results for author: Nathan Beck

Found 7 papers, 4 papers with code

STENCIL: Submodular Mutual Information Based Weak Supervision for Cold-Start Active Learning

1 code implementation · 21 Feb 2024 · Nathan Beck, Adithya Iyer, Rishabh Iyer

As supervised fine-tuning of pre-trained models within NLP applications increases in popularity, larger corpora of annotated data are required, especially with increasing parameter counts in large language models.

Active Learning · text-classification · +1

Theoretical Analysis of Submodular Information Measures for Targeted Data Subset Selection

no code implementations · 21 Feb 2024 · Nathan Beck, Truong Pham, Rishabh Iyer

With the increasing volume of data being used across machine learning tasks, the capability to target specific subsets of data becomes more important.

Beyond Active Learning: Leveraging the Full Potential of Human Interaction via Auto-Labeling, Human Correction, and Human Verification

no code implementations · 2 Jun 2023 · Nathan Beck, KrishnaTeja Killamsetty, Suraj Kothawade, Rishabh Iyer

Active Learning (AL) is a human-in-the-loop framework to interactively and adaptively label data instances, thereby enabling significant gains in model performance compared to random sampling.

Active Learning
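The human-in-the-loop AL framework described in the abstract above can be illustrated with a minimal pool-based loop using uncertainty sampling. This is a generic sketch, not the method from any paper listed here; the toy nearest-centroid classifier and synthetic data are assumptions for illustration only.

```python
import numpy as np

# Hypothetical sketch of pool-based active learning with uncertainty
# sampling; a toy nearest-centroid classifier stands in for the model.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # simple linear concept

# seed the labeled set with a few points from each class
labeled = [int(i) for i in np.where(y == 0)[0][:5]]
labeled += [int(i) for i in np.where(y == 1)[0][:5]]
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(5):                          # five labeling rounds
    # "fit": per-class centroids on the currently labeled set
    c0 = X[[i for i in labeled if y[i] == 0]].mean(axis=0)
    c1 = X[[i for i in labeled if y[i] == 1]].mean(axis=0)
    # uncertainty: points nearly equidistant from both centroids
    d0 = np.linalg.norm(X[pool] - c0, axis=1)
    d1 = np.linalg.norm(X[pool] - c1, axis=1)
    margin = np.abs(d0 - d1)
    queried = np.argsort(margin)[:10]       # 10 most ambiguous pool points
    for q in sorted(queried, reverse=True):
        labeled.append(pool.pop(q))         # "ask the annotator" for labels

print(len(labeled), len(pool))              # 60 labeled, 140 remaining
```

Compared to random sampling, each round spends the labeling budget on the points the current model is least sure about, which is the source of the efficiency gains these papers study.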

STREAMLINE: Streaming Active Learning for Realistic Multi-Distributional Settings

1 code implementation · 18 May 2023 · Nathan Beck, Suraj Kothawade, Pradeep Shenoy, Rishabh Iyer

However, learning unbiased models depends on building a dataset that is representative of a diverse range of realistic scenarios for a given task.

Active Learning · Autonomous Vehicles · +3

Effective Evaluation of Deep Active Learning on Image Classification Tasks

no code implementations · 16 Jun 2021 · Nathan Beck, Durga Sivasubramanian, Apurva Dani, Ganesh Ramakrishnan, Rishabh Iyer

Issues in the current literature include: (1) sometimes contradictory observations on the performance of different AL algorithms; (2) unintended exclusion of important generalization approaches such as data augmentation and SGD for optimization; (3) a lack of study of evaluation facets like the labeling efficiency of AL; and (4) little or no clarity on the scenarios in which AL outperforms random sampling (RS).

Active Learning · Benchmarking · +3
