Search Results for author: Kevin Miller

Found 13 papers, 5 papers with code

Dirichlet Active Learning

no code implementations • 9 Nov 2023 • Kevin Miller, Ryan Murray

This work introduces Dirichlet Active Learning (DiAL), a Bayesian-inspired approach to the design of active learning algorithms.

Active Learning · Graph Learning

Novel Batch Active Learning Approach and Its Application to Synthetic Aperture Radar Datasets

1 code implementation • 19 Jul 2023 • James Chapman, Bohan Chen, Zheng Tan, Jeff Calder, Kevin Miller, Andrea L. Bertozzi

Active learning improves the performance of machine learning methods by judiciously selecting a limited number of unlabeled data points to query for labels, with the aim of maximally improving the underlying classifier's performance.

Active Learning · Graph Learning +1
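The pool-based selection loop that this abstract describes can be sketched as follows. This is a minimal generic example, not the paper's batch method: it uses a hypothetical synthetic two-blob dataset, scikit-learn's LogisticRegression as the classifier, and a simple least-confidence acquisition rule.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic 2-class pool: two well-separated Gaussian blobs (assumed data).
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

labeled = [0, 100]                 # one seed label per class
pool = [i for i in range(200) if i not in labeled]

for _ in range(5):                 # query 5 labels, one per round
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    probs = clf.predict_proba(X[pool])
    # Least-confidence acquisition: query the point the model is least sure of.
    query = pool[int(np.argmin(probs.max(axis=1)))]
    labeled.append(query)
    pool.remove(query)

acc = clf.score(X, y)
print(len(labeled), round(acc, 2))
```

Each round retrains on the growing labeled set and spends the label budget where the current classifier is most uncertain; batch variants instead select several diverse points per round.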

Graph-based Active Learning for Surface Water and Sediment Detection in Multispectral Images

no code implementations • 17 Jun 2023 • Bohan Chen, Kevin Miller, Andrea L. Bertozzi, Jon Schwenk

We develop a graph active learning pipeline (GAP) to detect surface water and in-river sediment pixels in satellite images.

Active Learning

A Unified Theory of Dual-Process Control

no code implementations • 13 Nov 2022 • Ted Moskovitz, Kevin Miller, Maneesh Sahani, Matthew M. Botvinick

We apply a single model based on this observation to findings from research on executive control, reward-based learning, and judgment and decision making, showing that seemingly diverse dual-process phenomena can be understood as domain-specific consequences of a single underlying set of computational principles.

Decision Making

Poisson Reweighted Laplacian Uncertainty Sampling for Graph-based Active Learning

1 code implementation • 27 Oct 2022 • Kevin Miller, Jeff Calder

We show that uncertainty sampling is sufficient to achieve exploration versus exploitation in graph-based active learning, as long as the measure of uncertainty properly aligns with the underlying model and the model properly reflects uncertainty in unexplored regions.

Active Learning · Image Classification

Replay and compositional computation

no code implementations • 15 Sep 2022 • Zeb Kurth-Nelson, Timothy Behrens, Greg Wayne, Kevin Miller, Lennart Luettgau, Ray Dolan, Yunzhe Liu, Philipp Schwartenbeck

Replay in the brain has been viewed as rehearsal, or, more recently, as sampling from a transition model.

Graph-based Active Learning for Semi-supervised Classification of SAR Data

1 code implementation • 31 Mar 2022 • Kevin Miller, John Mauro, Jason Setiadi, Xoaquin Baca, Zhan Shi, Jeff Calder, Andrea L. Bertozzi

We use a Convolutional Neural Network Variational Autoencoder (CNNVAE) to embed SAR data into a feature space, and then construct a similarity graph from the embedded data and apply graph-based semi-supervised learning techniques.

Active Learning · graph construction +1
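The "construct a similarity graph from the embedded data" step can be sketched with a standard kNN graph and self-tuned Gaussian weights. This is a generic construction under assumed parameters, with random vectors standing in for the paper's CNNVAE embeddings of SAR data.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
Z = rng.normal(size=(50, 8))       # stand-in for CNNVAE embeddings (assumed)

k = 5
nn = NearestNeighbors(n_neighbors=k + 1).fit(Z)
dist, idx = nn.kneighbors(Z)       # column 0 is each point itself

# Gaussian similarity weights, self-tuned by each point's k-th neighbor distance.
sigma = dist[:, -1]
n = Z.shape[0]
W = np.zeros((n, n))
for i in range(n):
    for j, d in zip(idx[i, 1:], dist[i, 1:]):
        w = np.exp(-d**2 / (sigma[i] * sigma[j]))
        W[i, j] = max(W[i, j], w)
        W[j, i] = W[i, j]          # symmetrize the kNN relation
print(W.shape, int((W > 0).sum()))
```

The resulting sparse symmetric weight matrix is what graph-based semi-supervised methods (e.g. Laplace learning) then operate on.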

Efficient and Reliable Overlay Networks for Decentralized Federated Learning

no code implementations • 12 Dec 2021 • Yifan Hua, Kevin Miller, Andrea L. Bertozzi, Chen Qian, Bao Wang

As such, our proposed overlay networks accelerate convergence, improve generalization, and enhance robustness to client failures in DFL with theoretical guarantees.

Federated Learning · Generalization Bounds +2
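The serverless averaging that decentralized federated learning relies on can be sketched with gossip mixing over a fixed overlay. This toy example is not the paper's overlay construction: it uses a simple ring topology with a hand-built doubly stochastic mixing matrix, and scalar parameters in place of models.

```python
import numpy as np

# 8 clients on a ring overlay; each holds one scalar "model parameter".
n = 8
x = np.arange(n, dtype=float)      # divergent initial local parameters
target = x.mean()                  # consensus value they should agree on

# Doubly stochastic mixing matrix for the ring: keep half your own value,
# take a quarter from each ring neighbor.
Wmix = np.zeros((n, n))
for i in range(n):
    Wmix[i, i] = 0.5
    Wmix[i, (i - 1) % n] = Wmix[i, (i + 1) % n] = 0.25

for _ in range(200):               # gossip rounds, no central server
    x = Wmix @ x

print(np.round(x, 3), round(target, 3))
```

All clients converge to the average at a rate set by the mixing matrix's spectral gap, which is why better-connected overlays (the paper's concern) accelerate convergence.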

Model-Change Active Learning in Graph-Based Semi-Supervised Learning

2 code implementations • 14 Oct 2021 • Kevin Miller, Andrea L. Bertozzi

Active learning in semi-supervised classification involves introducing additional labels for unlabeled data to improve the accuracy of the underlying classifier.

Active Learning

Posterior Consistency of Semi-Supervised Regression on Graphs

no code implementations • 25 Jul 2020 • Andrea L. Bertozzi, Bamdad Hosseini, Hao Li, Kevin Miller, Andrew M. Stuart

Graph-based semi-supervised regression (SSR) is the problem of estimating the value of a function on a weighted graph from its values (labels) on a small subset of the vertices.

Clustering · regression
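The SSR problem stated in the abstract can be sketched in Bayesian form: a Gaussian prior whose precision is built from the graph Laplacian, conditioned on noisy labels at a few vertices. This is a minimal illustration on a toy cycle graph; the noise and regularization parameters are assumed, not taken from the paper.

```python
import numpy as np

# Cycle graph on 8 nodes; noisy real-valued labels observed at 3 vertices.
n = 8
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0
L = np.diag(W.sum(1)) - W          # graph Laplacian

obs = [0, 3, 6]
y = np.array([1.0, -0.5, 0.3])
gamma2 = 0.01                      # label-noise variance (assumed)
tau2 = 1.0                         # prior regularization (assumed)

# Posterior mean for prior N(0, (L + tau2*I)^{-1}) with noisy point observations,
# i.e. the minimizer of  u^T (L + tau2*I) u + (1/gamma2) * sum_i (u_i - y_i)^2.
P = np.zeros((len(obs), n))
P[np.arange(len(obs)), obs] = 1.0
A = L + tau2 * np.eye(n) + P.T @ P / gamma2
u = np.linalg.solve(A, P.T @ y / gamma2)
print(np.round(u, 2))
```

The posterior mean nearly interpolates the labels at observed vertices and smooths between them along the graph; posterior consistency concerns how such estimates behave as labels accumulate or noise shrinks.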

Efficient Graph-Based Active Learning with Probit Likelihood via Gaussian Approximations

no code implementations • 21 Jul 2020 • Kevin Miller, Hao Li, Andrea L. Bertozzi

We present a novel adaptation of active learning to graph-based semi-supervised learning (SSL) under non-Gaussian Bayesian models.

Active Learning

Forward Thinking: Building Deep Random Forests

2 code implementations • 20 May 2017 • Kevin Miller, Chris Hettinger, Jeffrey Humpherys, Tyler Jarvis, David Kartchner

We present a general framework called forward thinking for deep learning that generalizes the architectural flexibility and sophistication of deep neural networks while also allowing for (i) different types of learning functions in the network, other than neurons, and (ii) the ability to adaptively deepen the network as needed to improve results.
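The "adaptively deepen as needed" idea can be sketched as greedy layer-wise training with random forests as the learning functions. This is a loose illustration, not the paper's implementation: each new layer is a scikit-learn forest fit on the raw inputs augmented with the frozen previous layer's class probabilities, on an assumed two-moons dataset.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier

X, y = make_moons(600, noise=0.25, random_state=0)
Xtr, Xte, ytr, yte = X[:400], X[400:], y[:400], y[400:]

# Greedy layer-wise training: fit a layer, freeze it, and feed its outputs
# (alongside the raw inputs) to the next layer.
feats_tr, feats_te = Xtr, Xte
for i in range(3):
    rf = RandomForestClassifier(n_estimators=50, random_state=i).fit(feats_tr, ytr)
    acc = rf.score(feats_te, yte)
    feats_tr = np.hstack([Xtr, rf.predict_proba(feats_tr)])
    feats_te = np.hstack([Xte, rf.predict_proba(feats_te)])
    print(f"layer {i}: test accuracy {acc:.2f}")
```

Because each layer is trained to completion before the next is added, depth can be grown one layer at a time and stopped once held-out accuracy plateaus.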
