Search Results for author: Kevin Miller

Found 7 papers, 3 papers with code

Graph-based Active Learning for Semi-supervised Classification of SAR Data

1 code implementation • 31 Mar 2022 • Kevin Miller, John Mauro, Jason Setiadi, Xoaquin Baca, Zhan Shi, Jeff Calder, Andrea L. Bertozzi

We use a Convolutional Neural Network Variational Autoencoder (CNNVAE) to embed SAR data into a feature space, and then construct a similarity graph from the embedded data and apply graph-based semi-supervised learning techniques.

Active Learning • graph construction • +1
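
A minimal sketch of the pipeline this abstract describes, assuming pre-computed feature embeddings (standing in for the CNNVAE), a k-nearest-neighbour Gaussian-weight similarity graph, and harmonic (Laplace-learning) label propagation; the graph construction and all parameters here are illustrative, not the authors' exact choices.

```python
# Illustrative sketch: k-NN Gaussian-weight similarity graph + harmonic
# (Laplace-learning) propagation on pre-computed embeddings.
import numpy as np
from scipy.spatial.distance import cdist

def knn_graph(X, k=10):
    """Symmetric k-NN graph with Gaussian weights and a local length scale."""
    D = cdist(X, X)
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D[i])[1:k + 1]           # k nearest neighbours, skip self
        s = D[i, idx].mean() + 1e-12              # local scale (assumption)
        W[i, idx] = np.exp(-D[i, idx] ** 2 / (2 * s ** 2))
    return np.maximum(W, W.T)                     # symmetrise

def harmonic_ssl(W, labeled_idx, y_labeled, n_classes):
    """Propagate one-hot labels by solving L_uu U = -L_ul Y on the unlabeled set."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W                # unnormalised graph Laplacian
    unlabeled_idx = np.setdiff1d(np.arange(n), labeled_idx)
    Y = np.eye(n_classes)[y_labeled]              # one-hot labels
    L_uu = L[np.ix_(unlabeled_idx, unlabeled_idx)]
    L_ul = L[np.ix_(unlabeled_idx, labeled_idx)]
    U = np.linalg.solve(L_uu + 1e-8 * np.eye(len(unlabeled_idx)), -L_ul @ Y)
    pred = np.empty(n, dtype=int)
    pred[labeled_idx] = y_labeled
    pred[unlabeled_idx] = U.argmax(axis=1)
    return pred

# toy usage: random blobs stand in for CNNVAE embeddings of SAR chips
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
y = np.array([0] * 50 + [1] * 50)
labeled = np.array([0, 1, 50, 51])
pred = harmonic_ssl(knn_graph(X, k=10), labeled, y[labeled], n_classes=2)
print("accuracy:", (pred == y).mean())
```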

Efficient and Reliable Overlay Networks for Decentralized Federated Learning

no code implementations • 12 Dec 2021 • Yifan Hua, Kevin Miller, Andrea L. Bertozzi, Chen Qian, Bao Wang

As such, our proposed overlay networks accelerate convergence, improve generalization, and enhance robustness to client failures in DFL, with theoretical guarantees.

Federated Learning • Generalization Bounds • +2
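
The abstract states results rather than the construction, but the following toy sketch shows the general decentralized-FL pattern it refers to: each client takes a local gradient step and then averages with its neighbours on a sparse overlay graph. The ring overlay and all parameters below are illustrative assumptions, not the paper's overlay design.

```python
# Toy sketch of decentralized federated averaging over a fixed overlay graph.
import numpy as np

def ring_overlay(n_clients):
    """Doubly stochastic mixing matrix for a ring overlay (self + two neighbours)."""
    W = np.zeros((n_clients, n_clients))
    for i in range(n_clients):
        W[i, i] = 1 / 3
        W[i, (i - 1) % n_clients] = 1 / 3
        W[i, (i + 1) % n_clients] = 1 / 3
    return W

def decentralized_step(models, grads, W, lr=0.1):
    """One round: local gradient step, then averaging with overlay neighbours."""
    updated = models - lr * grads        # local updates, shape (n_clients, dim)
    return W @ updated                   # neighbour averaging via mixing matrix

# toy usage: each client minimises ||x - target_i||^2
rng = np.random.default_rng(0)
n_clients, dim = 8, 4
targets = rng.normal(size=(n_clients, dim))
models = np.zeros((n_clients, dim))
W = ring_overlay(n_clients)
for _ in range(200):
    grads = 2 * (models - targets)
    models = decentralized_step(models, grads, W)
print("consensus distance:", np.linalg.norm(models - models.mean(axis=0)))
```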

Model-Change Active Learning in Graph-Based Semi-Supervised Learning

2 code implementations • 14 Oct 2021 • Kevin Miller, Andrea L. Bertozzi

Active learning in semi-supervised classification involves introducing additional labels for unlabelled data to improve the accuracy of the underlying classifier.

Active Learning
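
A generic sketch of the active-learning loop described in the abstract, using smallest-margin (uncertainty) sampling on a toy stand-in classifier as the acquisition rule; the paper's model-change criterion is a different, more refined acquisition function and is not implemented here.

```python
# Generic active-learning loop: query the most ambiguous unlabeled point,
# reveal its label, refit, repeat. All components are illustrative.
import numpy as np

def toy_predict_proba(X, labeled, y_labeled, n_classes, tau=1.0):
    """Toy classifier: softmax over negative distances to labeled class means."""
    means = np.vstack([X[np.array(labeled)[y_labeled == c]].mean(axis=0)
                       for c in range(n_classes)])
    logits = -np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2) / tau
    logits -= logits.max(axis=1, keepdims=True)
    P = np.exp(logits)
    return P / P.sum(axis=1, keepdims=True)

def smallest_margin_query(P, labeled):
    """Return the index with the smallest gap between top two class probabilities."""
    sorted_P = np.sort(P, axis=1)
    margins = sorted_P[:, -1] - sorted_P[:, -2]
    margins[list(labeled)] = np.inf              # never re-query labeled points
    return int(np.argmin(margins))

# toy usage: two Gaussian blobs, one initial label per class, five queries
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(4, 1, (40, 2))])
y = np.array([0] * 40 + [1] * 40)
labeled = [0, 40]
for _ in range(5):
    P = toy_predict_proba(X, labeled, y[labeled], n_classes=2)
    labeled.append(smallest_margin_query(P, labeled))   # oracle reveals the label
print("queried indices:", labeled[2:])
```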

Posterior Consistency of Semi-Supervised Regression on Graphs

no code implementations • 25 Jul 2020 • Andrea L. Bertozzi, Bamdad Hosseini, Hao Li, Kevin Miller, Andrew M. Stuart

Graph-based semi-supervised regression (SSR) is the problem of estimating the value of a function on a weighted graph from its values (labels) on a small subset of the vertices.
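
A minimal sketch of graph-based SSR as defined in this abstract, using Laplacian-regularised least squares on a toy path graph: minimise the squared misfit on labelled vertices plus tau * u^T L u over the whole graph. The graph, regularisation weight, and labels are illustrative, not the paper's setup.

```python
# Laplacian-regularised least squares as a stand-in for graph-based SSR:
#   min_u  sum_{j labelled} (u_j - y_j)^2 + tau * u^T L u
import numpy as np

def laplacian_ssr(W, labeled_idx, y_labeled, tau=0.1):
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W                 # unnormalised graph Laplacian
    B = np.zeros((n, n))
    B[labeled_idx, labeled_idx] = 1.0              # picks out labelled vertices
    b = np.zeros(n)
    b[labeled_idx] = y_labeled
    return np.linalg.solve(B + tau * L, b)         # normal equations of the objective

# toy usage: path graph, smooth ground truth, labels on 5 vertices
n = 50
W = np.zeros((n, n))
idx = np.arange(n - 1)
W[idx, idx + 1] = W[idx + 1, idx] = 1.0            # path-graph edges
truth = np.sin(np.linspace(0, np.pi, n))
labeled = np.array([0, 12, 25, 37, 49])
u = laplacian_ssr(W, labeled, truth[labeled], tau=0.05)
print("max error:", np.abs(u - truth).max())
```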

Efficient Graph-Based Active Learning with Probit Likelihood via Gaussian Approximations

no code implementations • 21 Jul 2020 • Kevin Miller, Hao Li, Andrea L. Bertozzi

We present a novel adaptation of active learning to graph-based semi-supervised learning (SSL) under non-Gaussian Bayesian models.

Active Learning
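
A hedged sketch of the idea the abstract refers to: approximate a non-Gaussian (probit-likelihood) posterior over a Gaussian graph prior by a Gaussian, via a Laplace approximation at the MAP estimate. The prior covariance, probit scale gamma, and Newton solver below are illustrative assumptions, not the paper's exact construction.

```python
# Laplace (Gaussian) approximation to a probit-likelihood posterior:
#   p(u | y) ∝ N(u; 0, C) * Π_j Φ(y_j u_j / γ),  j in labeled_idx, y_j ∈ {±1}
import numpy as np
from scipy.stats import norm

def laplace_approx_probit(C, labeled_idx, y_pm1, gamma=0.1, n_newton=20):
    """Newton iteration for the MAP, then covariance = inverse Hessian at the MAP."""
    n = C.shape[0]
    Cinv = np.linalg.inv(C)
    u = np.zeros(n)
    for _ in range(n_newton):
        z = y_pm1 * u[labeled_idx] / gamma
        ratio = norm.pdf(z) / np.clip(norm.cdf(z), 1e-12, None)
        grad = Cinv @ u                                   # gradient of -log posterior
        grad[labeled_idx] -= y_pm1 * ratio / gamma
        hess = Cinv.copy()                                # Hessian of -log posterior
        hess[labeled_idx, labeled_idx] += ratio * (z + ratio) / gamma ** 2
        u = u - np.linalg.solve(hess, grad)               # Newton step
    return u, np.linalg.inv(hess)                         # MAP and approx. covariance

# toy usage: Gaussian prior from a path-graph Laplacian, three ±1 labels
n = 30
W = np.zeros((n, n))
i = np.arange(n - 1)
W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W
C = np.linalg.inv(L + 0.1 * np.eye(n))                    # illustrative prior covariance
u_map, cov = laplace_approx_probit(C, np.array([2, 15, 27]), np.array([1.0, -1.0, 1.0]))
print("sign of MAP at labeled nodes:", np.sign(u_map[[2, 15, 27]]))
```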

Forward Thinking: Building Deep Random Forests

2 code implementations • 20 May 2017 • Kevin Miller, Chris Hettinger, Jeffrey Humpherys, Tyler Jarvis, David Kartchner

We present a general framework called forward thinking for deep learning that generalizes the architectural flexibility and sophistication of deep neural networks while also allowing for (i) different types of learning functions in the network, other than neurons, and (ii) the ability to adaptively deepen the network as needed to improve results.
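
An illustrative reading of the layer-wise idea in the abstract: train one layer of learners at a time (scikit-learn random forests here), pass each layer's probability outputs forward as extra features, and keep deepening while held-out accuracy improves. The per-layer learners, feature concatenation, and stopping rule are assumptions for the sketch, not the authors' exact architecture.

```python
# Layer-wise "forward thinking"-style sketch with random forests as the learners.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

def forward_thinking_forest(X_tr, y_tr, X_val, y_val, max_layers=5, n_trees=100):
    layers, best_acc = [], 0.0
    F_tr, F_val = X_tr, X_val
    for depth in range(max_layers):
        layer = RandomForestClassifier(n_estimators=n_trees, random_state=depth)
        layer.fit(F_tr, y_tr)
        acc = layer.score(F_val, y_val)
        if acc <= best_acc:                                # stop deepening when no gain
            break
        best_acc = acc
        layers.append(layer)
        # feed this layer's probability outputs forward as features for the next layer
        F_tr = np.hstack([F_tr, layer.predict_proba(F_tr)])
        F_val = np.hstack([F_val, layer.predict_proba(F_val)])
    return layers, best_acc

# toy usage on a synthetic classification problem
X, y = make_classification(n_samples=600, n_features=20, n_informative=8, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
layers, acc = forward_thinking_forest(X_tr, y_tr, X_val, y_val)
print(f"{len(layers)} layers, validation accuracy {acc:.3f}")
```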
