Search Results for author: Nathan Wycoff

Found 9 papers, 3 papers with code

Voronoi Candidates for Bayesian Optimization

1 code implementation · 7 Feb 2024 · Nathan Wycoff, John W. Smith, Annie S. Booth, Robert B. Gramacy

Bayesian optimization (BO) offers an elegant approach for efficiently optimizing black-box functions.

Bayesian Optimization · Gaussian Processes

Surrogate Active Subspaces for Jump-Discontinuous Functions

no code implementations · 17 Oct 2023 · Nathan Wycoff

However, given that active subspaces are defined by way of gradients, it is not clear what quantity is being estimated when this methodology is applied to a discontinuous simulator.
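For context, the active subspace of a function f is built from the matrix C = E[∇f(x) ∇f(x)ᵀ], whose leading eigenvectors point along the directions of greatest average variation; the paper asks what this gradient-based object means when f has jumps. Below is a minimal Monte Carlo sketch of the standard construction for a smooth toy function (the function and sample sizes are illustrative only, not from the paper):

```python
import numpy as np

def grad_f(x):
    # Gradient of a smooth toy function f(x) = sin(3*x[0]) + 0.1*x[1]**2
    return np.array([3.0 * np.cos(3.0 * x[0]), 0.2 * x[1]])

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(2000, 2))   # Monte Carlo samples over the input domain

# C = E[grad f grad f^T], estimated by averaging gradient outer products
grads = np.array([grad_f(x) for x in X])
C = grads.T @ grads / len(X)

# Eigenvectors of C, ordered by eigenvalue, give the active directions
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
print("eigenvalues:", eigvals[order])
print("leading active direction:", eigvecs[:, order[0]])
```

For a jump-discontinuous simulator, the gradient inside the expectation is undefined on the discontinuity set, which is precisely the ambiguity the paper studies.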

Sparse Bayesian Lasso via a Variable-Coefficient $\ell_1$ Penalty

no code implementations · 9 Nov 2022 · Nathan Wycoff, Ali Arab, Katharine M. Donato, Lisa O. Singh

Modern statistical learning algorithms are capable of amazing flexibility, but struggle with interpretability.

Regression · Uncertainty Quantification

Triangulation candidates for Bayesian optimization

1 code implementation · 14 Dec 2021 · Robert B. Gramacy, Annie Sauer, Nathan Wycoff

Bayesian optimization involves "inner optimization" over a new-data acquisition criterion which is non-convex/highly multi-modal, may be non-differentiable, or may otherwise thwart local numerical optimizers.

Bayesian Optimization
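A common way to avoid that troublesome inner optimization is to evaluate the acquisition criterion only over a discrete candidate set, which is where the triangulation comes in. The following is a rough sketch of the general candidate-based pattern, assuming a scikit-learn GP surrogate and using barycenters of a Delaunay triangulation as stand-in candidates; the paper's actual candidate construction and acquisition details may differ:

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def f(x):                                    # black-box objective (toy example)
    return np.sum((x - 0.3) ** 2, axis=-1)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(12, 2))          # current design
y = f(X)

gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)

# Candidates: barycenters of the Delaunay simplices of the current design
tri = Delaunay(X)
cands = X[tri.simplices].mean(axis=1)

# Expected improvement evaluated only on the candidate set
mu, sd = gp.predict(cands, return_std=True)
best = y.min()
z = (best - mu) / np.maximum(sd, 1e-12)
ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)

x_next = cands[np.argmax(ei)]
print("next evaluation at", x_next)
```

Because the criterion is only evaluated at finitely many points, no local numerical optimizer is needed for the inner problem.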

Neko: a Library for Exploring Neuromorphic Learning Rules

1 code implementation · 1 May 2021 · Zixuan Zhao, Nathan Wycoff, Neil Getty, Rick Stevens, Fangfang Xia

To address this gap, we present Neko, a modular, extensible library with a focus on aiding the design of new learning algorithms.

Sensitivity Prewarping for Local Surrogate Modeling

no code implementations · 15 Jan 2021 · Nathan Wycoff, Mickaël Binois, Robert B. Gramacy

In the continual effort to improve product quality and decrease operating costs, computational modeling is increasingly deployed to determine the feasibility of product designs or configurations.

Towards On-Chip Bayesian Neuromorphic Learning

no code implementations · 5 May 2020 · Nathan Wycoff, Prasanna Balaprakash, Fangfang Xia

e-prop 1 is a promising learning algorithm that tackles this with Broadcast Alignment (a technique in which the weights used for error feedback are replaced with fixed random weights) and accumulated local information.
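For context, Broadcast Alignment belongs to the family of feedback-alignment methods: the error signal is projected back through a fixed random matrix rather than the transpose of the forward weights. A bare-bones numpy sketch of that substitution in a one-hidden-layer network follows (illustrative only; this is not the e-prop 1 update rule):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 16, 2

W1 = rng.normal(0, 0.5, (n_hid, n_in))   # forward weights, layer 1
W2 = rng.normal(0, 0.5, (n_out, n_hid))  # forward weights, layer 2
B = rng.normal(0, 0.5, (n_hid, n_out))   # fixed random feedback matrix (stands in for W2.T)

x = rng.normal(size=n_in)
target = np.array([1.0, 0.0])
lr = 0.01

h = np.tanh(W1 @ x)                      # forward pass
y = W2 @ h
e = y - target                           # output error

# Broadcast the error through B instead of backpropagating through W2.T
delta_h = (B @ e) * (1.0 - h ** 2)

W2 -= lr * np.outer(e, h)                # local, gradient-like weight updates
W1 -= lr * np.outer(delta_h, x)
```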

Sequential Learning of Active Subspaces

no code implementations · 26 Jul 2019 · Nathan Wycoff, Mickaël Binois, Stefan M. Wild

In such cases, a surrogate model is often employed, on which finite differencing is performed.

Gaussian Processes
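As a concrete picture of the "surrogate plus finite differencing" workaround the snippet mentions: fit a cheap surrogate to the expensive simulator, finite-difference the surrogate's predictive mean, and average gradient outer products to estimate the active-subspace matrix. A sketch under those assumptions with a scikit-learn GP (the paper's own approach works with the GP more directly and sequentially):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def simulator(x):                        # expensive black box (toy stand-in)
    return np.sin(3 * x[:, 0]) + 0.1 * x[:, 1] ** 2

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(40, 2))     # training design
y = simulator(X)
gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)

def fd_grad(x, h=1e-4):
    # Central finite differences on the surrogate's predictive mean
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (gp.predict((x + e)[None]) - gp.predict((x - e)[None]))[0] / (2 * h)
    return g

Z = rng.uniform(-1, 1, size=(200, 2))    # Monte Carlo points
G = np.array([fd_grad(z) for z in Z])
C_hat = G.T @ G / len(Z)                 # estimated active-subspace matrix
eigvals, eigvecs = np.linalg.eigh(C_hat)
print("estimated active direction:", eigvecs[:, -1])
```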

Neuromorphic Acceleration for Approximate Bayesian Inference on Neural Networks via Permanent Dropout

no code implementations · 29 Apr 2019 · Nathan Wycoff, Prasanna Balaprakash, Fangfang Xia

We use these results to demonstrate the feasibility of conducting the inference phase with permanent dropout on spiking neural networks, mitigating the technique's computational and energy burden, which is essential for its use at scale or on edge platforms.

Bayesian Inference · Uncertainty Quantification
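"Permanent dropout" here refers to keeping dropout active at inference time and averaging over repeated stochastic forward passes, in the spirit of Monte Carlo dropout; the spiking-network and neuromorphic-hardware aspects are outside this sketch. A minimal PyTorch illustration of that inference scheme (the model and sample counts are placeholders, not the paper's architecture):

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, 1),
)

def mc_dropout_predict(model, x, n_samples=100):
    # Keep dropout layers in "train" mode so masks are resampled at inference
    model.eval()
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)   # predictive mean and spread

x = torch.randn(5, 10)
mean, std = mc_dropout_predict(model, x)
print(mean.shape, std.shape)
```

The spread across stochastic passes serves as an approximate posterior predictive uncertainty, which is what the dropout-based Bayesian interpretation provides.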
