1 code implementation • 7 Feb 2024 • Nathan Wycoff, John W. Smith, Annie S. Booth, Robert B. Gramacy
Bayesian optimization (BO) offers an elegant approach for efficiently optimizing black-box functions.
no code implementations • 17 Oct 2023 • Nathan Wycoff
However, given that active subspaces are defined by way of gradients, it is not clear what quantity is being estimated when this methodology is applied to a discontinuous simulator.
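As a point of reference, when the simulator is smooth the active subspace is typically estimated from the eigendecomposition of the expected outer product of gradients. The minimal sketch below illustrates that construction on a hypothetical smooth function; the test function, sampling density, and sample size are illustrative assumptions, not choices from the paper.

```python
import numpy as np

# Hypothetical smooth test function f(x) = sin(a @ x) and its gradient (illustrative only).
a = np.array([1.0, 0.5, 0.0])
def grad_f(x):
    return np.cos(x @ a) * a

# Monte Carlo estimate of C = E[grad f(X) grad f(X)^T] under a uniform density.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 3))
G = np.array([grad_f(x) for x in X])
C = G.T @ G / len(X)

# Eigenvectors with large eigenvalues span the estimated active subspace.
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
print(eigvals[order])                 # importance of each direction
active_dirs = eigvecs[:, order[:1]]   # leading active direction(s)
```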
no code implementations • 9 Nov 2022 • Nathan Wycoff, Ali Arab, Katharine M. Donato, Lisa O. Singh
Modern statistical learning algorithms are capable of amazing flexibility, but struggle with interpretability.
1 code implementation • 14 Dec 2021 • Robert B. Gramacy, Annie Sauer, Nathan Wycoff
Bayesian optimization involves "inner optimization" over a new-data acquisition criterion that is non-convex and highly multi-modal, may be non-differentiable, or may otherwise thwart local numerical optimizers.
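One common way to cope with such criteria is multi-start local search over the acquisition surface. The sketch below uses a hypothetical multi-modal stand-in for an acquisition function (a real criterion, such as expected improvement, would be computed from a fitted surrogate); it is illustrative only and is not the candidate-generation scheme proposed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in acquisition surface: cheap, multi-modal, to be minimized.
def acquisition(x):
    return -(np.sin(3 * x[0]) * np.cos(2 * x[1])) + 0.1 * np.sum(x ** 2)

# Multi-start local search: launch a gradient-based optimizer from many
# random starting points and keep the best local solution found.
rng = np.random.default_rng(1)
starts = rng.uniform(-2.0, 2.0, size=(20, 2))
results = [minimize(acquisition, x0, method="L-BFGS-B", bounds=[(-2, 2)] * 2)
           for x0 in starts]
best = min(results, key=lambda r: r.fun)
print(best.x, best.fun)   # candidate next design point and its criterion value
```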
1 code implementation • 1 May 2021 • Zixuan Zhao, Nathan Wycoff, Neil Getty, Rick Stevens, Fangfang Xia
To address this gap, we present Neko, a modular, extensible library with a focus on aiding the design of new learning algorithms.
no code implementations • 15 Jan 2021 • Nathan Wycoff, Mickaël Binois, Robert B. Gramacy
In the continual effort to improve product quality and decrease operations costs, computational modeling is increasingly being deployed to determine feasibility of product designs or configurations.
no code implementations • 5 May 2020 • Nathan Wycoff, Prasanna Balaprakash, Fangfang Xia
e-prop 1 is a promising learning algorithm that tackles this with Broadcast Alignment (a technique in which the feedback pathway uses fixed random weights in place of the network's forward weights) and accumulated local information.
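For intuition, a minimal dense-network sketch of the random-feedback idea behind Broadcast Alignment is shown below: output errors are projected back through a fixed random matrix B rather than the transpose of the forward weights. The layer sizes, learning rate, and toy task are assumptions for illustration; the paper concerns recurrent spiking networks trained with e-prop, which this sketch does not model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer network (sizes are illustrative assumptions).
n_in, n_hid, n_out = 4, 16, 2
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))

# Fixed random feedback matrix: errors are broadcast through B instead of W2.T,
# so no symmetric "weight transport" is required.
B = rng.normal(0.0, 0.1, (n_hid, n_out))

def step(x, y_target, lr=0.01):
    global W1, W2
    h = np.tanh(W1 @ x)
    y = W2 @ h
    e = y - y_target                    # output error
    delta_h = (B @ e) * (1.0 - h ** 2)  # random feedback replaces W2.T @ e
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
    return float(e @ e)

x, y_target = rng.normal(size=n_in), np.array([1.0, -1.0])
for _ in range(200):
    loss = step(x, y_target)
print(loss)   # error shrinks despite the random feedback pathway
```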
no code implementations • 26 Jul 2019 • Nathan Wycoff, Mickael Binois, Stefan M. Wild
In such cases, a surrogate model is often employed, on which finite differencing is performed.
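A minimal sketch of that idea, assuming a radial-basis-function surrogate and central differences (both illustrative choices, not the method studied in the paper):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Stand-in for an expensive simulator, evaluated only at a small design.
def simulator(X):
    return np.sin(X[:, 0]) + X[:, 1] ** 2

rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, size=(40, 2))
y = simulator(X)

# Fit a cheap surrogate, then central-difference it to approximate gradients.
surrogate = RBFInterpolator(X, y, smoothing=1e-8)

def fd_grad(x0, h=1e-4):
    g = np.zeros_like(x0)
    for i in range(len(x0)):
        e = np.zeros_like(x0)
        e[i] = h
        g[i] = (surrogate((x0 + e)[None]) - surrogate((x0 - e)[None]))[0] / (2 * h)
    return g

print(fd_grad(np.array([0.2, 0.3])))   # compare with the true gradient [cos(0.2), 0.6]
```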
no code implementations • 29 Apr 2019 • Nathan Wycoff, Prasanna Balaprakash, Fangfang Xia
We use these results to demonstrate the feasibility of conducting the inference phase with permanent dropout on spiking neural networks, mitigating the technique's computational and energy burden, a prerequisite for its use at scale or on edge platforms.
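Assuming "permanent dropout" here means keeping dropout masks active at prediction time and averaging over them (a Monte Carlo dropout-style scheme), a minimal dense-layer sketch is given below; the weights, dropout rate, and activation are illustrative assumptions, and the paper's spiking-network implementation is not modeled.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy dense readout (weights and dropout rate are illustrative assumptions).
W = rng.normal(0.0, 0.1, (8, 4))
p = 0.5

def predict_with_dropout(x, n_samples=100):
    # Dropout stays active at inference: average predictions over random masks.
    preds = []
    for _ in range(n_samples):
        mask = (rng.random(x.shape) > p) / (1.0 - p)   # inverted-dropout scaling
        preds.append(np.tanh(W @ (x * mask)))
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)       # prediction and spread

mean, spread = predict_with_dropout(rng.normal(size=4))
print(mean, spread)
```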