Search Results for author: J. L. Peterson

Found 5 papers, 1 paper with code

Transfer learning driven design optimization for inertial confinement fusion

no code implementations26 May 2022 K. D. Humbird, J. L. Peterson

Transfer learning is a promising approach to creating predictive models that incorporate simulation and experimental data into a common framework.

Bayesian Optimization, Transfer Learning

Cognitive simulation models for inertial confinement fusion: Combining simulation and experimental data

no code implementations19 Mar 2021 K. D. Humbird, J. L. Peterson, J. Salmonson, B. K. Spears

In the context of ICF design, neural network models are trained on large simulation databases and partially retrained on experimental data, producing models that are far more accurate than simulations alone.
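The idea in this abstract can be illustrated with a minimal sketch. This is not the authors' setup (they partially retrain layers of deep networks on ICF data); it is a hypothetical toy in which a small model is "pretrained" on plentiful simulation data and then only an output-bias term is refit on a handful of offset "experimental" points — the simplest possible form of partial retraining. All data, shapes, and names here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy stand-in: "simulation" data is plentiful, "experimental"
# data is scarce and systematically offset from the simulations.
x_sim = rng.uniform(-1, 1, size=(500, 1))
y_sim = np.sin(3 * x_sim)                  # simulated response
x_exp = rng.uniform(-1, 1, size=(20, 1))
y_exp = np.sin(3 * x_exp) + 0.3            # experiments differ by a shift

def hidden(x, W, b):
    """Frozen random hidden layer of a one-hidden-layer network."""
    return np.tanh(x @ W + b)

# "Pretrain" on simulations: random hidden features + least-squares readout.
W, b = rng.normal(size=(1, 32)), rng.normal(size=32)
w_out, *_ = np.linalg.lstsq(hidden(x_sim, W, b), y_sim, rcond=None)

# "Partial retraining": freeze every weight and refit only an output-bias
# correction on the small experimental set.
bias = float(np.mean(y_exp - hidden(x_exp, W, b) @ w_out))

# Compare on held-out points drawn from the experimental distribution.
x_t = np.linspace(-1, 1, 200).reshape(-1, 1)
y_t = np.sin(3 * x_t) + 0.3
H_t = hidden(x_t, W, b)
err_sim_only = float(np.mean((H_t @ w_out - y_t) ** 2))
err_transfer = float(np.mean((H_t @ w_out + bias - y_t) ** 2))
print(f"simulation-only MSE: {err_sim_only:.4f}")
print(f"transfer-updated MSE: {err_transfer:.4f}")
```

Even this crude correction closes most of the gap caused by the simulation/experiment offset, which is the core intuition behind retraining only part of a pretrained model on scarce data.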

Transfer Learning

Transfer learning to model inertial confinement fusion experiments

no code implementations14 Dec 2018 K. D. Humbird, J. L. Peterson, R. G. McClarren

We introduce the idea of hierarchical transfer learning, in which neural networks trained on low fidelity models are calibrated to high fidelity models, then to experimental data.
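The hierarchical idea described above — low-fidelity model, calibrated to high fidelity, calibrated again to experiment — can be caricatured with simple additive corrections. The sketch below uses polynomials in place of neural networks and invented fidelity levels; none of the functions or sample sizes come from the paper. The key structure it preserves is that each calibration step fits a progressively simpler correction on progressively scarcer data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fidelity ladder (invented for illustration):
f_lo  = lambda x: np.sin(3 * x)                  # cheap low-fidelity model
f_hi  = lambda x: 1.1 * np.sin(3 * x) + 0.10     # scarcer high-fidelity model
f_exp = lambda x: 1.1 * np.sin(3 * x) + 0.25     # very scarce "experiments"

# Plenty of low-fidelity data, less high-fidelity, very little experimental.
x_lo = rng.uniform(-1, 1, 300)
x_hi = rng.uniform(-1, 1, 30)
x_ex = rng.uniform(-1, 1, 8)

# Level 0: fit a surrogate to low-fidelity data (polynomial stands in for a NN).
c_lo = np.polyfit(x_lo, f_lo(x_lo), 7)
m_lo = lambda x: np.polyval(c_lo, x)

# Level 1: calibrate to high fidelity with a low-order additive correction.
c_hi = np.polyfit(x_hi, f_hi(x_hi) - m_lo(x_hi), 1)
m_hi = lambda x: m_lo(x) + np.polyval(c_hi, x)

# Level 2: calibrate to experiments with an even simpler (constant) correction.
c_ex = float(np.mean(f_exp(x_ex) - m_hi(x_ex)))
m_ex = lambda x: m_hi(x) + c_ex

x_t = np.linspace(-1, 1, 200)
mse_lo   = float(np.mean((m_lo(x_t) - f_exp(x_t)) ** 2))
mse_full = float(np.mean((m_ex(x_t) - f_exp(x_t)) ** 2))
print(f"MSE vs experiment, low-fidelity only: {mse_lo:.4f}")
print(f"MSE vs experiment, full hierarchy:    {mse_full:.4f}")
```

Each rung of the ladder only has to learn the discrepancy with the rung below it, which is why the scarce high-level data can support only a constant or linear correction and still help.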

Transfer Learning

Predicting the time-evolution of multi-physics systems with sequence-to-sequence models

no code implementations14 Nov 2018 K. D. Humbird, J. L. Peterson, R. G. McClarren

In this work, sequence-to-sequence (seq2seq) models, originally developed for language translation, are used to predict the temporal evolution of complex, multi-physics computer simulations.
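The defining seq2seq inference pattern — encode an observed prefix, then decode the rest of the sequence autoregressively, feeding each prediction back as input — can be shown without a recurrent network at all. The sketch below is a drastically simplified, linear stand-in (a least-squares one-step map rolled out on a toy damped oscillation); it is not the paper's model, only the decoding loop it relies on.

```python
import numpy as np

# Toy stand-in for an expensive simulation: a damped oscillation in time.
dt, n = 0.25, 60
t = np.arange(n) * dt
y = np.exp(-0.05 * t) * np.cos(1.5 * t)

# "Training": learn a one-step map y[k] ~ w . y[k-3:k] by least squares
# (a linear caricature of the learned recurrent decoder).
lag = 3
X = np.lib.stride_tricks.sliding_window_view(y[:-1], lag)   # rows y[i:i+3]
Y = y[lag:]                                                  # targets y[i+3]
w, *_ = np.linalg.lstsq(X, Y, rcond=None)

# "Inference": consume an observed prefix, then roll forward
# autoregressively, feeding each prediction back in.
window = list(y[:lag])
pred = []
for _ in range(n - lag):
    nxt = float(np.array(window[-lag:]) @ w)
    pred.append(nxt)
    window.append(nxt)

mse = float(np.mean((np.array(pred) - y[lag:]) ** 2))
print(f"autoregressive rollout MSE: {mse:.2e}")
```

A damped cosine satisfies an exact low-order linear recurrence, so the rollout here is essentially perfect; for genuinely multi-physics outputs the one-step map would be a trained network, but the feed-predictions-back loop is the same.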

Translation

Deep neural network initialization with decision trees

1 code implementation3 Jul 2017 K. D. Humbird, J. L. Peterson, R. G. McClarren

By combining the user-friendly features of decision tree models with the flexibility and scalability of deep neural networks, DJINN is an attractive algorithm for training predictive models on a wide range of complex datasets.
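DJINN's actual algorithm maps the full topology of a trained decision tree into the weights of a deep network; the sketch below is only a one-split caricature of that idea, with all data and constants invented. It fits a depth-1 tree (a stump) by exhaustive split search, then reads a tiny network's initial weights directly off the tree: one steep sigmoid unit encodes the split location, and the output layer encodes the two leaf values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy regression data with a sharp transition (tree-friendly structure).
x = np.sort(rng.uniform(-1, 1, 200))
y = np.where(x < 0.2, -1.0, 1.0) + 0.05 * rng.normal(size=200)

# Step 1: fit a depth-1 decision tree (a stump) by exhaustive split search.
def fit_stump(x, y):
    best = (np.inf, None)
    for s in x[1:-1]:
        left, right = y[x < s], y[x >= s]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, (s, left.mean(), right.mean()))
    return best[1]

s, c_left, c_right = fit_stump(x, y)

# Step 2 (the tree-to-network idea, heavily simplified): initialize the
# network from the tree. A steep sigmoid soft-encodes the split; the
# output layer encodes the leaf values.
k = 50.0                                        # steepness of the soft split
def net(x):
    h = 1.0 / (1.0 + np.exp(-k * (x - s)))      # soft indicator of x >= s
    return c_left + (c_right - c_left) * h

# The tree-initialized network matches the stump almost everywhere before
# any gradient-based training; fine-tuning would then refine it.
tree_pred = np.where(x < s, c_left, c_right)
agreement = float(np.mean(np.abs(net(x) - tree_pred) < 0.05))
print(f"split at {s:.3f}; net vs. tree agreement: {agreement:.2%}")
```

The payoff of such a warm start is that the network begins training from a sensible, data-derived function rather than a random one — the property the abstract credits for DJINN's usability.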
