no code implementations • 26 May 2022 • K. D. Humbird, J. L. Peterson
Transfer learning is a promising approach to creating predictive models that incorporate simulation and experimental data into a common framework.
no code implementations • 19 Mar 2021 • K. D. Humbird, J. L. Peterson, J. Salmonson, B. K. Spears
In the context of ICF design, neural network models are trained on large simulation databases and partially retrained on experimental data, producing models that are far more accurate than simulations alone.
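The "train on simulations, partially retrain on experiments" idea can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: a toy 1-D problem stands in for an ICF design space, a tiny numpy MLP stands in for the simulation surrogate, and "partial retraining" is realized by freezing the hidden layer and re-solving only the output layer on sparse "experimental" data (the data, architecture, and sizes are all hypothetical).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the "simulation" predicts sin(x); the "experiment"
# has a scale and offset the simulations miss. Experiments are scarce.
x_sim = rng.uniform(-3, 3, (2000, 1))
y_sim = np.sin(x_sim)
x_exp = rng.uniform(-3, 3, (30, 1))
y_exp = 1.2 * np.sin(x_exp) + 0.3

# Tiny MLP surrogate: one tanh hidden layer, linear output.
W1 = rng.normal(0, 1, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

# Stage 1: train all weights on the abundant simulation data
# with full-batch gradient descent on mean squared error.
lr = 0.05
for _ in range(2000):
    h, pred = forward(x_sim)
    err = pred - y_sim
    gW2 = h.T @ err / len(x_sim); gb2 = err.mean(0)
    gh = (err @ W2.T) * (1 - h**2)
    gW1 = x_sim.T @ gh / len(x_sim); gb1 = gh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Stage 2 (the transfer step): freeze the hidden layer and retrain
# only the output layer on the 30 experimental points, here via a
# closed-form least-squares solve over the frozen features.
h_exp, _ = forward(x_exp)
A = np.hstack([h_exp, np.ones((len(x_exp), 1))])
coef, *_ = np.linalg.lstsq(A, y_exp, rcond=None)
W2, b2 = coef[:-1], coef[-1]

_, pred_exp = forward(x_exp)
mse = float(np.mean((pred_exp - y_exp) ** 2))
print(mse)  # small: the retrained head absorbs the sim-to-experiment gap
```

The design choice being illustrated is that the frozen hidden layer carries structure learned from cheap simulations, so only a small number of output-layer parameters must be fit to the expensive experimental data.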
no code implementations • 14 Dec 2018 • K. D. Humbird, J. L. Peterson, R. G. McClarren
We introduce the idea of hierarchical transfer learning, in which neural networks trained on low fidelity models are calibrated to high fidelity models, then to experimental data.
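A minimal sketch of the hierarchical idea, under hypothetical models: each rung of the fidelity ladder learns only the discrepancy between the previous model's prediction and the next fidelity level, so the final model chains low-fidelity physics with corrections calibrated on progressively scarcer high-fidelity and experimental data. The quadratic/linear toy models below are invented for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fidelity ladder: each level adds physics the one below misses.
x = rng.uniform(-2, 2, (40, 1))
y_low  = x**2                    # cheap low-fidelity model output
y_high = x**2 + 0.5 * x          # expensive high-fidelity model output
y_exp  = x**2 + 0.5 * x + 0.2    # "experimental" measurements

def fit_correction(x, residual):
    # Learn a linear discrepancy term on whatever data is available.
    A = np.hstack([x, np.ones((len(x), 1))])
    coef, *_ = np.linalg.lstsq(A, residual, rcond=None)
    return lambda xq: np.hstack([xq, np.ones((len(xq), 1))]) @ coef

# Rung 0: the low-fidelity model itself.
model_low = lambda xq: xq**2

# Rung 1: calibrate to high fidelity using all 40 points.
corr1 = fit_correction(x, y_high - model_low(x))
model_high = lambda xq: model_low(xq) + corr1(xq)

# Rung 2: calibrate to experiment using only 5 points.
corr2 = fit_correction(x[:5], y_exp[:5] - model_high(x[:5]))
model_cal = lambda xq: model_high(xq) + corr2(xq)
```

Because each correction targets only a residual, the data requirement shrinks at each rung: five experimental points suffice here, where fitting the full response from scratch would not.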
no code implementations • 14 Nov 2018 • K. D. Humbird, J. L. Peterson, R. G. McClarren
In this work, sequence-to-sequence (seq2seq) models, originally developed for language translation, are used to predict the temporal evolution of complex, multi-physics computer simulations.
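The core mechanic a seq2seq model shares with simpler autoregressive predictors is decoding: each predicted state is fed back in to produce the next. The sketch below illustrates that rollout on a toy damped-oscillator "simulation" using a learned linear lag model in place of the paper's recurrent encoder-decoder; the trajectory and lag order are hypothetical.

```python
import numpy as np

# Toy "simulation" output: a damped oscillation sampled in time.
t = np.linspace(0, 10, 200)
y = np.exp(-0.2 * t) * np.cos(2 * t)

# Learn a one-step predictor from a lag window (a linear stand-in for
# the decoder; a real seq2seq model would use recurrent networks here).
lag = 2
X = np.stack([y[i:i + lag] for i in range(len(y) - lag)])
Y = y[lag:]
w, *_ = np.linalg.lstsq(X, Y, rcond=None)

# "Encoder" role: the last observed window summarizes the history.
# "Decoder" role: roll forward autoregressively, feeding each
# prediction back in as input, exactly as seq2seq decoding does.
window = list(y[100:100 + lag])
preds = []
for _ in range(50):
    nxt = np.array(window[-lag:]) @ w
    preds.append(nxt)
    window.append(nxt)
# preds now extrapolates the temporal evolution 50 steps past t[101]
```

A damped cosine obeys an exact order-2 linear recurrence, so this toy rollout tracks the true trajectory closely; for genuinely multi-physics simulations that structure is absent, which is what motivates the recurrent architecture.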
1 code implementation • 3 Jul 2017 • K. D. Humbird, J. L. Peterson, R. G. McClarren
By combining the user-friendly features of decision tree models with the flexibility and scalability of deep neural networks, DJINN is an attractive algorithm for training predictive models on a wide range of complex datasets.
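The key DJINN ingredient, mapping a decision tree's structure into neural network weights, can be illustrated on the smallest possible case. This is a hand-built sketch of the idea, not the paper's algorithm: a depth-1 tree with one hypothetical split is encoded as a steep-sigmoid hidden unit, with the leaf values carried by the output layer; in DJINN proper this initialization seeds a deep network that is then trained further.

```python
import numpy as np

# Hand-built depth-1 "tree": if x0 < 0.5 predict 1.0, else predict 3.0.
threshold, left_val, right_val = 0.5, 1.0, 3.0

# Tree-informed initialization (illustrative, not DJINN's exact mapping):
# a steep sigmoid encodes the split as a soft step, and the output layer
# interpolates between the two leaf values. Gradient training would then
# refine these weights rather than start from a random network.
k = 50.0                                    # steepness of the soft split
W1 = np.array([[k]]); b1 = np.array([-k * threshold])
W2 = np.array([[right_val - left_val]]); b2 = np.array([left_val])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def net(x):
    return sigmoid(x @ W1 + b1) @ W2 + b2

def tree(x):
    return np.where(x[:, 0] < threshold, left_val, right_val)

x = np.array([[0.0], [0.2], [0.8], [1.0]])
# Before any gradient step, the network already reproduces the tree,
# which is the point of tree-informed weight initialization.
```

Starting training from a tree-shaped function rather than random weights is what lets DJINN inherit the tree's usability while keeping the network's flexibility.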