no code implementations • 4 Jan 2023 • Isaac Tamblyn, Tengkai Yu, Ian Benlolo
We discuss our simulation tool, fintech-kMC, which is designed to generate synthetic data for machine learning model development and testing.
1 code implementation • 16 May 2022 • Stephen Whitelam, Viktor Selin, Ian Benlolo, Corneel Casert, Isaac Tamblyn
We examine the zero-temperature Metropolis Monte Carlo algorithm as a tool for training a neural network by minimizing a loss function.
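A minimal sketch of the idea described above (not the authors' code): a small network is trained by proposing random Gaussian mutations of its weights and accepting a move only if the loss does not increase, which is the zero-temperature Metropolis acceptance rule. The toy data, network size, and mutation scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (hypothetical): fit y = sin(x) with a one-hidden-layer network.
x = np.linspace(-3, 3, 64)[:, None]
y = np.sin(x)

def init_params(n_hidden=16):
    return {
        "W1": rng.normal(0, 0.5, (1, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0, 0.5, (n_hidden, 1)),
        "b2": np.zeros(1),
    }

def forward(p, x):
    h = np.tanh(x @ p["W1"] + p["b1"])
    return h @ p["W2"] + p["b2"]

def loss(p):
    return np.mean((forward(p, x) - y) ** 2)

params = init_params()
current = loss(params)
sigma = 0.02  # mutation scale (assumption)

for step in range(20000):
    # Propose a Gaussian mutation of every weight.
    trial = {k: v + rng.normal(0, sigma, v.shape) for k, v in params.items()}
    trial_loss = loss(trial)
    # Zero-temperature acceptance: keep the move only if the loss does not increase.
    if trial_loss <= current:
        params, current = trial, trial_loss

print(f"final loss: {current:.4f}")
```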
no code implementations • 9 May 2022 • Kevin Ryczko, Jaron T. Krogel, Isaac Tamblyn
We present two machine learning methodologies that are capable of predicting diffusion Monte Carlo (DMC) energies with small datasets (~60 DMC calculations in total).
no code implementations • 5 Apr 2022 • Mohammad Sajjad Ghaemi, Karl Grantham, Isaac Tamblyn, Yifeng Li, Hsu Kiang Ooi
Deploying generative machine learning techniques to generate novel chemical structures based on molecular fingerprint representation has been well established in molecular design.
no code implementations • 10 Mar 2022 • Stephen Whitelam, Isaac Tamblyn
We show that cellular automata can classify data by inducing a form of dynamical phase coexistence.
1 code implementation • 17 Feb 2022 • Corneel Casert, Isaac Tamblyn, Stephen Whitelam
We show that a neural network originally designed for language processing can learn the dynamical rules of a stochastic system by observation of a single dynamical trajectory of the system, and can accurately predict its emergent behavior under conditions not observed during training.
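An illustrative sketch of the training setup this describes, under stated assumptions: the single trajectory is encoded as a sequence of discrete state tokens and a small causal transformer (a language-model-style architecture) is trained to predict the next state. The vocabulary size, window length, and model dimensions are placeholders, not the authors' configuration.

```python
import torch
import torch.nn as nn

vocab = 32          # number of discrete states (assumption)
seq_len = 128       # trajectory window length (assumption)
d_model = 64

# A single observed trajectory, encoded as state indices (dummy data here).
trajectory = torch.randint(0, vocab, (1, seq_len + 1))

class TrajectoryLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, vocab)

    def forward(self, x):
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        h = self.encoder(self.embed(x), mask=mask)   # causal self-attention
        return self.head(h)                          # logits over next states

model = TrajectoryLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

inputs, targets = trajectory[:, :-1], trajectory[:, 1:]
for step in range(200):
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, vocab), targets.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
```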
no code implementations • 29 Dec 2021 • Chris Beeler, Xinkai Li, Colin Bellinger, Mark Crowley, Maia Fraser, Isaac Tamblyn
Using a novel toy nautical navigation environment, we show that dynamic programming can be used when only incomplete information about a partially observed Markov decision process (POMDP) is known.
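For reference, a generic value-iteration sketch showing the dynamic-programming machinery the entry relies on. The 1-D toy drift environment below is a placeholder, not the nautical navigation environment from the paper, and no partial observability is modelled here.

```python
import numpy as np

n_states, gamma = 10, 0.95
actions = [-1, +1]                                 # drift left / right
rewards = np.zeros(n_states); rewards[-1] = 1.0    # goal at the right edge

def step(s, a):
    return min(max(s + a, 0), n_states - 1)

V = np.zeros(n_states)
for _ in range(200):                               # Bellman backups until convergence
    V_new = np.array([max(rewards[step(s, a)] + gamma * V[step(s, a)] for a in actions)
                      for s in range(n_states)])
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = [max(actions, key=lambda a: V[step(s, a)]) for s in range(n_states)]
print(policy)
```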
no code implementations • 14 Dec 2021 • Colin Bellinger, Andriy Drozdyuk, Mark Crowley, Isaac Tamblyn
The use of reinforcement learning (RL) in scientific applications, such as materials design and automated chemistry, is increasing.
no code implementations • 11 Jun 2021 • Sebastian J. Wetzel, Roger G. Melko, Isaac Tamblyn
Twin neural network regression (TNNR) is a semi-supervised regression algorithm: it can be trained on unlabelled data points as long as other, labelled anchor data points are present.
1 code implementation • 14 Apr 2021 • Pedram Abdolghader, Andrew Ridsdale, Tassos Grammatikopoulos, Gavin Resch, Francois Legare, Albert Stolow, Adrian F. Pegoraro, Isaac Tamblyn
Hyperspectral stimulated Raman scattering (SRS) microscopy is a label-free technique for biomedical and mineralogical imaging which can suffer from low signal-to-noise ratios.
1 code implementation • 5 Mar 2021 • Matteo Aldeghi, Florian Häse, Riley J. Hickman, Isaac Tamblyn, Alán Aspuru-Guzik
Design-of-experiment and optimization algorithms are often adopted to solve such tasks efficiently.
no code implementations • 23 Feb 2021 • Kyle Mills, Isaac Tamblyn
Using images labelled with only the counts of the objects present, the structure of the extensive deep neural network can be exploited to perform localization of the objects within the visual field.
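A sketch of the general mechanism, with placeholder architecture and data (not the network from the paper): a fully convolutional network produces a per-pixel contribution map whose sum is the predicted count. Only the count label supervises training; the intermediate map can then be inspected to localize objects.

```python
import torch
import torch.nn as nn

class CountNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),                     # per-pixel contribution map
        )

    def forward(self, x):
        density = torch.relu(self.features(x))       # (B, 1, H, W), non-negative
        count = density.sum(dim=(1, 2, 3))           # extensive quantity: sum over the field
        return count, density

model = CountNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.rand(8, 1, 64, 64)                    # dummy batch
counts = torch.randint(0, 10, (8,)).float()          # count-only labels

pred, density_map = model(images)
loss = nn.functional.mse_loss(pred, counts)          # supervise the count alone
loss.backward(); opt.step()
# density_map can now be visualised to localise objects within the visual field.
```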
no code implementations • 12 Jan 2021 • Hitarth Choubisa, Petar Todorović, Joao M. Pina, Darshan H. Parmar, Ziliang Li, Oleksandr Voznyy, Isaac Tamblyn, Edward Sargent
To provide guidance in experimental materials synthesis, these need to be coupled with an accurate yet effective search algorithm and training data consistent with experimental observations.
no code implementations • 29 Dec 2020 • Sebastian J. Wetzel, Kevin Ryczko, Roger G. Melko, Isaac Tamblyn
The solution of a traditional regression problem is then obtained by averaging over the ensemble of predicted differences between the target of an unseen data point and the targets of all training data points.
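A minimal sketch of that inference rule (not the authors' implementation): a trained difference predictor F(x1, x2) ≈ y1 - y2 is evaluated against every training anchor, and the per-anchor estimates are averaged. The toy difference function below stands in for a trained twin network.

```python
import numpy as np

def tnnr_predict(F, x_new, X_train, y_train):
    """F(x1, x2) returns the predicted target difference y1 - y2."""
    estimates = [F(x_new, x_i) + y_i for x_i, y_i in zip(X_train, y_train)]
    return np.mean(estimates)

# Hypothetical usage with a toy difference predictor:
X_train = np.array([0.0, 1.0, 2.0])
y_train = np.array([0.0, 1.0, 4.0])
F = lambda a, b: a**2 - b**2            # stand-in for a trained twin network
print(tnnr_predict(F, 1.5, X_train, y_train))   # ensemble-averaged prediction
```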
no code implementations • 22 Dec 2020 • Stephen Whitelam, Isaac Tamblyn
Within simulations of molecules deposited on a surface we show that neuroevolutionary learning can design particles and time-dependent protocols to promote self-assembly, without input from physical concepts such as thermal equilibrium or mechanical stability and without prior knowledge of candidate or competing structures.
no code implementations • 17 Nov 2020 • Corneel Casert, Tom Vieijra, Stephen Whitelam, Isaac Tamblyn
We use a neural network ansatz originally designed for the variational optimization of quantum systems to study dynamical large deviations in classical ones.
no code implementations • 27 Oct 2020 • Pascal Friederich, Mario Krenn, Isaac Tamblyn, Alan Aspuru-Guzik
Machine learning has become a widely used tool for questions in the physical sciences, successfully applied to classification, regression, and optimization tasks in many areas.
no code implementations • 15 Aug 2020 • Stephen Whitelam, Viktor Selin, Sang-Won Park, Isaac Tamblyn
We show analytically that training a neural network by conditioned stochastic mutation or neuroevolution of its weights is equivalent, in the limit of small mutations, to gradient descent on the loss function in the presence of Gaussian white noise.
no code implementations • 26 May 2020 • Colin Bellinger, Rory Coles, Mark Crowley, Isaac Tamblyn
Our empirical evaluation demonstrates that Amrl-Q agents are able to learn a policy and state estimator in parallel during online training.
1 code implementation • 15 Apr 2020 • Colin Bellinger, Rory Coles, Mark Crowley, Isaac Tamblyn
Reinforcement learning (RL) has been demonstrated to have great potential in many applications of scientific discovery and design.
1 code implementation • 3 Mar 2020 • Kyle Sprague, Juan Carrasquilla, Steve Whitelam, Isaac Tamblyn
Transfer learning refers to applying knowledge gained while solving one machine learning task to the solution of a closely related problem.
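A generic transfer-learning sketch (a PyTorch-style workflow assumed here, not tied to the paper's models): feature layers trained on a source task are frozen and reused, and only a new head is fit on the closely related target task.

```python
import torch
import torch.nn as nn

# Pretrained source-task model: feature extractor + task-specific head.
source_model = nn.Sequential(
    nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU()),  # features
    nn.Linear(64, 10),                                                          # source head
)
# ... assume source_model has been trained on the source task ...

features = source_model[0]
for p in features.parameters():
    p.requires_grad = False            # freeze the transferred knowledge

target_model = nn.Sequential(features, nn.Linear(64, 1))   # new head for the related task
opt = torch.optim.Adam(target_model[1].parameters(), lr=1e-3)

x, y = torch.randn(16, 32), torch.randn(16, 1)             # dummy target-task batch
loss = nn.functional.mse_loss(target_model(x), y)
loss.backward(); opt.step()
```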
no code implementations • 18 Dec 2019 • Stephen Whitelam, Isaac Tamblyn
We show that neural networks trained by evolutionary reinforcement learning can enact efficient molecular self-assembly protocols.
no code implementations • 2 Sep 2019 • Stephen Whitelam, Daniel Jacobson, Isaac Tamblyn
We show how to calculate the likelihood of dynamical large deviations using evolutionary reinforcement learning.
1 code implementation • 20 Mar 2019 • Chris Beeler, Uladzimir Yahorau, Rory Coles, Kyle Mills, Stephen Whitelam, Isaac Tamblyn
Gradient-based reinforcement learning is able to learn the Stirling cycle, whereas an evolutionary approach achieves the optimal Carnot cycle.
no code implementations • 23 Oct 2017 • Kyle Mills, Isaac Tamblyn
We demonstrate that a generative adversarial network can be trained to produce Ising model configurations in distinct regions of phase space.
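A hedged sketch of a small GAN producing Ising-like spin configurations; the lattice size, architectures, and training details below are illustrative choices, not the network from the paper. The generator outputs values in (-1, 1) as a continuous relaxation of up/down spins.

```python
import torch
import torch.nn as nn

L = 16                                   # lattice size (assumption)
latent = 32

generator = nn.Sequential(
    nn.Linear(latent, 256), nn.ReLU(),
    nn.Linear(256, L * L), nn.Tanh(),    # outputs in (-1, 1), a relaxation of +/-1 spins
)
discriminator = nn.Sequential(
    nn.Linear(L * L, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                   # real/fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randint(0, 2, (64, L * L)).float() * 2 - 1   # placeholder for sampled configurations

# One discriminator update: real configurations vs. generator samples.
z = torch.randn(64, latent)
fake = generator(z).detach()
d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
         bce(discriminator(fake), torch.zeros(64, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# One generator update: try to fool the discriminator.
g_loss = bce(discriminator(generator(torch.randn(64, latent))), torch.ones(64, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```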
Statistical Mechanics
no code implementations • 17 Aug 2017 • Kyle Mills, Kevin Ryczko, Iryna Luchak, Adam Domurad, Chris Beeler, Isaac Tamblyn
We demonstrate the application of EDNNs to three physical systems: the Ising model and two hexagonal/graphene-like datasets.
Computational Physics • Materials Science
1 code implementation • 5 Feb 2017 • Kyle Mills, Michael Spanner, Isaac Tamblyn
We have trained a deep (convolutional) neural network to predict the ground-state energy of an electron in four classes of confining two-dimensional electrostatic potentials.
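A minimal sketch of the setup described above, with placeholder data and an illustrative architecture (not the network from the paper): a convolutional network maps a discretized 2-D confining potential to a single ground-state-energy value.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
    nn.Linear(128, 1),                      # predicted ground-state energy
)

# Dummy batch: potentials sampled on a 64x64 grid with reference energies.
potentials = torch.randn(8, 1, 64, 64)
energies = torch.randn(8, 1)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(model(potentials), energies)
loss.backward(); opt.step()
```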