Three decades of research on molecular nanomagnets have raised their magnetic memory from liquid-helium to liquid-nitrogen temperatures, thanks to careful choice of the magnetic ion and its coordination environment.
Mesoscale and Nanoscale Physics
no code implementations • 17 Feb 2021 • Avery L. Blockmon, Aman Ullah, Kendall D. Hughey, Yan Duan, Kenneth R. O'Neal, Mykhaylo Ozerov, José J. Baldoví, Juan Aragó, Alejandro Gaita-Ariño, Eugenio Coronado, Janice L. Musfeldt
Molecular vibrations play a key role in magnetic relaxation processes of molecular spin qubits as they couple to spin states, leading to the loss of quantum information.
Mesoscale and Nanoscale Physics
In this paper, we explore a technique, variable skipping, for accelerating range density estimation over deep autoregressive models.
Semi-supervised learning has emerged as an important paradigm in protein modeling due to the high cost of acquiring supervised protein labels, but the current literature is fragmented when it comes to datasets and standardized evaluation techniques.
To produce a truly usable estimator, we develop a Monte Carlo integration scheme on top of autoregressive models that can efficiently handle range queries with dozens of dimensions or more.
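The idea of Monte Carlo range estimation over an autoregressive model can be sketched as progressive sampling: walk the variables in autoregressive order, restrict each conditional to the query range, multiply the in-range conditional masses, and sample the next value from the restricted conditional. This is a minimal toy sketch, not the paper's implementation; `cond_prob` and the discrete two-variable model are hypothetical stand-ins.

```python
import random

def estimate_range_density(cond_prob, ranges, n_samples=1000):
    """Progressive-sampling Monte Carlo estimate of P(x in ranges) under an
    autoregressive model. cond_prob(prefix) returns a dict mapping each value
    of the next variable to its conditional probability given the prefix;
    ranges is a list of sets of allowed values, one per dimension."""
    total = 0.0
    for _ in range(n_samples):
        prefix, weight = [], 1.0
        for allowed in ranges:
            probs = cond_prob(tuple(prefix))
            # Probability mass the conditional puts inside the query range.
            mass = sum(p for v, p in probs.items() if v in allowed)
            if mass == 0.0:
                weight = 0.0
                break
            weight *= mass
            # Sample the next value from the conditional restricted to the range.
            r, acc, chosen = random.random() * mass, 0.0, None
            for v, p in probs.items():
                if v not in allowed:
                    continue
                chosen = v
                acc += p
                if acc >= r:
                    break
            prefix.append(chosen)
        total += weight
    return total / n_samples

# Toy 2-D model: x1 uniform on {0, 1}; x2 depends on x1.
def cond_prob(prefix):
    if len(prefix) == 0:
        return {0: 0.5, 1: 0.5}
    return {0: 0.9, 1: 0.1} if prefix[0] == 0 else {0: 0.2, 1: 0.8}

# Estimate P(x2 = 1); the exact value is 0.5 * 0.1 + 0.5 * 0.8 = 0.45.
est = estimate_range_density(cond_prob, [{0, 1}, {1}], n_samples=20000)
```

Each sample's weight is the product of in-range conditional masses along one sampled path, an unbiased estimate of the box probability; a real system would batch these forward passes through the neural model.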
Flow-based generative models are powerful exact likelihood models with efficient sampling and inference.
Ranked #13 on Image Generation on ImageNet 32x32 (bpd metric)
Results are presented on a new environment we call "Krazy World": a difficult, high-dimensional gridworld designed to highlight the importance of correctly differentiating through sampling distributions in meta-reinforcement learning.
To mitigate this issue, we derive a bias-free action-dependent baseline for variance reduction which fully exploits the structural form of the stochastic policy itself and does not make any additional assumptions about the MDP.
We consider the problem of exploration in meta-reinforcement learning.
In this paper, we analyze the behavior of vanilla model-based reinforcement learning methods when deep neural networks are used to learn both the model and the policy, and show that the learned policy tends to exploit regions where insufficient data is available for the model to be learned, causing instability in training.
A high-level policy is then trained on top of these skills, significantly improving exploration and making it possible to tackle sparse rewards in the downstream tasks.
A neural net is trained that takes as input one demonstration and the current state (which is initially the initial state of the other demonstration in the pair) and outputs an action, with the goal that the resulting sequence of states and actions matches the second demonstration as closely as possible.
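The rollout loop this describes can be sketched as follows. The environment, policy, and demonstration format here are hypothetical stand-ins (the actual system conditions a neural network on a full demonstration); the toy greedy policy merely illustrates the interface.

```python
def imitation_rollout(policy, env, demo, horizon):
    """Roll out a demonstration-conditioned policy: at every step the
    policy sees one full demonstration plus the current state and emits
    an action; training would push the resulting trajectory to match a
    second demonstration of the same task."""
    state = env.reset()
    trajectory = [state]
    for _ in range(horizon):
        action = policy(demo, state)
        state = env.step(state, action)
        trajectory.append(state)
    return trajectory

class LineEnv:
    """Toy deterministic environment: states are integers on a line."""
    def reset(self):
        return 0
    def step(self, state, action):
        return state + action

def greedy_policy(demo, state):
    """Toy stand-in for the trained net: step toward the demo's final state."""
    goal = demo[-1]
    return 1 if state < goal else (-1 if state > goal else 0)

traj = imitation_rollout(greedy_policy, LineEnv(), demo=[0, 1, 2, 3], horizon=5)
```

In the real method, `policy` would be a neural network and the loss would compare `traj` against the held-out second demonstration of the pair.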
Machine learning classifiers are known to be vulnerable to inputs maliciously constructed by adversaries to force misclassification.
In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various high-dimensional and/or continuous deep RL benchmarks.
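One common way to generalize count-based exploration to continuous states is to hash states (e.g. with SimHash-style random projections) and grant a bonus that decays with the visit count of the hash bucket. A minimal sketch under that assumption; the projection planes, class names, and bonus form `beta / sqrt(n)` are illustrative, not the paper's exact configuration:

```python
import math
from collections import defaultdict

def simhash(state, planes):
    """Discretize a continuous state into a hashable binary code via the
    sign pattern of random projections."""
    return tuple(int(sum(p * s for p, s in zip(plane, state)) > 0)
                 for plane in planes)

class CountBonus:
    """Exploration bonus beta / sqrt(n(hash(s))), where n counts visits
    to the state's hash bucket."""
    def __init__(self, planes, beta=1.0):
        self.planes, self.beta = planes, beta
        self.counts = defaultdict(int)

    def bonus(self, state):
        code = simhash(state, self.planes)
        self.counts[code] += 1
        return self.beta / math.sqrt(self.counts[code])
```

States that hash to the same bucket share a count, so nearby states jointly lose novelty; the reward passed to the RL algorithm would be the environment reward plus this bonus.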
Ranked #1 on Atari Games on Atari 2600 Freeway
The activations of the RNN store the state of the "fast" RL algorithm on the current (previously unseen) MDP.
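The key mechanism is that the recurrent state is carried across episodes of the same (freshly sampled) MDP, so the activations can implement a "fast" learning algorithm for that MDP. As a toy illustration of the interface, assuming a two-armed bandit and replacing the RNN with an explicit tabular learner whose "hidden state" is per-arm statistics (all names hypothetical):

```python
class BanditEnv:
    """Two-armed bandit; which arm pays off is fixed within a trial,
    i.e. per sampled MDP."""
    def __init__(self, good_arm):
        self.good_arm = good_arm
    def reset(self):
        return 0  # dummy observation
    def step(self, action):
        return 0, (1.0 if action == self.good_arm else 0.0), True

class TabularFastLearner:
    """Stand-in for the RNN policy: its 'hidden state' is per-arm pull
    counts and reward totals, making explicit what the RNN activations
    would have to encode."""
    def initial_hidden(self):
        return {0: [0, 0.0], 1: [0, 0.0]}  # arm -> [pulls, total reward]

    def step(self, obs, prev_action, prev_reward, hidden):
        if prev_action is not None:
            hidden[prev_action][1] += prev_reward
        untried = [a for a in hidden if hidden[a][0] == 0]
        # Try each arm once, then commit to the best empirical mean.
        action = untried[0] if untried else max(
            hidden, key=lambda a: hidden[a][1] / hidden[a][0])
        hidden[action][0] += 1
        return action, hidden

def meta_trial(env, learner, episodes):
    """One trial: several episodes in the same MDP. The hidden state is
    deliberately NOT reset between episodes, so the policy adapts."""
    hidden = learner.initial_hidden()
    prev_action, prev_reward, action = None, 0.0, None
    for _ in range(episodes):
        env.reset()
        action, hidden = learner.step(None, prev_action, prev_reward, hidden)
        _, reward, _ = env.step(action)
        prev_action, prev_reward = action, reward
    return action
```

After a few episodes the carried-over state identifies the rewarding arm; in the real method the outer ("slow") RL loop trains the RNN weights so that this within-trial adaptation emerges.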
Representation learning seeks to expose certain aspects of observed data in a learned representation that is amenable to downstream tasks such as classification.
This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner.
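InfoGAN augments the standard GAN objective with a variational lower bound on the mutual information between a latent code c and the generated sample, estimated by an auxiliary network Q. For a categorical code this term reduces to Q's cross-entropy on the sampled codes; a minimal sketch of just that regularizer (function name and inputs are illustrative):

```python
import math

def info_reg(log_q, codes):
    """Negated variational mutual-information term: the average
    log-likelihood log Q(c | G(z, c)) of recovering the categorical code
    that generated each sample. log_q[i][k] = log Q(c = k | x_i);
    codes[i] is the code used to generate sample i. The result is added
    (with a weight lambda) to the generator/Q loss to be minimized."""
    n = len(codes)
    return -sum(log_q[i][c] for i, c in enumerate(codes)) / n
```

Minimizing this term forces generated samples to remain predictive of the codes that produced them, which is what drives the codes toward disentangled factors of variation.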
Ranked #3 on Image Generation on Stanford Dogs
While there are methods with optimality guarantees in the setting of discrete state and action spaces, these methods cannot be applied in high-dimensional deep RL scenarios.
Recently, researchers have made significant progress in combining advances in deep learning for feature representation with reinforcement learning.
Ranked #1 on Continuous Control on Inverted Pendulum
Our method uses a deep spatial autoencoder to acquire a set of feature points that describe the environment for the current task, such as the positions of objects, and then learns a motion skill with these feature points using an efficient reinforcement learning method based on local linear models.
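The feature points such a spatial autoencoder extracts are typically computed with a spatial soft-argmax: a softmax over each activation map followed by the expected (x, y) pixel coordinate. A minimal pure-Python sketch of that operation (real implementations work on batched tensors and subtract the per-map maximum before exponentiating for numerical stability):

```python
import math

def spatial_softmax(feature_map):
    """Spatial soft-argmax: turn one HxW activation map into an expected
    (x, y) feature point in [0, 1]^2 - the kind of low-dimensional state
    (e.g. an object position) the controller then operates on."""
    h, w = len(feature_map), len(feature_map[0])
    exps = [[math.exp(v) for v in row] for row in feature_map]
    z = sum(sum(row) for row in exps)
    ex = sum(exps[i][j] * (j / (w - 1))
             for i in range(h) for j in range(w)) / z
    ey = sum(exps[i][j] * (i / (h - 1))
             for i in range(h) for j in range(w)) / z
    return ex, ey
```

A sharp peak in the activation map yields a feature point at the peak's normalized coordinates, so each map learns to track one salient image location.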