Quantum many-body physics simulation has an important impact on our understanding of fundamental science, with applications to quantum materials design and quantum technology.
Studying the dynamics of open quantum systems can enable breakthroughs both in fundamental physics and in applications to quantum engineering and quantum computation.
Quantum optimization, a key application of quantum computing, has traditionally been stymied by the cost of gradient calculations, which grows linearly with the number of parameters.
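To make that scaling concrete, here is a minimal sketch of the parameter-shift rule, a standard way gradients are measured on quantum hardware: every parameter needs two extra circuit evaluations, so the total gradient cost grows linearly with the parameter count. The `circuit_expectation` function below is a toy classical stand-in, not a real circuit.

```python
import numpy as np

def circuit_expectation(params):
    # Toy classical stand-in for a variational circuit's measured expectation.
    return np.sum(np.sin(params))

def parameter_shift_gradient(params, s=np.pi / 2):
    # Parameter-shift rule: each partial derivative costs two extra circuit
    # runs, so gradient cost grows linearly with the number of parameters.
    grad = np.zeros_like(params)
    for i in range(len(params)):
        plus, minus = params.copy(), params.copy()
        plus[i] += s
        minus[i] -= s
        grad[i] = (circuit_expectation(plus) - circuit_expectation(minus)) / (2 * np.sin(s))
    return grad

p = np.random.randn(5)
print(parameter_shift_gradient(p))   # ~cos(p), using 2 * len(p) circuit evaluations
```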
This paper introduces a novel machine learning optimizer called LODO, which meta-learns the best preconditioner online during optimization.
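As a rough illustration of the general idea (not the paper's actual LODO algorithm), the sketch below meta-learns a diagonal preconditioner online via hypergradient descent on a toy ill-conditioned quadratic; the meta learning rate and the problem itself are illustrative choices.

```python
import numpy as np

A = np.diag([1.0, 100.0])            # toy ill-conditioned quadratic loss

def loss(theta):
    return 0.5 * theta @ A @ theta

def grad(theta):
    return A @ theta

theta = np.array([1.0, 1.0])
p = np.full_like(theta, 0.01)        # diagonal preconditioner, learned online
meta_lr = 1e-4                       # hypothetical meta learning rate

g_prev = grad(theta)
for step in range(200):
    theta = theta - p * g_prev       # preconditioned gradient step
    g = grad(theta)
    # Hypergradient of the loss w.r.t. the preconditioner:
    # dL/dp = (dL/dtheta) * (dtheta/dp) = g * (-g_prev), elementwise.
    p = p + meta_lr * g * g_prev     # descend the hypergradient
    g_prev = g

print(loss(theta))
```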
We test this new approach on a variety of physical systems and demonstrate that our method is able to both identify the number of conserved quantities and extract their values.
Symbolic regression is a machine learning technique that can learn the governing formulas of data and thus has the potential to transform scientific discovery.
We propose DiffCSE, an unsupervised contrastive learning framework for learning sentence embeddings.
Topological materials exhibit unconventional electronic properties that make them attractive for both basic science and next-generation technological applications.
We present a framework for the end-to-end optimization of metasurface imaging systems that reconstruct targets using compressed sensing, a technique for solving underdetermined imaging problems when the target object exhibits sparsity (i.e., the object can be described by a small number of non-zero values, but the positions of these values are unknown).
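For a concrete picture of compressed sensing, the sketch below recovers a 4-sparse signal from 40 random measurements of a 100-dimensional object using ISTA (iterative soft-thresholding) on the L1-regularized least-squares problem; the sizes and regularization weight are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 40, 100, 4                      # 40 measurements, 100 unknowns, 4 nonzeros
A = rng.normal(size=(n, m)) / np.sqrt(n)
x_true = np.zeros(m)
x_true[rng.choice(m, k, replace=False)] = rng.normal(size=k)
y = A @ x_true                            # underdetermined: fewer equations than unknowns

# ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1
lam, L = 0.01, np.linalg.norm(A, 2) ** 2  # L = Lipschitz constant of the gradient
x = np.zeros(m)
for _ in range(2000):
    x = x - (A.T @ (A @ x - y)) / L                          # gradient step on data term
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)    # soft threshold

print(np.linalg.norm(x - x_true))         # small: sparsity compensates for the missing equations
```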
In recent years, emerging fields such as meta-learning and self-supervised learning have been closing the gap between proof-of-concept results and real-life applications of machine learning by extending deep learning to the semi-supervised and few-shot domains.
In state-of-the-art self-supervised learning (SSL), pre-training produces semantically rich representations by encouraging them to be invariant under meaningful transformations prescribed by human knowledge.
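As a bare-bones illustration of that invariance objective (stripped of the collapse-prevention machinery real SSL methods add), one can penalize disagreement between the representations of two augmented views of the same input:

```python
import numpy as np

def invariance_loss(z1, z2):
    # Cosine-similarity agreement between two views' representations;
    # a minimal stand-in for the invariance term, ignoring the
    # collapse-prevention terms real SSL objectives include.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    return -np.mean(np.sum(z1 * z2, axis=1))

z1 = np.random.randn(32, 16)                 # representations of view 1 (batch of 32)
z2 = z1 + 0.1 * np.random.randn(32, 16)      # representations of view 2
print(invariance_loss(z1, z2))               # close to -1 when the views agree
```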
Identifying the governing equations of a nonlinear dynamical system is key to both understanding the physical features of the system and constructing an accurate model of the dynamics that generalizes well beyond the available data.
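A common instantiation of this problem, in the spirit of SINDy rather than any particular paper above, is sparse regression over a library of candidate terms; the toy system, library, and pruning threshold below are illustrative.

```python
import numpy as np

# Toy data from a known system, dx/dt = x - x^2 (logistic growth)
x = np.linspace(0.1, 0.9, 50)
dxdt = x - x**2

# Candidate-term library and sequential thresholded least squares
library = np.column_stack([np.ones_like(x), x, x**2, x**3])
names = ["1", "x", "x^2", "x^3"]

xi = np.linalg.lstsq(library, dxdt, rcond=None)[0]
for _ in range(10):                           # iteratively prune small coefficients
    small = np.abs(xi) < 0.05
    xi[small] = 0.0
    big = ~small
    xi[big] = np.linalg.lstsq(library[:, big], dxdt, rcond=None)[0]

print({n: c for n, c in zip(names, xi) if c != 0.0})   # ~ {'x': 1.0, 'x^2': -1.0}
```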
Bayesian optimization (BO) is a popular paradigm for global optimization of expensive black-box functions, but there are many domains where the function is not a complete black box.
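For reference, a minimal BO loop: fit a Gaussian-process surrogate to the evaluations so far, then pick the next query by optimizing an acquisition function (here a lower confidence bound over a 1-D grid; the kernel, toy objective, and hyperparameters are all illustrative).

```python
import numpy as np

def f(x):                                    # toy stand-in for the expensive black box
    return np.sin(3 * x) + 0.5 * x

def rbf(a, b, ls=0.3):                       # squared-exponential kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

Xs = np.linspace(0, 2, 200)                  # candidate grid over the search domain
X = np.array([0.5, 1.5])                     # initial evaluations
y = f(X)

for _ in range(8):
    K = rbf(X, X) + 1e-6 * np.eye(len(X))    # GP posterior on observations so far
    Kinv = np.linalg.inv(K)
    ks = rbf(Xs, X)
    mu = ks @ Kinv @ y
    var = 1.0 - np.einsum("ij,jk,ik->i", ks, Kinv, ks)
    lcb = mu - 2.0 * np.sqrt(np.maximum(var, 0.0))  # lower confidence bound (minimizing)
    x_next = Xs[np.argmin(lcb)]
    X, y = np.append(X, x_next), np.append(y, f(x_next))

print(X[np.argmin(y)], y.min())              # best point found
```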
Despite interest in their potential applications as sources of quantum light, DVEs are generally very weak, providing many opportunities for enhancement through modern techniques in nanophotonics, for example by using media that support excitations such as plasmon and phonon polaritons.
The attention mechanism is a key component of the neural revolution in Natural Language Processing (NLP).
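The core of that mechanism is scaled dot-product attention, softmax(QK^T / sqrt(d)) V:

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V

Q = np.random.randn(4, 8)    # 4 query positions, dimension 8
K = np.random.randn(6, 8)    # 6 key/value positions
V = np.random.randn(6, 8)
print(attention(Q, K, V).shape)   # (4, 8): each query gets a weighted mix of values
```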
We believe that our rethinking of the Wasserstein-Procrustes problem could enable further research, thus helping to develop better algorithms for aligning word embeddings across languages.
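For context, the Procrustes half of that problem has a classical closed-form solution: the orthogonal map minimizing ||WX - Y||_F is W = UV^T, where U S V^T is the SVD of Y X^T (the Wasserstein part alternates this step with an optimal-transport matching of the words). A sketch with synthetic embeddings:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 5, 300
X = rng.normal(size=(d, n))                        # "source" embeddings (columns)
W_true, _ = np.linalg.qr(rng.normal(size=(d, d)))  # hidden rotation
Y = W_true @ X + 0.01 * rng.normal(size=(d, n))    # noisy "target" embeddings

# Orthogonal Procrustes: argmin over orthogonal W of ||WX - Y||_F
U, _, Vt = np.linalg.svd(Y @ X.T)
W = U @ Vt
print(np.linalg.norm(W - W_true))                  # ~0: the rotation is recovered
```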
We propose contextualizers: generalizable prototypes that adapt to given examples and play a larger role in classification for gradient-based models.
Deep learning owes much of its success to the astonishing expressiveness of neural networks.
We find that the EQL-based architecture can extrapolate quite well outside of the training data set compared to a standard neural network-based architecture, paving the way for deep learning to be applied in scientific exploration and discovery.
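For readers unfamiliar with EQL-style architectures: instead of a uniform nonlinearity, each layer mixes elementary symbolic units (identity, sine, cosine, pairwise products), so a trained network can be read off as a formula, which is what supports extrapolation. The unit mix below is an illustrative sketch, not the exact architecture used.

```python
import numpy as np

def eql_layer(x, W, b):
    # One EQL-style layer: a linear map followed by a mix of symbolic
    # activations (identity, sin, cos) plus a pairwise product unit.
    z = x @ W + b                      # 4 pre-activations per example here
    return np.concatenate([
        z[:, 0:1],                     # identity unit
        np.sin(z[:, 1:2]),             # sine unit
        np.cos(z[:, 2:3]),             # cosine unit
        z[:, 2:3] * z[:, 3:4],         # multiplication unit
    ], axis=1)

x = np.random.randn(10, 3)             # batch of 10 inputs, 3 features
W, b = np.random.randn(3, 4), np.zeros(4)
print(eql_layer(x, W, b).shape)        # (10, 4)
```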
Our method for discovering interpretable latent parameters in spatiotemporal systems will allow us to better analyze and understand real-world phenomena and datasets, which often contain unknown, uncontrolled variables that alter the system dynamics and cause varying behaviors that are difficult to disentangle.
We present a novel recurrent neural network (RNN)-based model that combines the remembering ability of unitary RNNs with the ability of gated RNNs to effectively forget redundant or irrelevant information in its memory.
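A sketch of that combination, in the spirit of gated orthogonal recurrent units (not necessarily the paper's exact cell; the gate here depends only on the input, for brevity):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gated_orthogonal_step(h, x, U, W, Wg, bg):
    # U is orthogonal, so the recurrent state is rotated without shrinking or
    # blowing up (norm preservation = "remembering"); the gate z decides, per
    # unit, whether to keep the old state or accept the new candidate
    # ("forgetting" what is redundant).
    z = sigmoid(x @ Wg + bg)               # update gate
    h_cand = np.tanh(h @ U + x @ W)        # candidate state, orthogonal recurrence
    return z * h + (1.0 - z) * h_cand

d_h, d_x = 8, 3
U, _ = np.linalg.qr(np.random.randn(d_h, d_h))   # orthogonal recurrent matrix
W, Wg = np.random.randn(d_x, d_h), np.random.randn(d_x, d_h)
h, bg = np.zeros(d_h), np.zeros(d_h)
for x in np.random.randn(20, d_x):               # run over a length-20 sequence
    h = gated_orthogonal_step(h, x, U, W, Wg, bg)
print(h)
```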
Using unitary (instead of general) matrices in artificial neural networks (ANNs) is a promising way to solve the gradient explosion/vanishing problem, as well as to enable ANNs to learn long-term correlations in the data.
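One common way to keep a recurrent matrix exactly orthogonal/unitary during training is to parameterize it as the matrix exponential of a skew-symmetric matrix and optimize the skew-symmetric entries; the sketch below shows that parameterization, which is one option among several and not necessarily the one used in these papers.

```python
import numpy as np
from scipy.linalg import expm

# Any skew-symmetric A yields an orthogonal matrix exp(A), so optimizing the
# free entries of A searches directly over orthogonal recurrent matrices.
n = 6
A = np.random.randn(n, n)
A = A - A.T                               # skew-symmetric: A^T = -A
Q = expm(A)                               # orthogonal: Q^T Q = I
print(np.allclose(Q.T @ Q, np.eye(n)))    # True: Q preserves norms, so repeated
                                          # application neither explodes nor vanishes
```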