Search Results for author: Marin Soljačić

Found 18 papers, 11 papers with code

Deep Learning and Symbolic Regression for Discovering Parametric Equations

no code implementations • 1 Jul 2022 • Michael Zhang, Samuel Kim, Peter Y. Lu, Marin Soljačić

Symbolic regression is a machine learning technique that can learn the governing formulas of data and thus has the potential to transform scientific discovery.

BIG-bench Machine Learning • Symbolic Regression
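
The snippet below is a minimal, self-contained sketch of the symbolic-regression idea, not the paper's parametric-equation method: it enumerates pairs of candidate primitive terms, fits their coefficients by least squares, and keeps the best-fitting formula. The toy target y = 3.2·sin(x) + 0.5·x, the primitive library, and all names are illustrative assumptions.

```python
import itertools
import numpy as np

# Hypothetical toy data: y = 3.2*sin(x) + 0.5*x, with the formula unknown to the search.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=200)
y = 3.2 * np.sin(x) + 0.5 * x + 0.01 * rng.standard_normal(200)

# Candidate primitive terms that the search may combine linearly.
primitives = {
    "x": x,
    "x^2": x ** 2,
    "x^3": x ** 3,
    "sin(x)": np.sin(x),
    "cos(x)": np.cos(x),
    "exp(x)": np.exp(x),
}

best = None
# Enumerate pairs of primitives, fit their coefficients by least squares,
# and keep the pair with the lowest mean squared error.
for names in itertools.combinations(primitives, 2):
    A = np.column_stack([primitives[n] for n in names])
    coef = np.linalg.lstsq(A, y, rcond=None)[0]
    err = np.mean((A @ coef - y) ** 2)
    if best is None or err < best[0]:
        best = (err, names, coef)

_, names, coef = best
print("recovered: y ≈ " + " + ".join(f"{c:.2f}*{n}" for c, n in zip(coef, names)))
```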

Topogivity: A Machine-Learned Chemical Rule for Discovering Topological Materials

no code implementations • 10 Feb 2022 • Andrew Ma, Yang Zhang, Thomas Christensen, Hoi Chun Po, Li Jing, Liang Fu, Marin Soljačić

Topological materials present unconventional electronic properties that make them attractive for both basic science and next-generation technological applications.

End-to-End Optimization of Metasurfaces for Imaging with Compressed Sensing

no code implementations • 28 Jan 2022 • Gaurav Arya, William F. Li, Charles Roques-Carmes, Marin Soljačić, Steven G. Johnson, Zin Lin

We present a method for the end-to-end optimization of computational imaging systems that reconstruct targets using compressed sensing.
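
For the reconstruction side only (the metasurface optimization itself is not shown), here is a minimal compressed-sensing sketch: a sparse signal is recovered from far fewer random linear measurements than its length via iterative soft-thresholding (ISTA). The dimensions, regularization weight, and iteration count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 200, 60, 5                                  # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)          # random sensing matrix
y = A @ x_true                                        # compressed measurements

# ISTA: iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1
lam, step = 0.01, 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)                          # gradient of the quadratic term
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)   # soft-threshold

print("relative reconstruction error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```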

Meta-Learning and Self-Supervised Pretraining for Real World Image Translation

no code implementations • 22 Dec 2021 • Ileana Rugina, Rumen Dangovski, Mark Veillette, Pooya Khorrami, Brian Cheung, Olga Simek, Marin Soljačić

In recent years, emerging fields such as meta-learning and self-supervised learning have been closing the gap between proof-of-concept results and real-life applications of machine learning by extending deep learning to the semi-supervised and few-shot domains.

Image-to-Image Translation • Meta-Learning • +2

Equivariant Contrastive Learning

2 code implementations • 28 Oct 2021 • Rumen Dangovski, Li Jing, Charlotte Loh, Seungwook Han, Akash Srivastava, Brian Cheung, Pulkit Agrawal, Marin Soljačić

In state-of-the-art self-supervised learning (SSL), pre-training produces semantically good representations by encouraging them to be invariant under meaningful transformations prescribed from human knowledge.

Contrastive Learning • Self-Supervised Learning
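
The abstract describes the standard invariance-based contrastive objective; the paper's contribution adds an equivariance term, which is not shown here. Below is a minimal NumPy sketch of the plain InfoNCE loss, with the batch size, embedding dimension, and temperature chosen arbitrarily.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss on two batches of embeddings, where z1[i] and z2[i]
    are two augmented views of the same input (the positive pair)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                          # cosine similarities
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_probs).mean()                         # positives on the diagonal

rng = np.random.default_rng(0)
z = rng.standard_normal((8, 16))
# Nearly identical views give a low loss; independent views give a high loss.
print(info_nce(z + 0.01 * rng.standard_normal((8, 16)), z))
print(info_nce(rng.standard_normal((8, 16)), z))
```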

Discovering Sparse Interpretable Dynamics from Partial Observations

1 code implementation • 22 Jul 2021 • Peter Y. Lu, Joan Ariño, Marin Soljačić

Identifying the governing equations of a nonlinear dynamical system is key to both understanding the physical features of the system and constructing an accurate model of the dynamics that generalizes well beyond the available data.
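
A minimal sketch of the classical, fully observed version of this task (SINDy-style sparse regression over a library of candidate terms); the paper addresses the harder partially observed setting, which this sketch does not attempt. The toy damped oscillator, library, and threshold are assumptions.

```python
import numpy as np

# Toy data from a known ODE (damped oscillator): x' = v, v' = -x - 0.1*v.
dt, T = 0.01, 10.0
t = np.arange(0, T, dt)
X = np.zeros((len(t), 2))
X[0] = [1.0, 0.0]
for i in range(len(t) - 1):                         # forward-Euler simulation
    x, v = X[i]
    X[i + 1] = [x + dt * v, v + dt * (-x - 0.1 * v)]

dXdt = np.gradient(X, dt, axis=0)                   # estimated time derivatives

# Candidate-term library and sequentially thresholded least squares.
library = np.column_stack([np.ones(len(t)), X[:, 0], X[:, 1],
                           X[:, 0] ** 2, X[:, 0] * X[:, 1], X[:, 1] ** 2])
names = ["1", "x", "v", "x^2", "x*v", "v^2"]
Xi = np.linalg.lstsq(library, dXdt, rcond=None)[0]
for _ in range(10):                                 # prune small coefficients, refit the rest
    small = np.abs(Xi) < 0.05
    Xi[small] = 0.0
    for j in range(2):
        big = ~small[:, j]
        Xi[big, j] = np.linalg.lstsq(library[:, big], dXdt[:, j], rcond=None)[0]

for j, lhs in enumerate(["x'", "v'"]):
    terms = [f"{Xi[i, j]:.2f}*{names[i]}" for i in range(len(names)) if Xi[i, j] != 0]
    print(lhs, "=", " + ".join(terms))
```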

Deep Learning for Bayesian Optimization of Scientific Problems with High-Dimensional Structure

1 code implementation • 23 Apr 2021 • Samuel Kim, Peter Y. Lu, Charlotte Loh, Jamie Smith, Jasper Snoek, Marin Soljačić

Bayesian optimization (BO) is a popular paradigm for global optimization of expensive black-box functions, but there are many domains where the function is not completely a black-box.

Gaussian Processes
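
A minimal sketch of a generic GP-based Bayesian-optimization loop with an expected-improvement acquisition; the paper's point is to replace the GP surrogate with neural networks that exploit known problem structure, which is not shown. The 1-D objective, search range, and hyperparameters are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                         # hypothetical expensive black-box function
    return -np.sin(3 * x) - x ** 2 + 0.7 * x

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(3, 1))       # a few random initial evaluations
y = objective(X).ravel()
grid = np.linspace(-2, 2, 400).reshape(-1, 1)

for _ in range(15):                       # fit surrogate, maximize acquisition, evaluate
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.max()
    z = (mu - best) / (sigma + 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)      # expected improvement
    x_next = grid[np.argmax(ei)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("best x found:", X[np.argmax(y)], "value:", y.max())
```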

Casimir light in dispersive nanophotonics

no code implementations • 4 Jan 2021 • Jamison Sloan, Nicholas Rivera, John D. Joannopoulos, Marin Soljačić

Despite interest in their potential applications as sources of quantum light, DVEs are generally very weak, providing many opportunities for enhancement through modern techniques in nanophotonics, such as using media that support excitations like plasmon and phonon polaritons.

Optics

On a Novel Application of Wasserstein-Procrustes for Unsupervised Cross-Lingual Learning

1 code implementation • 18 Jul 2020 • Guillem Ramírez, Rumen Dangovski, Preslav Nakov, Marin Soljačić

We believe that our rethinking of the Wasserstein-Procrustes problem could enable further research, thus helping to develop better algorithms for aligning word embeddings across languages.

Natural Language Processing • Word Embeddings
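
For context, a minimal sketch of the orthogonal-Procrustes step only (closed form via SVD). The Wasserstein part, which jointly infers which words correspond across languages via optimal transport, is assumed already solved here, and the synthetic embeddings are illustrative.

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal Procrustes: the rotation W minimizing ||X W - Y||_F
    for two sets of already-paired word embeddings is U V^T,
    where X^T Y = U S V^T."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 50))                       # "source language" embeddings
true_W, _ = np.linalg.qr(rng.standard_normal((50, 50)))   # hidden orthogonal map
Y = X @ true_W + 0.01 * rng.standard_normal((1000, 50))   # noisy "target" embeddings

W = procrustes_align(X, Y)
print("relative alignment error:", np.linalg.norm(X @ W - Y) / np.linalg.norm(Y))
```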

Contextualizing Enhances Gradient Based Meta Learning

no code implementations • 17 Jul 2020 • Evan Vogelbaum, Rumen Dangovski, Li Jing, Marin Soljačić

We propose the implementation of contextualizers, which are generalizable prototypes that adapt to given examples and play a larger role in classification for gradient-based models.

Few-Shot Learning
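
A minimal sketch of the baseline this work builds on: prototypical-network-style few-shot classification with static class prototypes (mean support embeddings). The paper's contextualizers instead adapt the prototypes to the given examples; that adaptation is not implemented here, and the synthetic clusters are assumptions.

```python
import numpy as np

def prototype_classify(support_x, support_y, query_x):
    """Represent each class by the mean of its support embeddings and
    assign each query to the nearest prototype (squared Euclidean distance)."""
    classes = np.unique(support_y)
    prototypes = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    dists = ((query_x[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return classes[np.argmin(dists, axis=1)]

rng = np.random.default_rng(0)
support_x = np.concatenate([rng.normal(0, 1, (5, 8)), rng.normal(3, 1, (5, 8))])
support_y = np.array([0] * 5 + [1] * 5)
query_x = rng.normal(3, 1, (4, 8))          # drawn from class 1's cluster
print(prototype_classify(support_x, support_y, query_x))   # expected: [1 1 1 1]
```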

Integration of Neural Network-Based Symbolic Regression in Deep Learning for Scientific Discovery

1 code implementation • 10 Dec 2019 • Samuel Kim, Peter Y. Lu, Srijon Mukherjee, Michael Gilbert, Li Jing, Vladimir Čeperić, Marin Soljačić

We find that the EQL-based architecture can extrapolate quite well outside of the training data set compared to a standard neural network-based architecture, paving the way for deep learning to be applied in scientific exploration and discovery.

Arithmetic • Explainable Models • +2
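
A heavily simplified PyTorch sketch of the EQL idea referenced above: hidden units apply symbolic activations (identity, sine, square, pairwise product) to a linear map, so that after sparse training the network can be read off as a closed-form expression. The layer sizes and activation choices are assumptions and omit the paper's multi-layer, regularized setup.

```python
import torch
import torch.nn as nn

class SymbolicLayer(nn.Module):
    """A simplified EQL-style layer: a linear map followed by a fixed bank of
    symbolic activations, so the learned weights correspond to formula terms."""
    def __init__(self, in_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, 5)   # 3 unary units + 2 inputs to a product unit

    def forward(self, x):
        h = self.linear(x)
        return torch.cat([h[:, 0:1],                 # identity
                          torch.sin(h[:, 1:2]),      # sine
                          h[:, 2:3] ** 2,            # square
                          h[:, 3:4] * h[:, 4:5]],    # product of two units
                         dim=1)

model = nn.Sequential(SymbolicLayer(in_dim=2), nn.Linear(4, 1))
x = torch.randn(16, 2)
print(model(x).shape)   # torch.Size([16, 1]); train with an L1 penalty to sparsify weights
```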

Extracting Interpretable Physical Parameters from Spatiotemporal Systems using Unsupervised Learning

1 code implementation • 13 Jul 2019 • Peter Y. Lu, Samuel Kim, Marin Soljačić

Our method for discovering interpretable latent parameters in spatiotemporal systems will allow us to better analyze and understand real-world phenomena and datasets, which often have unknown and uncontrolled variables that alter the system dynamics and cause varying behaviors that are difficult to disentangle.

Gated Orthogonal Recurrent Units: On Learning to Forget

1 code implementation • 8 Jun 2017 • Li Jing, Caglar Gulcehre, John Peurifoy, Yichen Shen, Max Tegmark, Marin Soljačić, Yoshua Bengio

We present a novel recurrent neural network (RNN) based model that combines the remembering ability of unitary RNNs with the ability of gated RNNs to effectively forget redundant/irrelevant information in its memory.

Ranked #7 on Question Answering on bAbI (Accuracy (trained on 1k) metric)

Denoising • Question Answering
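
A rough sketch of the combination described above: an orthogonal recurrent transform (which preserves the state's norm) combined with a sigmoid update gate that lets the state forget. This is not the paper's exact GORU cell, and all matrices here are random placeholders.

```python
import numpy as np

def gated_orthogonal_step(h, x, U_orth, W_x, W_g, b_g):
    """One step of a simplified gated recurrent cell with an orthogonal
    state transition and an update gate controlling how much is forgotten."""
    candidate = np.tanh(U_orth @ h + W_x @ x)
    gate = 1.0 / (1.0 + np.exp(-(W_g @ x + b_g)))     # sigmoid update gate
    return gate * h + (1.0 - gate) * candidate

rng = np.random.default_rng(0)
d, d_in = 8, 4
U_orth, _ = np.linalg.qr(rng.standard_normal((d, d)))  # orthogonal recurrent matrix
W_x = rng.standard_normal((d, d_in)) * 0.1              # input projection
W_g, b_g = rng.standard_normal((d, d_in)) * 0.1, np.zeros(d)

h = np.zeros(d)
for _ in range(20):                                     # run on a random input sequence
    h = gated_orthogonal_step(h, rng.standard_normal(d_in), U_orth, W_x, W_g, b_g)
print(h)
```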

Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNNs

6 code implementations • ICML 2017 • Li Jing, Yichen Shen, Tena Dubček, John Peurifoy, Scott Skirlo, Yann LeCun, Max Tegmark, Marin Soljačić

Using unitary (instead of general) matrices in artificial neural networks (ANNs) is a promising way to solve the gradient explosion/vanishing problem, as well as to enable ANNs to learn long-term correlations in the data.

Permuted-MNIST
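
A small numerical illustration of the motivation stated above: an orthogonal (real unitary) recurrent matrix preserves the hidden-state norm over many steps, whereas a generic matrix with spectral radius below one drives it toward zero (or blows it up if the radius exceeds one). The dimensions and scaling are arbitrary, and this does not implement the paper's tunable EUNN parametrization.

```python
import numpy as np

rng = np.random.default_rng(0)
d, steps = 64, 200
A = 0.9 * rng.standard_normal((d, d)) / np.sqrt(d)   # generic matrix, spectral radius ~0.9
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))     # orthogonal (real unitary) matrix

h0 = rng.standard_normal(d)
h_generic, h_orth = h0.copy(), h0.copy()
for _ in range(steps):
    h_generic = A @ h_generic    # norm decays exponentially
    h_orth = Q @ h_orth          # norm is preserved exactly

print("initial norm:            ", np.linalg.norm(h0))
print("generic after 200 steps:  ", np.linalg.norm(h_generic))
print("orthogonal after 200 steps:", np.linalg.norm(h_orth))
```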

Binary matrices of optimal autocorrelations as alignment marks

no code implementations • 29 Aug 2014 • Scott A. Skirlo, Ling Lu, Marin Soljačić

We define a new class of binary matrices by maximizing the peak-sidelobe distances in the aperiodic autocorrelations.
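
For reference, a minimal sketch that evaluates the figure of merit described above (the distance between the autocorrelation peak and the largest sidelobe) for a random ±1 matrix; the paper constructs matrices that maximize this quantity, which the sketch does not attempt. scipy.signal.correlate2d computes the full aperiodic 2-D autocorrelation.

```python
import numpy as np
from scipy.signal import correlate2d

def peak_sidelobe_distance(M):
    """Full aperiodic 2-D autocorrelation of a +/-1 matrix; returns the central
    peak (equal to the number of entries) and the largest sidelobe magnitude."""
    corr = correlate2d(M, M, mode="full")
    center = (corr.shape[0] // 2, corr.shape[1] // 2)
    peak = corr[center]
    corr[center] = 0                      # mask the peak, leaving only sidelobes
    return peak, np.abs(corr).max()

rng = np.random.default_rng(0)
M = rng.choice([-1, 1], size=(8, 8)).astype(float)   # a random matrix for comparison
peak, psl = peak_sidelobe_distance(M)
print("peak:", peak, "largest sidelobe:", psl, "peak-sidelobe distance:", peak - psl)
```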
