Search Results for author: Marin Soljacic

Found 11 papers, 5 papers with code

Nanophotonic Particle Simulation and Inverse Design Using Artificial Neural Networks

1 code implementation18 Oct 2017 John Peurifoy, Yichen Shen, Li Jing, Yi Yang, Fidel Cano-Renteria, Brendan Delacy, Max Tegmark, John D. Joannopoulos, Marin Soljacic

We propose a method to use artificial neural networks to approximate light scattering by multilayer nanoparticles.

Computational Physics Applied Physics Optics
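
As an illustration of the idea in this abstract, here is a minimal sketch of a neural-network surrogate for scattering: an MLP mapping shell thicknesses to a discretized spectrum. The layer counts, widths, and thickness ranges below are assumptions, not the authors' published architecture.

```python
# Hedged sketch: an MLP surrogate from nanoparticle shell thicknesses to a
# discretized scattering spectrum (hyperparameters are illustrative only).
import torch
import torch.nn as nn

N_SHELLS = 8         # assumed number of dielectric shells
N_WAVELENGTHS = 200  # assumed spectrum discretization

surrogate = nn.Sequential(
    nn.Linear(N_SHELLS, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_WAVELENGTHS),  # predicted cross-section per wavelength
)

thicknesses = torch.rand(32, N_SHELLS) * 50 + 30  # dummy batch, 30-80 nm shells
spectrum = surrogate(thicknesses)  # train with MSE against exact (e.g. Mie) solutions
```

Once trained, such a surrogate is differentiable, which is what makes gradient-based inverse design of the nanoparticle feasible.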

Rotational Unit of Memory

2 code implementations ICLR 2018 Rumen Dangovski, Li Jing, Marin Soljacic

We evaluate our model on synthetic memorization, question answering and language modeling tasks.

Ranked #5 on Question Answering on bAbI (Accuracy, trained on 1k)

Language Modelling Machine Translation +4
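
The core of this model is a norm-preserving rotation of the hidden state. Below is a simplified sketch of such a rotation operation, not the full published RUM cell: it rotates the hidden state in the plane spanned by two vectors (stand-ins for the unit's embedded input and target).

```python
# Simplified sketch of a plane-rotation memory update (RUM-like, not the
# exact published cell): build the orthogonal matrix rotating a toward b
# in their shared 2-D plane, then apply it to the hidden state.
import torch

def rotation_operator(a, b):
    u = a / a.norm()
    v = b - (b @ u) * u
    v = v / v.norm()
    cos = (a @ b) / (a.norm() * b.norm())
    sin = torch.sqrt(torch.clamp(1 - cos**2, min=0.0))
    d = a.shape[0]
    # Identity outside span(u, v); a 2-D rotation inside it.
    R = torch.eye(d) - torch.outer(u, u) - torch.outer(v, v)
    R += cos * (torch.outer(u, u) + torch.outer(v, v))
    R += sin * (torch.outer(v, u) - torch.outer(u, v))
    return R

h = torch.randn(64)                                   # hidden state
R = rotation_operator(torch.randn(64), torch.randn(64))
h_next = R @ h                                        # norm-preserving update
```

Because the update is orthogonal, the hidden-state norm is preserved, which helps with the long-range memorization tasks the abstract mentions.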

Migrating Knowledge between Physical Scenarios based on Artificial Neural Networks

no code implementations27 Aug 2018 Yurui Qu, Li Jing, Yichen Shen, Min Qiu, Marin Soljacic

First, we demonstrate that in predicting the transmission of multilayer photonic films, the relative error rate is reduced by 46.8% (26.5%) when the source data comes from 10-layer (8-layer) films and the target data comes from 8-layer (10-layer) films.

Multi-Task Learning
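
A hedged sketch of the transfer-learning recipe the abstract describes: pretrain on one film configuration, then fine-tune on another while reusing the learned layers. The network shape, zero-padding to a common input width, and the choice of which layers to freeze are all assumptions for illustration.

```python
# Sketch: pretrain on 10-layer source films, fine-tune on 8-layer targets,
# freezing the early feature layers (details here are assumptions).
import torch
import torch.nn as nn

def make_net(n_in, n_out=100):
    return nn.Sequential(nn.Linear(n_in, 128), nn.ReLU(),
                         nn.Linear(128, 128), nn.ReLU(),
                         nn.Linear(128, n_out))

MAX_LAYERS = 10
net = make_net(MAX_LAYERS)            # ...pretrain on 10-layer source data...

for p in net[0].parameters():         # freeze the first (feature) layer
    p.requires_grad = False

x_target = torch.zeros(16, MAX_LAYERS)
x_target[:, :8] = torch.rand(16, 8)   # dummy 8-layer thicknesses, zero-padded
y_pred = net(x_target)                # fine-tune remaining layers on target data
```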

WaveletNet: Logarithmic Scale Efficient Convolutional Neural Networks for Edge Devices

no code implementations28 Nov 2018 Li Jing, Rumen Dangovski, Marin Soljacic

We present a logarithmic-scale efficient convolutional neural network architecture for edge devices, named WaveletNet.

General Classification
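
The snippet does not spell out the architecture, so the following is a speculative sketch of one plausible reading of "logarithmic scale": process dyadically downsampled copies of the input in parallel, wavelet-style, and merge. Class name, scale count, and merge rule are all assumptions.

```python
# Speculative sketch of a logarithmic multi-scale conv block (not the
# published WaveletNet): convolve a dyadic pyramid of the input and sum.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LogScaleBlock(nn.Module):
    def __init__(self, ch, n_scales=3):
        super().__init__()
        self.convs = nn.ModuleList(nn.Conv2d(ch, ch, 3, padding=1)
                                   for _ in range(n_scales))

    def forward(self, x):
        outs = []
        for k, conv in enumerate(self.convs):
            xs = F.avg_pool2d(x, 2**k) if k else x        # dyadic (log) scales
            outs.append(F.interpolate(conv(xs), size=x.shape[-2:]))
        return sum(outs)

y = LogScaleBlock(8)(torch.randn(1, 8, 32, 32))
```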

Discovering Dynamical Parameters by Interpreting Echo State Networks

no code implementations NeurIPS Workshop AI4Science 2021 Oreoluwa Alao, Peter Y Lu, Marin Soljacic

Reservoir computing architectures known as echo state networks (ESNs) have been shown to have exceptional predictive capabilities when trained on chaotic systems.
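
For context, a minimal standard echo state network looks like the sketch below (this is the textbook ESN, not the paper's interpretation method): a fixed random reservoir whose only trained part is a ridge-regression readout. The driving signal here is a synthetic stand-in for a chaotic series.

```python
# Minimal standard ESN: fixed random reservoir, ridge readout trained to
# one-step-predict the input series (signal and sizes are illustrative).
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 1000
W_in = rng.uniform(-0.5, 0.5, (N, 1))
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

u = np.sin(0.3 * np.arange(T + 1)) ** 3           # stand-in for a chaotic signal
states = np.zeros((T, N))
h = np.zeros(N)
for t in range(T):
    h = np.tanh(W @ h + W_in @ u[t:t+1])          # reservoir update
    states[t] = h

lam = 1e-6                                         # ridge regularization
W_out = np.linalg.solve(states.T @ states + lam * np.eye(N), states.T @ u[1:])
```

Only `W_out` is trained; the paper's question is what the fixed reservoir dynamics reveal about the underlying system's parameters.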

Equivariant Self-Supervised Learning: Encouraging Equivariance in Representations

no code implementations ICLR 2022 Rumen Dangovski, Li Jing, Charlotte Loh, Seungwook Han, Akash Srivastava, Brian Cheung, Pulkit Agrawal, Marin Soljacic

State-of-the-art self-supervised learning (SSL) pre-training produces semantically good representations by encouraging them to be invariant under meaningful transformations prescribed by human knowledge.

Self-Supervised Learning
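
A hedged sketch of the equivariance idea: alongside the usual invariance objective, add a head that predicts which transformation was applied, so the representation must remain sensitive (equivariant) to it. Four-fold rotation is used here as the example transformation; the encoder and weighting are assumptions.

```python
# Sketch: an auxiliary head predicting the applied four-fold rotation,
# added on top of an invariance-based SSL objective (details assumed).
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
rot_head = nn.Linear(128, 4)        # predicts rotation class {0, 90, 180, 270}

x = torch.randn(8, 3, 32, 32)
k = torch.randint(0, 4, (8,))
x_rot = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                     for img, r in zip(x, k)])
loss_equiv = nn.functional.cross_entropy(rot_head(encoder(x_rot)), k)
# total loss = invariance (contrastive) loss + lambda * loss_equiv
```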

Surrogate- and invariance-boosted contrastive learning for data-scarce applications in science

1 code implementation15 Oct 2021 Charlotte Loh, Thomas Christensen, Rumen Dangovski, Samuel Kim, Marin Soljacic

Deep learning techniques have been increasingly applied to the natural sciences, e.g., for property prediction and optimization or material discovery.

Contrastive Learning Property Prediction
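
For reference, a minimal InfoNCE-style contrastive loss is sketched below. This is the generic contrastive objective, not SIB-CL itself; in the data-scarce setting the abstract describes, positive pairs could come from physics invariances or from cheap surrogate data. Temperature and dimensions are assumptions.

```python
# Generic InfoNCE sketch: matching views sit on the diagonal of the
# similarity matrix and are treated as the correct "class".
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / tau               # pairwise cosine similarities
    labels = torch.arange(z1.shape[0])     # positives on the diagonal
    return F.cross_entropy(logits, labels)

z_a = torch.randn(16, 64)   # embedding of a sample
z_b = torch.randn(16, 64)   # embedding of an invariance-transformed view
loss = info_nce(z_a, z_b)
```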

AI-Assisted Discovery of Quantitative and Formal Models in Social Science

1 code implementation2 Oct 2022 Julia Balla, Sihao Huang, Owen Dugan, Rumen Dangovski, Marin Soljacic

In social science, formal and quantitative models, such as ones describing economic growth and collective action, are used to formulate mechanistic explanations, provide predictions, and uncover questions about observed phenomena.

Counterfactual Sociology +1
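
As an illustration only (not the paper's method): the kind of quantitative model the abstract refers to can be as simple as fitting a closed-form growth law to observations. The logistic model, parameters, and synthetic data below are purely assumed.

```python
# Illustrative sketch: fit a logistic-growth model (a classic economic/
# population model) to synthetic observations with least squares.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, x0):
    return K / (1 + ((K - x0) / x0) * np.exp(-r * t))

t = np.linspace(0, 10, 50)
rng = np.random.default_rng(1)
y = logistic(t, K=1.0, r=0.8, x0=0.05) + 0.01 * rng.normal(size=t.size)

(K, r, x0), _ = curve_fit(logistic, t, y, p0=[1.0, 1.0, 0.1])
```

AI assistance enters when the functional form itself, not just the parameters, must be discovered from data.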

On the Importance of Calibration in Semi-supervised Learning

no code implementations10 Oct 2022 Charlotte Loh, Rumen Dangovski, Shivchander Sudalairaj, Seungwook Han, Ligong Han, Leonid Karlinsky, Marin Soljacic, Akash Srivastava

State-of-the-art (SOTA) semi-supervised learning (SSL) methods have been highly successful in leveraging a mix of labeled and unlabeled data by combining techniques of consistency regularization and pseudo-labeling.
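
A minimal sketch of the pseudo-labeling step the abstract mentions, in the FixMatch style: keep only predictions above a confidence threshold and use them as targets on strongly augmented views. This shows why calibration matters (the threshold is only meaningful if confidences are calibrated); it is not the paper's calibration method, and the threshold and shapes are assumptions.

```python
# Sketch: confidence-thresholded pseudo-labeling on unlabeled data.
import torch
import torch.nn.functional as F

logits_weak = torch.randn(32, 10)        # model output on weakly augmented x
probs = F.softmax(logits_weak, dim=1)
conf, pseudo = probs.max(dim=1)
mask = conf > 0.95                       # keep only confident pseudo-labels

logits_strong = torch.randn(32, 10)      # output on strongly augmented x
loss_u = (F.cross_entropy(logits_strong[mask], pseudo[mask])
          if mask.any() else torch.tensor(0.0))
```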

Multi-Symmetry Ensembles: Improving Diversity and Generalization via Opposing Symmetries

1 code implementation4 Mar 2023 Charlotte Loh, Seungwook Han, Shivchander Sudalairaj, Rumen Dangovski, Kai Xu, Florian Wenzel, Marin Soljacic, Akash Srivastava

In this work, we present Multi-Symmetry Ensembles (MSE), a framework for constructing diverse ensembles by capturing the multiplicity of hypotheses along symmetry axes, which explore the hypothesis space beyond stochastic perturbations of model weights and hyperparameters.

Representation Learning Uncertainty Quantification
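
The ensembling step itself can be sketched as below: average the predictive distributions of members trained under different symmetry hypotheses (e.g., rotation-invariant vs. rotation-equivariant pretraining). Member training is elided, and the stand-in models are assumptions.

```python
# Sketch: average member predictions; disagreement across members gives
# an uncertainty signal (members here are untrained stand-ins).
import torch

def ensemble_predict(members, x):
    probs = torch.stack([m(x).softmax(dim=1) for m in members])
    return probs.mean(dim=0)

members = [torch.nn.Linear(16, 3) for _ in range(4)]  # stand-ins for trained models
preds = ensemble_predict(members, torch.randn(8, 16))
```

The paper's point is that diversity along symmetry axes yields more complementary members than weight or hyperparameter perturbations alone.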

Model Stitching: Looking For Functional Similarity Between Representations

no code implementations20 Mar 2023 Adriano Hernandez, Rumen Dangovski, Peter Y. Lu, Marin Soljacic

Model stitching (Lenc & Vedaldi 2015) is a compelling methodology to compare different neural network representations, because it allows us to measure to what degree they may be interchanged.
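
A minimal model-stitching sketch in the spirit of Lenc & Vedaldi (2015): freeze two trained networks and train only a small "stitch" layer that maps an intermediate representation of network A into the later layers of network B. The toy architectures and the linear stitch are assumptions.

```python
# Sketch: a trainable linear stitch between two frozen networks; high
# stitched accuracy suggests the representations are interchangeable.
import torch
import torch.nn as nn

net_a = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
net_b = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
stitch = nn.Linear(32, 32)            # the only trainable part

def stitched(x):
    with torch.no_grad():
        h = net_a[0](x)               # representation from A's first layer
    return net_b[2](torch.relu(stitch(h)))   # continue through B's head

out = stitched(torch.randn(4, 16))
```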
