Search Results for author: Rumen Dangovski

Found 13 papers, 7 papers with code

Meta-Learning and Self-Supervised Pretraining for Real World Image Translation

no code implementations • 22 Dec 2021 Ileana Rugina, Rumen Dangovski, Mark Veillette, Pooya Khorrami, Brian Cheung, Olga Simek, Marin Soljačić

In recent years, emerging fields such as meta-learning and self-supervised learning have been closing the gap between proof-of-concept results and real-life applications of machine learning by extending deep learning to the semi-supervised and few-shot domains.

Image-to-Image Translation Meta-Learning +2

Equivariant Contrastive Learning

2 code implementations • 28 Oct 2021 Rumen Dangovski, Li Jing, Charlotte Loh, Seungwook Han, Akash Srivastava, Brian Cheung, Pulkit Agrawal, Marin Soljačić

In state-of-the-art self-supervised learning (SSL), pre-training produces semantically good representations by encouraging them to be invariant under meaningful transformations prescribed by human knowledge.

Contrastive Learning Self-Supervised Learning
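
The invariance objective described in the abstract above is typically implemented as an InfoNCE-style contrastive loss over two augmented views of each image. Below is a minimal sketch of that standard loss, not the paper's equivariant extension; the function name and temperature value are illustrative.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of two augmented views of the same batch."""
    z1 = F.normalize(z1, dim=1)                          # project onto the unit sphere
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                   # pairwise cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device) # positives lie on the diagonal
    return F.cross_entropy(logits, targets)              # pull positives together, push negatives apart
```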

Surrogate- and invariance-boosted contrastive learning for data-scarce applications in science

1 code implementation • 15 Oct 2021 Charlotte Loh, Thomas Christensen, Rumen Dangovski, Samuel Kim, Marin Soljačić

Deep learning techniques have been increasingly applied to the natural sciences, e.g., for property prediction and optimization or material discovery.

Contrastive Learning

Equivariant Self-Supervised Learning: Encouraging Equivariance in Representations

no code implementations • ICLR 2022 Rumen Dangovski, Li Jing, Charlotte Loh, Seungwook Han, Akash Srivastava, Brian Cheung, Pulkit Agrawal, Marin Soljačić

In state-of-the-art self-supervised learning (SSL), pre-training produces semantically good representations by encouraging them to be invariant under meaningful transformations prescribed by human knowledge.

Self-Supervised Learning

On a Novel Application of Wasserstein-Procrustes for Unsupervised Cross-Lingual Learning

1 code implementation • 18 Jul 2020 Guillem Ramírez, Rumen Dangovski, Preslav Nakov, Marin Soljačić

We believe that our rethinking of the Wasserstein-Procrustes problem could enable further research, thus helping to develop better algorithms for aligning word embeddings across languages.

Natural Language Processing Word Embeddings
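
The Procrustes half of the Wasserstein-Procrustes problem has a classical closed-form solution via SVD, the standard building block for aligning word embeddings across languages. A minimal sketch, assuming the rows of X and Y are already-matched source/target word vectors; the Wasserstein part, which recovers that matching, is the paper's focus and is not shown here.

```python
import numpy as np

def procrustes_align(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Return the orthogonal W minimizing ||X @ W - Y||_F (orthogonal Procrustes)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)  # SVD of the cross-covariance matrix
    return U @ Vt                      # optimal rotation: X @ W approximates Y
```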

Contextualizing Enhances Gradient Based Meta Learning

no code implementations • 17 Jul 2020 Evan Vogelbaum, Rumen Dangovski, Li Jing, Marin Soljačić

We propose the implementation of contextualizers, which are generalizable prototypes that adapt to given examples and play a larger role in classification for gradient-based models.

Few-Shot Learning
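
For context, the gradient-based few-shot setting above usually starts from class-mean prototypes, which the proposed contextualizers generalize by adapting to the given examples. Below is a minimal sketch of the plain prototype baseline only; the contextualizer mechanism itself is not reproduced, and all names are illustrative.

```python
import torch

def prototype_logits(support: torch.Tensor, labels: torch.Tensor,
                     query: torch.Tensor, n_classes: int) -> torch.Tensor:
    """support: (n_support, dim) and query: (n_query, dim) embedded examples."""
    protos = torch.stack([support[labels == c].mean(dim=0)
                          for c in range(n_classes)])  # one mean prototype per class
    return -torch.cdist(query, protos) ** 2            # nearest-prototype logits
```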

Rotational Unit of Memory: A Novel Representation Unit for RNNs with Scalable Applications

no code implementations • TACL 2019 Rumen Dangovski, Li Jing, Preslav Nakov, Mićo Tatalović, Marin Soljačić

Stacking long short-term memory (LSTM) cells or gated recurrent units (GRUs) as part of a recurrent neural network (RNN) has become a standard approach to solving a number of tasks ranging from language modeling to text summarization.

Language Modelling Text Summarization
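
The standard stacked-RNN baseline the abstract refers to is straightforward to write down. A minimal PyTorch sketch of a multi-layer LSTM language model follows; hyperparameters are illustrative, not taken from the paper.

```python
import torch.nn as nn

class StackedLSTMLM(nn.Module):
    """Stacked-LSTM language model: embedding -> multi-layer LSTM -> token logits."""
    def __init__(self, vocab_size: int, emb_dim: int = 256,
                 hidden_dim: int = 512, num_layers: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden_dim, num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):            # tokens: (batch, seq_len) token ids
        hidden, _ = self.rnn(self.embed(tokens))
        return self.head(hidden)          # next-token logits at each position
```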

WaveletNet: Logarithmic Scale Efficient Convolutional Neural Networks for Edge Devices

no code implementations • 28 Nov 2018 Li Jing, Rumen Dangovski, Marin Soljačić

We present a logarithmic-scale efficient convolutional neural network architecture for edge devices, named WaveletNet.

General Classification

Rotational Unit of Memory

2 code implementations • ICLR 2018 Rumen Dangovski, Li Jing, Marin Soljačić

We evaluate our model on synthetic memorization, question answering and language modeling tasks.

Ranked #5 on Question Answering on bAbI (Accuracy (trained on 1k) metric)

Machine Translation Question Answering +2
