no code implementations • 9 Oct 2022 • Shraman Pramanick, Li Jing, Sayan Nag, Jiachen Zhu, Hardik Shah, Yann Lecun, Rama Chellappa
Extensive experiments on a wide range of vision- and vision-language downstream tasks demonstrate the effectiveness of VoLTA on fine-grained applications without compromising the coarse-grained downstream performance, often outperforming methods using significantly more caption and box annotations.
no code implementations • 19 Sep 2022 • Yunfei Yin, Li Jing, Faliang Huang, Guangchao Yang, Zhuowei Wang
Most existing methods recognize emotions by focusing on local actions over time.
no code implementations • 15 Jun 2022 • Li Jing, Jiachen Zhu, Yann Lecun
Self-supervised learning has shown superior performance over supervised methods on various vision benchmarks.
no code implementations • 1 Jun 2022 • Anish Acharya, Sujay Sanghavi, Li Jing, Bhargav Bhushanam, Dhruv Choudhary, Michael Rabbat, Inderjit Dhillon
Self-supervised pretraining on unlabeled data followed by supervised finetuning on labeled data is a popular paradigm for learning from limited labeled examples.
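A generic sketch of that two-stage workflow, with a denoising pretext task standing in for the self-supervised objective (the architecture, losses, and data sizes are placeholders, not the paper's setup):

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64))
decoder = nn.Linear(64, 128)

# Stage 1: self-supervised pretraining on unlabeled data (denoising pretext task).
unlabeled = torch.randn(512, 128)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(5):
    noisy = unlabeled + 0.1 * torch.randn_like(unlabeled)
    loss = nn.functional.mse_loss(decoder(encoder(noisy)), unlabeled)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: supervised finetuning of the pretrained encoder on a small labeled set.
head = nn.Linear(64, 10)
labeled_x, labeled_y = torch.randn(64, 128), torch.randint(0, 10, (64,))
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
for _ in range(5):
    loss = nn.functional.cross_entropy(head(encoder(labeled_x)), labeled_y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```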
1 code implementation • 10 Feb 2022 • Andrew Ma, Yang Zhang, Thomas Christensen, Hoi Chun Po, Li Jing, Liang Fu, Marin Soljačić
Topological materials present unconventional electronic properties that make them attractive for both basic science and next-generation technological applications.
2 code implementations • 28 Oct 2021 • Rumen Dangovski, Li Jing, Charlotte Loh, Seungwook Han, Akash Srivastava, Brian Cheung, Pulkit Agrawal, Marin Soljačić
In state-of-the-art self-supervised learning (SSL), pre-training produces semantically good representations by encouraging them to be invariant under meaningful transformations prescribed by human knowledge.
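As an illustration of that invariance objective, a minimal sketch assuming a two-view setup with a cosine-similarity criterion (the encoder, shapes, and views below are placeholders, not the paper's training pipeline):

```python
import torch
import torch.nn.functional as F

def invariance_loss(encoder, view_1, view_2):
    """Encourage embeddings of two augmentations of the same image to agree."""
    z1 = F.normalize(encoder(view_1), dim=-1)   # (batch, dim), unit-norm
    z2 = F.normalize(encoder(view_2), dim=-1)
    # Maximizing cosine similarity == minimizing 2 - 2 * cos_sim for unit vectors.
    return (2 - 2 * (z1 * z2).sum(dim=-1)).mean()

# Toy usage with a linear "encoder" and random stand-ins for two augmented views.
encoder = torch.nn.Linear(32, 16)
x1, x2 = torch.randn(8, 32), torch.randn(8, 32)
loss = invariance_loss(encoder, x1, x2)
loss.backward()
```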
1 code implementation • ICLR 2022 • Li Jing, Pascal Vincent, Yann Lecun, Yuandong Tian
It has been shown that non-contrastive methods suffer from a lesser collapse problem of a different nature: dimensional collapse, whereby the embedding vectors end up spanning a lower-dimensional subspace instead of the entire available embedding space.
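One way to observe the dimensional collapse described here is to look at the spectrum of the embedding covariance: collapsed directions show up as near-zero singular values. A small diagnostic sketch (the tolerance and toy data are arbitrary choices, not the paper's evaluation protocol):

```python
import torch

def effective_embedding_rank(embeddings, tol=1e-4):
    """Count directions of the embedding space that actually carry variance.

    embeddings: (num_samples, dim) tensor of embedding vectors.
    """
    centered = embeddings - embeddings.mean(dim=0, keepdim=True)
    cov = centered.T @ centered / (embeddings.shape[0] - 1)
    singular_values = torch.linalg.svdvals(cov)
    # Dimensional collapse: many singular values fall to (near) zero,
    # i.e. the embeddings span only a low-dimensional subspace.
    return int((singular_values > tol * singular_values.max()).sum())

# Toy example: 128-dim embeddings that really live in a 10-dim subspace.
z = torch.randn(1000, 10) @ torch.randn(10, 128)
print(effective_embedding_rank(z))  # prints 10 (up to numerical tolerance)
```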
20 code implementations • 4 Mar 2021 • Jure Zbontar, Li Jing, Ishan Misra, Yann Lecun, Stéphane Deny
This causes the embedding vectors of distorted versions of a sample to be similar, while minimizing the redundancy between the components of these vectors.
Ranked #11 on Image Classification on Places205
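A compact sketch of the redundancy-reduction objective described in this entry (the Barlow Twins idea): the cross-correlation matrix between embeddings of two distorted views is pushed toward the identity, so the views agree while embedding components stay decorrelated. Shapes and the off-diagonal weight below are illustrative assumptions, not the released implementation:

```python
import torch

def barlow_twins_style_loss(z1, z2, lambda_offdiag=5e-3, eps=1e-6):
    """z1, z2: (batch, dim) embeddings of two distorted views of the same batch."""
    n, d = z1.shape
    # Standardize each embedding dimension across the batch.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + eps)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + eps)
    c = (z1.T @ z2) / n                                  # (dim, dim) cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()       # make the two views agree
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # decorrelate components
    return on_diag + lambda_offdiag * off_diag

# Toy usage with random embeddings standing in for two augmented views.
loss = barlow_twins_style_loss(torch.randn(64, 128), torch.randn(64, 128))
```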
1 code implementation • 20 Nov 2020 • Ileana Rugina, Rumen Dangovski, Li Jing, Preslav Nakov, Marin Soljačić
The attention mechanism is a key component of the neural revolution in Natural Language Processing (NLP).
no code implementations • EMNLP 2020 • Matthew Khoury, Rumen Dangovski, Longwu Ou, Preslav Nakov, Yichen Shen, Li Jing
To address this issue, we propose a novel vector-vector-matrix architecture (VVMA), which greatly reduces the latency at inference time for NMT.
3 code implementations • NeurIPS 2020 • Li Jing, Jure Zbontar, Yann Lecun
An important component of autoencoders is the method by which the information capacity of the latent representation is minimized or limited.
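As background for how latent capacity can be limited, here is a rough reconstruction of the idea in this work as commonly described (an assumption on my part, not the authors' code): a few extra purely linear layers are inserted between encoder and decoder, and the implicit regularization of gradient descent then drives the latent code toward low rank.

```python
import torch
import torch.nn as nn

class RankMinimizingAutoencoder(nn.Module):
    """Illustrative autoencoder with extra linear layers between encoder and decoder."""
    def __init__(self, input_dim=784, latent_dim=128, num_linear=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        # Extra linear (no nonlinearity) layers: expressivity is unchanged, but
        # gradient descent on this deep linear chain implicitly lowers the rank
        # of the latent representation.
        self.linear_chain = nn.Sequential(
            *[nn.Linear(latent_dim, latent_dim) for _ in range(num_linear)])
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, input_dim))

    def forward(self, x):
        z = self.linear_chain(self.encoder(x))
        return self.decoder(z), z

model = RankMinimizingAutoencoder()
x = torch.rand(16, 784)
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)
```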
no code implementations • 17 Jul 2020 • Evan Vogelbaum, Rumen Dangovski, Li Jing, Marin Soljačić
We propose the implementation of contextualizers, which are generalizable prototypes that adapt to given examples and play a larger role in classification for gradient-based models.
1 code implementation • 10 Dec 2019 • Samuel Kim, Peter Y. Lu, Srijon Mukherjee, Michael Gilbert, Li Jing, Vladimir Čeperić, Marin Soljačić
We find that the EQL-based architecture can extrapolate quite well outside of the training data set compared to a standard neural network-based architecture, paving the way for deep learning to be applied in scientific exploration and discovery.
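To make the extrapolation claim concrete: an EQL-style network uses symbolic primitives (identity, sine, cosine, products) as activations, so the fitted model is a compact algebraic expression rather than a saturating piecewise map. The layer below is an illustrative simplification, not the architecture from the paper:

```python
import torch
import torch.nn as nn

class EQLLayer(nn.Module):
    """Linear map followed by symbolic activations: [identity, sin, cos, pairwise product]."""
    def __init__(self, in_dim, units=4):
        super().__init__()
        # 3 unary units (id, sin, cos) plus 1 binary product unit per group.
        self.linear = nn.Linear(in_dim, 3 * units + 2 * units)
        self.units = units

    def forward(self, x):
        h = self.linear(x)
        u = self.units
        ident = h[:, :u]
        sin = torch.sin(h[:, u:2 * u])
        cos = torch.cos(h[:, 2 * u:3 * u])
        prod = h[:, 3 * u:4 * u] * h[:, 4 * u:5 * u]  # pairwise products
        return torch.cat([ident, sin, cos, prod], dim=-1)

# A two-layer EQL-style regressor; layer 1 outputs 4 * units = 16 features.
model = nn.Sequential(EQLLayer(in_dim=2, units=4), nn.Linear(16, 1))
y = model(torch.randn(8, 2))
```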
no code implementations • 8 Jul 2019 • Suiyi Ling, Li Jing, Patrick Le Callet, Junle Wang
As immersive multimedia techniques such as Free-viewpoint TV (FTV) develop at an astonishing rate, users' demand for high-quality immersive content increases dramatically.
no code implementations • TACL 2019 • Rumen Dangovski, Li Jing, Preslav Nakov, Mićo Tatalović, Marin Soljačić
Stacking long short-term memory (LSTM) cells or gated recurrent units (GRUs) as part of a recurrent neural network (RNN) has become a standard approach to solving a number of tasks ranging from language modeling to text summarization.
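For reference, the stacking referred to here is what the num_layers argument does in a standard recurrent module; a minimal example (unrelated to the unit this paper proposes):

```python
import torch
import torch.nn as nn

# Two stacked LSTM layers: the second layer consumes the hidden states of the first.
stacked_lstm = nn.LSTM(input_size=128, hidden_size=256, num_layers=2, batch_first=True)
tokens = torch.randn(4, 50, 128)            # (batch, sequence, embedding)
outputs, (h_n, c_n) = stacked_lstm(tokens)  # outputs: (4, 50, 256)
```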
no code implementations • 28 Nov 2018 • Li Jing, Rumen Dangovski, Marin Soljacic
We present a logarithmic-scale efficient convolutional neural network architecture for edge devices, named WaveletNet.
no code implementations • 27 Aug 2018 • Yurui Qu, Li Jing, Yichen Shen, Min Qiu, Marin Soljacic
First, we demonstrate that in predicting the transmission of a multilayer photonic film, the relative error rate is reduced by 46.8% (26.5%) when the source data comes from 10-layer (8-layer) films and the target data comes from 8-layer (10-layer) films.
2 code implementations • ICLR 2018 • Rumen Dangovski, Li Jing, Marin Soljacic
We evaluate our model on synthetic memorization, question answering and language modeling tasks.
Ranked #5 on Question Answering on bAbi (Accuracy (trained on 1k) metric)
1 code implementation • 18 Oct 2017 • John Peurifoy, Yichen Shen, Li Jing, Yi Yang, Fidel Cano-Renteria, Brendan Delacy, Max Tegmark, John D. Joannopoulos, Marin Soljacic
We propose a method to use artificial neural networks to approximate light scattering by multilayer nanoparticles.
Computational Physics • Applied Physics • Optics
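An illustrative surrogate-model setup in the spirit of this approach: a small fully connected network mapping layer thicknesses of a multilayer nanoparticle to a sampled scattering spectrum, trained on simulated pairs. All dimensions and data below are placeholders, not the paper's setup:

```python
import torch
import torch.nn as nn

# Surrogate model: layer thicknesses of a multilayer nanoparticle -> scattering spectrum.
num_layers, spectrum_points = 8, 200
surrogate = nn.Sequential(
    nn.Linear(num_layers, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, spectrum_points),
)

# Placeholder training pairs; in practice these would come from an
# electromagnetic solver (e.g. Mie or transfer-matrix simulations).
thicknesses = torch.rand(1024, num_layers)
spectra = torch.rand(1024, spectrum_points)

optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(10):  # a few illustrative training steps
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(surrogate(thicknesses), spectra)
    loss.backward()
    optimizer.step()
```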
1 code implementation • 8 Jun 2017 • Li Jing, Caglar Gulcehre, John Peurifoy, Yichen Shen, Max Tegmark, Marin Soljačić, Yoshua Bengio
We present a novel recurrent neural network (RNN) based model that combines the remembering ability of unitary RNNs with the ability of gated RNNs to effectively forget redundant/irrelevant information in its memory.
Ranked #7 on Question Answering on bAbi (Accuracy (trained on 1k) metric)
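A rough sketch of the combination this entry describes: a GRU-style update gate (the "forgetting" part) wrapped around a recurrence whose transition matrix is kept orthogonal so it preserves norm. This is an illustrative reconstruction under those assumptions, not the authors' GORU implementation:

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal

class GatedOrthogonalCell(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.gate = nn.Linear(input_dim + hidden_dim, hidden_dim)
        self.in_proj = nn.Linear(input_dim, hidden_dim)
        # Constrain the recurrent matrix to be orthogonal: it neither amplifies
        # nor shrinks the hidden state, which helps long-term memory.
        self.recurrent = orthogonal(nn.Linear(hidden_dim, hidden_dim, bias=False))

    def forward(self, x, h):
        u = torch.sigmoid(self.gate(torch.cat([x, h], dim=-1)))       # update gate
        candidate = torch.tanh(self.recurrent(h) + self.in_proj(x))
        return (1 - u) * h + u * candidate   # the gate decides what to overwrite

cell = GatedOrthogonalCell(input_dim=32, hidden_dim=64)
h = torch.zeros(8, 64)
for x_t in torch.randn(10, 8, 32):   # unroll over 10 time steps
    h = cell(x_t, h)
```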
6 code implementations • ICML 2017 • Li Jing, Yichen Shen, Tena Dubček, John Peurifoy, Scott Skirlo, Yann Lecun, Max Tegmark, Marin Soljačić
Using unitary (instead of general) matrices in artificial neural networks (ANNs) is a promising way to solve the gradient explosion/vanishing problem, as well as to enable ANNs to learn long-term correlations in the data.
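To make the gradient-stability argument concrete: an orthogonal (real unitary) transition matrix has all singular values equal to 1, so applying it repeatedly neither explodes nor vanishes the norm of the propagated signal. A minimal check, parameterizing the orthogonal matrix as the matrix exponential of a skew-symmetric matrix (one standard parameterization; the paper's EUNN uses a more efficient factorized scheme):

```python
import torch

dim = 64
# Any skew-symmetric matrix A (A.T == -A) yields an orthogonal matrix exp(A).
a = 0.1 * torch.randn(dim, dim)
w = torch.matrix_exp(a - a.T)                               # orthogonal: w.T @ w ≈ I
print(torch.allclose(w.T @ w, torch.eye(dim), atol=1e-5))   # True

# Repeated application preserves vector norms, so signals (and gradients)
# propagated through many recurrent steps neither explode nor vanish.
h = torch.randn(dim)
norm_before = h.norm()
for _ in range(500):
    h = w @ h
print(norm_before, h.norm())   # essentially unchanged
```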