Search Results for author: Li Jing

Found 22 papers, 12 papers with code

VoLTA: Vision-Language Transformer with Weakly-Supervised Local-Feature Alignment

1 code implementation • 9 Oct 2022 • Shraman Pramanick, Li Jing, Sayan Nag, Jiachen Zhu, Hardik Shah, Yann LeCun, Rama Chellappa

Extensive experiments on a wide range of vision- and vision-language downstream tasks demonstrate the effectiveness of VoLTA on fine-grained applications without compromising the coarse-grained downstream performance, often outperforming methods using significantly more caption and box annotations.

Object Detection +2

Masked Siamese ConvNets

no code implementations • 15 Jun 2022 • Li Jing, Jiachen Zhu, Yann LeCun

Self-supervised learning has shown superior performance over supervised methods on various vision benchmarks.

Image Classification Inductive Bias +4

Positive Unlabeled Contrastive Learning

no code implementations • 1 Jun 2022 • Anish Acharya, Sujay Sanghavi, Li Jing, Bhargav Bhushanam, Dhruv Choudhary, Michael Rabbat, Inderjit Dhillon

We extend this paradigm to the classical positive unlabeled (PU) setting, where the task is to learn a binary classifier given only a few labeled positive samples, and (often) a large amount of unlabeled samples (which could be positive or negative).

Contrastive Learning Pseudo Label

Topogivity: A Machine-Learned Chemical Rule for Discovering Topological Materials

1 code implementation • 10 Feb 2022 • Andrew Ma, Yang Zhang, Thomas Christensen, Hoi Chun Po, Li Jing, Liang Fu, Marin Soljačić

Topological materials present unconventional electronic properties that make them attractive for both basic science and next-generation technological applications.

Equivariant Contrastive Learning

2 code implementations • 28 Oct 2021 • Rumen Dangovski, Li Jing, Charlotte Loh, Seungwook Han, Akash Srivastava, Brian Cheung, Pulkit Agrawal, Marin Soljačić

State-of-the-art self-supervised learning (SSL) pre-training produces semantically good representations by encouraging them to be invariant under meaningful transformations prescribed by human knowledge.

Contrastive Learning Self-Supervised Learning
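Beyond the invariance objective summarized above, the paper's central proposal is to additionally encourage sensitivity (equivariance) to certain transformations, typically via an auxiliary objective that predicts which transformation was applied. A minimal sketch of such an auxiliary loss using four-fold rotations; the encoder, head sizes, and loss weight `lam` are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of an equivariance-encouraging auxiliary objective:
# predict which of the four 90-degree rotations was applied to each image.
# Encoder/head sizes and the loss weight are illustrative placeholders.

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
rot_head = nn.Linear(128, 4)  # classify rotation in {0, 90, 180, 270} degrees

def rotation_prediction_loss(images: torch.Tensor) -> torch.Tensor:
    """images: (B, 3, 32, 32). Cross-entropy of predicting the applied rotation."""
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack(
        [torch.rot90(img, k=int(k), dims=(1, 2)) for img, k in zip(images, labels)]
    )
    logits = rot_head(encoder(rotated))
    return F.cross_entropy(logits, labels)

# In this style of method the term is added, with a weight, to a standard
# invariance-based SSL loss (e.g. SimCLR); lam = 0.4 is an arbitrary placeholder.
images = torch.randn(8, 3, 32, 32)
lam = 0.4
aux = lam * rotation_prediction_loss(images)
print(aux.item())
```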

Understanding Dimensional Collapse in Contrastive Self-supervised Learning

1 code implementation • ICLR 2022 • Li Jing, Pascal Vincent, Yann LeCun, Yuandong Tian

It has been shown that non-contrastive methods suffer from a lesser collapse problem of a different nature: dimensional collapse, whereby the embedding vectors end up spanning a lower-dimensional subspace instead of the entire available embedding space.

Contrastive Learning Learning Theory +2
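Dimensional collapse, as described above, can be checked directly: if the embeddings span only a lower-dimensional subspace, the singular value spectrum of their covariance matrix decays to near zero. A minimal NumPy sketch of this diagnostic, in the spirit of the paper's spectrum analysis; the random data stands in for real embeddings:

```python
import numpy as np

# Diagnostic for dimensional collapse: inspect the singular value spectrum
# of the embedding covariance. Collapsed dimensions show up as singular
# values at (or near) zero.

def embedding_spectrum(z: np.ndarray) -> np.ndarray:
    """z: (N, d) matrix of embeddings. Returns the descending singular
    values of the d x d covariance matrix."""
    z = z - z.mean(axis=0, keepdims=True)
    cov = (z.T @ z) / (len(z) - 1)
    return np.sort(np.linalg.svd(cov, compute_uv=False))[::-1]

# Synthetic example: 128-d embeddings that actually live in a 32-d subspace.
rng = np.random.default_rng(0)
z = rng.normal(size=(10_000, 32)) @ rng.normal(size=(32, 128))
s = embedding_spectrum(z)
print((s > 1e-8 * s[0]).sum())  # effective rank: 32, not 128
```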

Equivariant Self-Supervised Learning: Encouraging Equivariance in Representations

no code implementations • ICLR 2022 • Rumen Dangovski, Li Jing, Charlotte Loh, Seungwook Han, Akash Srivastava, Brian Cheung, Pulkit Agrawal, Marin Soljačić

State-of-the-art self-supervised learning (SSL) pre-training produces semantically good representations by encouraging them to be invariant under meaningful transformations prescribed by human knowledge.

Self-Supervised Learning

Barlow Twins: Self-Supervised Learning via Redundancy Reduction

24 code implementations • 4 Mar 2021 • Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, Stéphane Deny

This causes the embedding vectors of distorted versions of a sample to be similar, while minimizing the redundancy between the components of these vectors.

General Classification Object Detection +3
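The sentence above summarizes the Barlow Twins objective: drive the cross-correlation matrix between embeddings of two distorted views toward the identity, so diagonal terms enforce invariance while off-diagonal terms reduce redundancy. A minimal sketch of that loss; the weight `lam` is a placeholder near commonly cited defaults, not necessarily the paper's exact setting:

```python
import torch

def barlow_twins_loss(z1: torch.Tensor, z2: torch.Tensor,
                      lam: float = 5e-3) -> torch.Tensor:
    """z1, z2: (N, d) embeddings of two distorted views of the same batch.
    Pushes the cross-correlation matrix toward the identity: diagonal terms
    enforce invariance, off-diagonal terms reduce redundancy."""
    n, _ = z1.shape
    z1 = (z1 - z1.mean(0)) / z1.std(0)  # per-dimension normalization over the batch
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    c = (z1.T @ z2) / n                 # d x d cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag_embed(torch.diagonal(c))).pow(2).sum()
    return on_diag + lam * off_diag

z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
print(barlow_twins_loss(z1, z2).item())
```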

Implicit Rank-Minimizing Autoencoder

3 code implementations • NeurIPS 2020 • Li Jing, Jure Zbontar, Yann LeCun

An important component of autoencoders is the method by which the information capacity of the latent representation is minimized or limited.

Image Generation Representation Learning +1
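The mechanism the paper proposes for limiting latent capacity is to insert extra linear layers (no nonlinearities) before the latent code; the implicit regularization of gradient descent then biases the code toward low rank. A minimal sketch with illustrative layer counts and widths:

```python
import torch
import torch.nn as nn

# Sketch of an implicit rank-minimizing autoencoder: a chain of extra
# *linear* layers before the latent code. Widths and the number of
# linear layers here are illustrative assumptions.

latent_dim, n_linear = 128, 4

encoder = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 512), nn.ReLU(),
    nn.Linear(512, latent_dim),
    # the key ingredient: extra linear layers acting on the code
    *[nn.Linear(latent_dim, latent_dim) for _ in range(n_linear)],
)
decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                        nn.Linear(512, 28 * 28))

x = torch.randn(16, 1, 28, 28)
recon = decoder(encoder(x))
loss = (recon - x.flatten(1)).pow(2).mean()  # standard reconstruction loss
loss.backward()
```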

Contextualizing Enhances Gradient Based Meta Learning

no code implementations • 17 Jul 2020 • Evan Vogelbaum, Rumen Dangovski, Li Jing, Marin Soljačić

We propose the implementation of contextualizers, which are generalizable prototypes that adapt to given examples and play a larger role in classification for gradient-based models.

Few-Shot Learning

Integration of Neural Network-Based Symbolic Regression in Deep Learning for Scientific Discovery

1 code implementation • 10 Dec 2019 • Samuel Kim, Peter Y. Lu, Srijon Mukherjee, Michael Gilbert, Li Jing, Vladimir Čeperić, Marin Soljačić

We find that the EQL-based architecture can extrapolate quite well outside of the training data set compared to a standard neural network-based architecture, paving the way for deep learning to be applied in scientific exploration and discovery.

Explainable Models Regression +1
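The EQL-based architecture referenced above, following the symbolic-regression literature the paper builds on, replaces standard activations with a bank of primitive functions (e.g. identity, sine, cosine, pairwise products), so a trained network can be read off as a symbolic expression and can extrapolate. A minimal single-layer sketch; the primitive set and sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class EQLLayer(nn.Module):
    """One EQL-style layer: a linear map followed by a bank of primitive
    functions (identity, sin, cos, and one pairwise product). The choice
    of primitives and widths is an illustrative assumption."""
    def __init__(self, in_dim: int, units: int = 4):
        super().__init__()
        # 3 banks of unary units plus 2 inputs for one binary (product) unit
        self.linear = nn.Linear(in_dim, 3 * units + 2)
        self.units = units

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        u = self.linear(x)
        k = self.units
        ident = u[:, :k]
        sin = torch.sin(u[:, k:2 * k])
        cos = torch.cos(u[:, 2 * k:3 * k])
        prod = (u[:, -2] * u[:, -1]).unsqueeze(1)  # one multiplication unit
        return torch.cat([ident, sin, cos, prod], dim=1)

model = nn.Sequential(EQLLayer(2), nn.Linear(3 * 4 + 1, 1))
y = model(torch.randn(8, 2))  # e.g. fit y = x0 * sin(x1), then read off weights
```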

Perceptual representations of structural information in images: application to quality assessment of synthesized view in FTV scenario

no code implementations • 8 Jul 2019 • Suiyi Ling, Jing Li, Patrick Le Callet, Junle Wang

As immersive multimedia techniques like Free-viewpoint TV (FTV) develop at an astonishing rate, users' demand for high-quality immersive content increases dramatically.

Rotational Unit of Memory: A Novel Representation Unit for RNNs with Scalable Applications

no code implementations • TACL 2019 • Rumen Dangovski, Li Jing, Preslav Nakov, Mićo Tatalović, Marin Soljačić

Stacking long short-term memory (LSTM) cells or gated recurrent units (GRUs) as part of a recurrent neural network (RNN) has become a standard approach to solving a number of tasks ranging from language modeling to text summarization.

Language Modelling Text Summarization

WaveletNet: Logarithmic Scale Efficient Convolutional Neural Networks for Edge Devices

no code implementations • 28 Nov 2018 • Li Jing, Rumen Dangovski, Marin Soljačić

We present a logarithmic-scale efficient convolutional neural network architecture for edge devices, named WaveletNet.

General Classification

Migrating Knowledge between Physical Scenarios based on Artificial Neural Networks

no code implementations • 27 Aug 2018 • Yurui Qu, Li Jing, Yichen Shen, Min Qiu, Marin Soljačić

First, we demonstrate that in predicting the transmission from multilayer photonic films, the relative error rate is reduced by 46.8% (26.5%) when the source data comes from 10-layer (8-layer) films and the target data comes from 8-layer (10-layer) films.

Multi-Task Learning

Rotational Unit of Memory

2 code implementations • ICLR 2018 • Rumen Dangovski, Li Jing, Marin Soljačić

We evaluate our model on synthetic memorization, question answering and language modeling tasks.

Ranked #5 on Question Answering on bAbi (Accuracy (trained on 1k) metric)

Language Modelling Machine Translation +4

Nanophotonic Particle Simulation and Inverse Design Using Artificial Neural Networks

1 code implementation18 Oct 2017 John Peurifoy, Yichen Shen, Li Jing, Yi Yang, Fidel Cano-Renteria, Brendan Delacy, Max Tegmark, John D. Joannopoulos, Marin Soljacic

We propose a method to use artificial neural networks to approximate light scattering by multilayer nanoparticles.

Computational Physics Applied Physics Optics

Gated Orthogonal Recurrent Units: On Learning to Forget

1 code implementation8 Jun 2017 Li Jing, Caglar Gulcehre, John Peurifoy, Yichen Shen, Max Tegmark, Marin Soljačić, Yoshua Bengio

We present a novel recurrent neural network (RNN) based model that combines the remembering ability of unitary RNNs with the ability of gated RNNs to effectively forget redundant/irrelevant information in its memory.

Ranked #7 on Question Answering on bAbi (Accuracy (trained on 1k) metric)

Denoising Question Answering
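The combination described above can be sketched generically: a norm-preserving orthogonal recurrent transition, as in unitary RNNs, composed with a GRU-style update gate that lets the network forget. This is a minimal sketch of the general idea, not the paper's exact equations; parameterizing the orthogonal matrix as the matrix exponential of a skew-symmetric matrix is an illustrative choice:

```python
import torch
import torch.nn as nn

class GatedOrthogonalCell(nn.Module):
    """Sketch: orthogonal recurrent transition plus a GRU-style update gate.
    Illustrates the idea of gating an orthogonal/unitary recurrence, not the
    paper's exact formulation."""
    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        self.a = nn.Parameter(torch.randn(hidden_dim, hidden_dim) * 0.01)
        self.w_in = nn.Linear(input_dim, hidden_dim)
        self.gate = nn.Linear(input_dim + hidden_dim, hidden_dim)

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        ortho = torch.matrix_exp(self.a - self.a.T)    # orthogonal by construction
        cand = torch.tanh(h @ ortho.T + self.w_in(x))  # norm-friendly transition
        z = torch.sigmoid(self.gate(torch.cat([x, h], dim=1)))  # forget gate
        return z * h + (1 - z) * cand

cell = GatedOrthogonalCell(16, 32)
h = torch.zeros(4, 32)
for x in torch.randn(10, 4, 16):  # 10 timesteps, batch of 4
    h = cell(x, h)
```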

Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNNs

4 code implementations • ICML 2017 • Li Jing, Yichen Shen, Tena Dubček, John Peurifoy, Scott Skirlo, Yann LeCun, Max Tegmark, Marin Soljačić

Using unitary (instead of general) matrices in artificial neural networks (ANNs) is a promising way to solve the gradient explosion/vanishing problem, as well as to enable ANNs to learn long-term correlations in the data.

Permuted-MNIST
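The benefit of unitary (here, real orthogonal) recurrence can be illustrated in miniature: compose 2x2 Givens rotations into an orthogonal matrix, with each angle acting as a trainable parameter, and verify it preserves vector norms, which is what keeps gradients from exploding or vanishing through the recurrence. A NumPy sketch; the pairing schedule is simplified relative to the paper's efficient arrangement:

```python
import numpy as np

def givens(n: int, i: int, j: int, theta: float) -> np.ndarray:
    """n x n Givens rotation acting on coordinates (i, j)."""
    g = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    g[i, i], g[j, j] = c, c
    g[i, j], g[j, i] = -s, s
    return g

# Compose rotations over alternating even/odd pairings; in an EUNN-style
# layer each angle would be a trainable parameter.
rng = np.random.default_rng(0)
n = 8
u = np.eye(n)
for layer in range(n):                  # depth controls expressivity
    for i in range(layer % 2, n - 1, 2):
        u = givens(n, i, i + 1, rng.uniform(0, 2 * np.pi)) @ u

x = rng.normal(size=n)
print(np.allclose(np.linalg.norm(u @ x), np.linalg.norm(x)))  # True: norm preserved
```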
