Memorization

269 papers with code • 1 benchmark • 4 datasets

Memorization refers to the tendency of machine learning models to store specific training examples, such as noisy labels or verbatim training text, rather than, or in addition to, learning patterns that generalize. The papers below study when memorization happens, how to limit it, and how it can even be exploited deliberately.


Most implemented papers

mixup: Beyond Empirical Risk Minimization

facebookresearch/mixup-cifar10 ICLR 2018

We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.
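The core of mixup fits in a few lines: draw a Beta-distributed mixing weight and take the same convex combination of a batch of inputs and their labels. A minimal numpy sketch (the function name and defaults are illustrative, not the repo's API):

```python
import numpy as np

def mixup_batch(x, y, alpha=1.0, rng=np.random.default_rng(0)):
    """Return a mixed batch: convex combinations of random example pairs.
    x: (batch, ...) inputs; y: (batch, n_classes) one-hot labels."""
    lam = rng.beta(alpha, alpha)              # mixing weight ~ Beta(alpha, alpha)
    perm = rng.permutation(len(x))            # random partner for each example
    x_mix = lam * x + (1 - lam) * x[perm]     # mix inputs...
    y_mix = lam * y + (1 - lam) * y[perm]     # ...and labels with the same weight
    return x_mix, y_mix
```

Because every training example is a blend of two real ones, the network never sees a corrupt label at full weight, which is one intuition for the reduced label memorization reported above.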

Wide & Deep Learning for Recommender Systems

microsoft/recommenders 24 Jun 2016

Memorization of feature interactions through a wide set of cross-product feature transformations is effective and interpretable, while generalization requires more feature-engineering effort.
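The two paths are trained jointly: a linear "wide" model over sparse cross-product features handles memorization, while an embedding-plus-MLP "deep" path handles generalization. A minimal PyTorch sketch under assumed inputs (one categorical field, precomputed binary cross features; all names here are illustrative):

```python
import torch
import torch.nn as nn

class WideAndDeep(nn.Module):
    def __init__(self, n_cross, n_categories, embed_dim=8, hidden=64):
        super().__init__()
        self.wide = nn.Linear(n_cross, 1)            # memorizes feature co-occurrences
        self.embed = nn.Embedding(n_categories, embed_dim)
        self.deep = nn.Sequential(                   # generalizes via dense embeddings
            nn.Linear(embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, cross_feats, cat_ids):
        # cross_feats: (batch, n_cross) binary cross-product features
        # cat_ids:     (batch,) sparse categorical ids
        logit = self.wide(cross_feats) + self.deep(self.embed(cat_ids))
        return torch.sigmoid(logit)                  # joint prediction
```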

Neural Machine Translation in Linear Time

paarthneekhara/byteNet-tensorflow 31 Oct 2016

The ByteNet is a one-dimensional convolutional neural network that is composed of two parts, one to encode the source sequence and the other to decode the target sequence.
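The decoder's key building block is a masked (causal) dilated 1-D convolution: position t may only see positions up to t, which is what keeps decoding time linear in sequence length. A sketch of that block, assuming fixed channel counts (names are illustrative):

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """Dilated 1-D convolution padded only on the left, so output t
    never depends on inputs after t; ByteNet stacks such layers with
    growing dilation to cover long contexts."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                            # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))      # pad the past side only
        return self.conv(x)
```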

Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels

bhanML/Co-teaching NeurIPS 2018

Deep learning with noisy labels is practically challenging, since the capacity of deep models is high enough that, sooner or later during training, they memorize the noisy labels completely.
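Co-teaching trains two networks simultaneously: each selects its small-loss (likely clean) samples in the batch, and the *other* network updates on them. A sketch of one training step (function name and arguments are illustrative; the selection ratio is typically decayed over epochs):

```python
import torch
import torch.nn.functional as F

def coteaching_step(net_a, net_b, opt_a, opt_b, x, y, keep_ratio):
    # Rank samples by each network's loss without tracking gradients.
    with torch.no_grad():
        loss_a = F.cross_entropy(net_a(x), y, reduction="none")
        loss_b = F.cross_entropy(net_b(x), y, reduction="none")
    k = max(1, int(keep_ratio * len(y)))
    idx_a = torch.argsort(loss_a)[:k]       # samples net A finds easy
    idx_b = torch.argsort(loss_b)[:k]       # samples net B finds easy

    opt_a.zero_grad()                       # A learns from B's clean picks
    F.cross_entropy(net_a(x[idx_b]), y[idx_b]).backward()
    opt_a.step()
    opt_b.zero_grad()                       # B learns from A's clean picks
    F.cross_entropy(net_b(x[idx_a]), y[idx_a]).backward()
    opt_b.step()
```

Cross-updating matters: because the two networks make different errors, one network's mistaken picks are partially filtered by the other, delaying memorization of the noise.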

Generalization through Memorization: Nearest Neighbor Language Models

urvashik/knnlm ICLR 2020

Applying this augmentation to a strong Wikitext-103 LM, with neighbors drawn from the original training set, our kNN-LM achieves a new state-of-the-art perplexity of 15.79, a 2.9-point improvement with no additional training.
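The interpolation rule itself is simple: mix the parametric LM's next-token distribution with a distribution built from the nearest stored contexts. A numpy sketch assuming a precomputed datastore of (context vector, next-token id) pairs; the function name and default hyperparameters are illustrative:

```python
import numpy as np

def knn_lm_probs(lm_probs, query, keys, values, vocab_size,
                 k=8, lam=0.25, temp=1.0):
    """lm_probs: (vocab_size,) parametric LM distribution for this step.
    keys: (N, d) stored context vectors; values: (N,) their next-token ids."""
    dists = np.sum((keys - query) ** 2, axis=1)   # L2 distance to every stored context
    nn = np.argsort(dists)[:k]                    # k nearest neighbors
    weights = np.exp(-dists[nn] / temp)           # softmax over negative distances
    weights /= weights.sum()
    knn_probs = np.zeros(vocab_size)
    np.add.at(knn_probs, values[nn], weights)     # mass on the neighbors' next tokens
    return lam * knn_probs + (1 - lam) * lm_probs # final mixture
```

The memorization happens entirely in the datastore, which is why the gain comes "with no additional training": only the retrieval index is built over the training set.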

Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets

openai/grok 6 Jan 2022

In this paper we propose to study generalization of neural networks on small algorithmically generated datasets.
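These datasets are small enough to enumerate exhaustively, e.g. the full table of a binary operation over the integers mod p. A sketch of the kind of train/validation split the paper studies (p = 97 and a 50% train fraction are typical settings, chosen here illustratively):

```python
import itertools
import random

def modular_addition_dataset(p=97, train_frac=0.5, seed=0):
    """All pairs (a, b) -> (a + b) mod p, randomly split. Grokking is the
    observation that validation accuracy can jump long after the train
    split has been fully memorized."""
    pairs = [((a, b), (a + b) % p) for a, b in itertools.product(range(p), repeat=2)]
    random.Random(seed).shuffle(pairs)
    cut = int(train_frac * len(pairs))
    return pairs[:cut], pairs[cut:]   # (train, validation)
```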

PaLM: Scaling Language Modeling with Pathways

lucidrains/CoCa-pytorch Google Research 2022

To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call the Pathways Language Model (PaLM).

Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling

eleutherai/gpt-neox 3 Apr 2023

How do large language models (LLMs) develop and evolve over the course of training?
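The suite's main affordance for memorization research is that every model size ships with intermediate training checkpoints. Assuming the checkpoints EleutherAI publishes on the Hugging Face Hub (model ids like EleutherAI/pythia-70m, with revisions named after training steps per their documentation), loading one looks like:

```python
# Requires: pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "revision" selects an intermediate training-step checkpoint; the
# stepN naming follows EleutherAI's published convention.
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m", revision="step3000"
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")
```

Comparing what a model regurgitates at different revisions is how the paper tracks memorization over the course of training.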

Associative Long Short-Term Memory

mohammadpz/Associative_LSTM 9 Feb 2016

We investigate a new method to augment recurrent neural networks with extra memory without increasing the number of network parameters.
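The extra memory is an associative (holographic) one: key-value pairs are bound by elementwise complex multiplication and superposed by addition, and the paper suppresses retrieval crosstalk by averaging over redundant copies with permuted keys. A numpy sketch of that storage scheme (sizes and names are illustrative, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_copies = 256, 16                              # illustrative sizes

# Two key-value pairs: unit-modulus complex keys, complex-valued items.
keys = np.exp(1j * rng.uniform(0, 2 * np.pi, (2, d)))
vals = rng.standard_normal((2, d)) + 1j * rng.standard_normal((2, d))

# Redundant storage: each copy sees the keys under a different permutation,
# so retrieval crosstalk has a different phase pattern in every copy.
perms = np.stack([rng.permutation(d) for _ in range(n_copies)])
k0, k1 = keys[0][perms], keys[1][perms]            # (n_copies, d)
memories = k0 * vals[0] + k1 * vals[1]             # bind (multiply), superpose (add)

# Retrieval: unbind with the conjugate key, then average over copies; the
# crosstalk from the other pair averages toward zero as n_copies grows.
recovered = (np.conj(k0) * memories).mean(axis=0)
err = np.linalg.norm(recovered - vals[0]) / np.linalg.norm(vals[0])
print(f"relative retrieval error with {n_copies} copies: {err:.3f}")
```

Because storage is by superposition, capacity grows without adding any trainable parameters, which is the sense in which the memory is "extra".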

How does Disagreement Help Generalization against Label Corruption?

xingruiyu/coteaching_plus 14 Jan 2019

Learning with noisy labels is one of the most actively studied problems in weakly-supervised learning.
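The paper's answer to plain co-teaching is to restrict the small-loss selection to samples on which the two networks *disagree*, which keeps the networks from collapsing into identical models. A sketch of that selection step (function and variable names are illustrative):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def disagreement_small_loss(net_a, net_b, x, y, keep_ratio):
    """Return the indices each network should train on: small-loss samples
    drawn only from the subset where the two networks' predictions differ."""
    disagree = torch.nonzero(net_a(x).argmax(dim=1) != net_b(x).argmax(dim=1)).squeeze(1)
    if disagree.numel() == 0:                      # fall back to the whole batch
        disagree = torch.arange(len(y))
    loss_a = F.cross_entropy(net_a(x[disagree]), y[disagree], reduction="none")
    loss_b = F.cross_entropy(net_b(x[disagree]), y[disagree], reduction="none")
    k = max(1, int(keep_ratio * disagree.numel()))
    picks_for_b = disagree[torch.argsort(loss_a)[:k]]   # A's easy samples train B
    picks_for_a = disagree[torch.argsort(loss_b)[:k]]   # B's easy samples train A
    return picks_for_a, picks_for_b
```

If the two networks stay diverged, their disagreement region keeps carrying information about which labels are suspect, prolonging the delay before noisy labels are memorized.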