Search Results for author: Andrey Gromov

Found 6 papers, 2 papers with code

Grokking modular arithmetic

1 code implementation • 6 Jan 2023 • Andrey Gromov

We present a simple neural network that can learn modular arithmetic tasks and exhibits a sudden jump in generalization known as "grokking".
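
A minimal sketch of such a setup, assuming one-hot-encoded modular addition mod 97 and a generic one-hidden-layer MLP trained with weight decay; the paper's exact architecture, loss, and optimizer may differ:

import torch
import torch.nn as nn

p = 97  # modulus; the task is (a, b) -> (a + b) mod p
pairs = torch.cartesian_prod(torch.arange(p), torch.arange(p))
labels = (pairs[:, 0] + pairs[:, 1]) % p
x = torch.cat([nn.functional.one_hot(pairs[:, 0], p),
               nn.functional.one_hot(pairs[:, 1], p)], dim=1).float()

# Train on half the pairs; grokking shows up as held-out accuracy jumping
# to near 100% long after training accuracy has saturated.
perm = torch.randperm(len(x))
train, test = perm[: len(x) // 2], perm[len(x) // 2:]

model = nn.Sequential(nn.Linear(2 * p, 512), nn.ReLU(), nn.Linear(512, p))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)

for step in range(20000):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x[train]), labels[train])
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            acc = (model(x[test]).argmax(-1) == labels[test]).float().mean()
        print(f"step {step}: train loss {loss.item():.4f}, test acc {acc:.3f}")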

To grok or not to grok: Disentangling generalization and memorization on corrupted algorithmic datasets

1 code implementation • 19 Oct 2023 • Darshil Doshi, Aritra Das, Tianyu He, Andrey Gromov

Robust generalization is a major challenge in deep learning, particularly when the number of trainable parameters is very large.

Memorization

Critical Initialization of Wide and Deep Neural Networks through Partial Jacobians: General Theory and Applications

no code implementations • 23 Nov 2021 • Darshil Doshi, Tianyu He, Andrey Gromov

We derive recurrence relations for the norms of partial Jacobians and utilize these relations to analyze criticality of deep fully connected neural networks with LayerNorm and/or residual connections.
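
A hedged numerical companion to that analysis: measure the Frobenius norms of partial Jacobians dh^l/dh^{l0} of a randomly initialized tanh MLP and watch how they scale with depth. The paper's recurrence relations are analytic; this only probes the norms empirically, and the 1/width normalization is illustrative:

import torch
import torch.nn as nn

width, depth = 256, 20
layers = [nn.Sequential(nn.Linear(width, width), nn.Tanh()) for _ in range(depth)]

def forward_from(h, start, stop):
    # Apply layers start .. stop-1 to the activation h.
    for layer in layers[start:stop]:
        h = layer(h)
    return h

x = torch.randn(width)
l0 = 1
h0 = forward_from(x, 0, l0)  # activation entering layer l0

for l in range(l0 + 1, depth + 1, 4):
    # Partial Jacobian dh^l / dh^{l0}, computed by autodiff.
    J = torch.autograd.functional.jacobian(lambda h: forward_from(h, l0, l), h0)
    # Near criticality the normalized norm stays O(1) with depth; away from
    # it, it grows or decays exponentially.
    print(f"l = {l}: ||J||_F^2 / width = {J.pow(2).sum().item() / width:.4f}")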

AutoInit: Automatic Initialization via Jacobian Tuning

no code implementations • 27 Jun 2022 • Tianyu He, Darshil Doshi, Andrey Gromov

Good initialization is essential for training Deep Neural Networks (DNNs).

Bridging Associative Memory and Probabilistic Modeling

no code implementations • 15 Feb 2024 • Rylan Schaeffer, Nika Zahedi, Mikail Khona, Dhruv Pai, Sang Truong, Yilun Du, Mitchell Ostrow, Sarthak Chandra, Andres Carranza, Ila Rani Fiete, Andrey Gromov, Sanmi Koyejo

Based on the observation that associative memory's energy functions can be seen as probabilistic modeling's negative log likelihoods, we build a bridge between the two that enables useful flow of ideas in both directions.

In-Context Learning
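
The observation in the abstract can be stated as a one-line identity (the standard Boltzmann form, written here for orientation rather than as a quote from the paper): an energy function E(x) defines a normalized density, and conversely any density defines an energy up to an additive constant,

p(x) = \frac{e^{-E(x)}}{Z}, \qquad Z = \int e^{-E(x)}\, dx
\qquad \Longleftrightarrow \qquad
E(x) = -\log p(x) - \log Z,

so descending an associative-memory energy is, up to the constant \log Z, ascending a log likelihood.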

The Unreasonable Ineffectiveness of the Deeper Layers

no code implementations • 26 Mar 2024 • Andrey Gromov, Kushal Tirumala, Hassan Shapourian, Paolo Glorioso, Daniel A. Roberts

We empirically study a simple layer-pruning strategy for popular families of open-weight pretrained LLMs, finding minimal degradation of performance on different question-answering benchmarks until after a large fraction (up to half) of the layers are removed.

Quantization • Question Answering
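
A minimal sketch of the kind of layer-pruning heuristic described above, assuming per-layer hidden states collected from a calibration set (for example with output_hidden_states=True in HF transformers); the function and variable names are illustrative, and the paper additionally "heals" the pruned model with a small amount of finetuning:

import torch

def angular_distance(h_a, h_b):
    # Mean arccos of per-token cosine similarity between two hidden states.
    cos = torch.nn.functional.cosine_similarity(h_a, h_b, dim=-1)
    return torch.arccos(cos.clamp(-1.0, 1.0)).mean().item()

def best_block_to_prune(hidden_states, n):
    # hidden_states[l] is the (tokens, dim) representation entering layer l.
    # Return the start index l* of the n consecutive layers whose removal
    # perturbs the residual stream the least.
    num_layers = len(hidden_states) - 1
    dists = [angular_distance(hidden_states[l], hidden_states[l + n])
             for l in range(num_layers - n + 1)]
    return min(range(len(dists)), key=dists.__getitem__)

# Pruning then deletes layers l* .. l* + n - 1 from the model's layer list
# (roughly model.model.layers for HF Llama-style models).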
