Search Results for author: Rina Panigrahy

Found 19 papers, 1 paper with code

Simple Mechanisms for Representing, Indexing and Manipulating Concepts

no code implementations 18 Oct 2023 Yuanzhi Li, Raghu Meka, Rina Panigrahy, Kulin Shah

Deep networks typically learn concepts via classifiers, which involves setting up a model and training it via gradient descent to fit the concept-labeled data.

The Power of External Memory in Increasing Predictive Model Capacity

no code implementations 31 Jan 2023 Cenk Baykal, Dylan J Cutler, Nishanth Dikkala, Nikhil Ghosh, Rina Panigrahy, Xin Wang

One way of introducing sparsity into deep networks is by attaching an external table of parameters that is sparsely looked up at different layers of the network.

Language Modelling
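
A minimal sketch of the external-table idea above, with a key/value addressing scheme that is an illustrative assumption of mine rather than the paper's construction: only a handful of table rows are read per example, so the table can grow very large without a matching increase in per-example compute.

```python
import numpy as np

rng = np.random.default_rng(0)

d, table_size, k = 64, 10_000, 4             # activation dim, table rows, rows read per example
keys = rng.normal(size=(table_size, d))      # addressing keys for the external table
values = rng.normal(size=(table_size, d))    # external parameter table (the "memory")

def memory_layer(x):
    """Read only the k best-matching rows of the external table and add them to x."""
    scores = keys @ x                         # similarity of x to every key
    idx = np.argpartition(scores, -k)[-k:]    # indices of the k rows that will be touched
    return x + values[idx].sum(axis=0)        # sparse read: k rows out of 10,000

x = rng.normal(size=d)
print(memory_layer(x).shape)                  # (64,)
```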

A Theoretical View on Sparsely Activated Networks

no code implementations 8 Aug 2022 Cenk Baykal, Nishanth Dikkala, Rina Panigrahy, Cyrus Rashtchian, Xin Wang

After representing LSH-based sparse networks with our model, we prove that sparse networks can match the approximation power of dense networks on Lipschitz functions.
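
A rough sketch of an LSH-based sparsely activated network in the spirit of the abstract above; the random-hyperplane hash and the single hidden layer are my own simplifying assumptions. Only the block of neurons selected by the input's hash bucket is evaluated, so most parameters are untouched for any given input.

```python
import numpy as np

rng = np.random.default_rng(1)

d, blocks, width = 32, 8, 16                            # input dim, hash buckets, neurons per bucket
planes = rng.normal(size=(3, d))                        # 3 random hyperplanes -> 8 buckets
W = rng.normal(size=(blocks, width, d)) / np.sqrt(d)    # one weight block per bucket
v = rng.normal(size=(blocks, width)) / np.sqrt(width)   # output weights per bucket

def lsh_sparse_net(x):
    """Activate only the block selected by the LSH bucket of x."""
    bits = (planes @ x > 0).astype(int)                  # sign pattern of x against the hyperplanes
    b = bits @ (1 << np.arange(3))                       # bucket index in [0, 8)
    h = np.maximum(W[b] @ x, 0.0)                        # ReLU on the selected block only
    return v[b] @ h

print(lsh_sparse_net(rng.normal(size=d)))
```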

Provable hierarchical lifelong learning with a sketch-based modular architecture

no code implementations 29 Sep 2021 Rina Panigrahy, Brendan Juba, Zihao Deng, Xin Wang, Zee Fryer

We propose a modular architecture for lifelong learning of hierarchically structured tasks.

For Manifold Learning, Deep Neural Networks can be Locality Sensitive Hash Functions

1 code implementation 11 Mar 2021 Nishanth Dikkala, Gal Kaplun, Rina Panigrahy

We provide theoretical and empirical evidence that neural representations can be viewed as LSH-like functions that map each input to an embedding that is a function of solely the informative $\gamma$ and invariant to $\theta$, effectively recovering the manifold identifier $\gamma$.

One-Shot Learning

Learning the gravitational force law and other analytic functions

no code implementations 15 May 2020 Atish Agarwala, Abhimanyu Das, Rina Panigrahy, Qiuyi Zhang

We present experimental evidence that the many-body gravitational force function is easier to learn with ReLU networks as compared to networks with exponential activations.
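
For concreteness, here is the analytic target in question written out directly: the net inverse-square gravitational force on one particle in a small many-body system (unit masses and $G = 1$ are assumptions made purely for illustration).

```python
import numpy as np

def gravitational_force(positions, masses, i, G=1.0):
    """Net gravitational force on particle i from all other particles (inverse-square law)."""
    diffs = positions - positions[i]                  # vectors from particle i to every particle
    dists = np.linalg.norm(diffs, axis=1)
    dists[i] = np.inf                                 # no self-interaction
    # F_i = sum_j G * m_i * m_j * (r_j - r_i) / |r_j - r_i|^3
    return (G * masses[i] * masses[:, None] * diffs / dists[:, None] ** 3).sum(axis=0)

rng = np.random.default_rng(2)
pos = rng.uniform(-1, 1, size=(5, 3))                 # 5 particles in 3D
m = np.ones(5)                                        # unit masses (assumption)
print(gravitational_force(pos, m, i=0))
```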

How does the Mind store Information?

no code implementations 3 Oct 2019 Rina Panigrahy

How we store information in our minds has long been a major open question.

Open-Ended Question Answering

Recursive Sketches for Modular Deep Learning

no code implementations 29 May 2019 Badih Ghazi, Rina Panigrahy, Joshua R. Wang

The sketch summarizes essential information about the inputs and outputs of the network and can be used to quickly identify key components and summary statistics of the inputs.
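
A minimal illustration of the sketching idea, assuming a plain random-projection sketch (the paper's construction is more structured): a short vector that approximately preserves summary statistics of a module's output, where sketches of sub-modules can themselves be combined and re-sketched into a single top-level summary.

```python
import numpy as np

rng = np.random.default_rng(3)

def make_sketcher(in_dim, sketch_dim):
    """Return a linear sketching function based on a random projection (illustrative)."""
    R = rng.normal(size=(sketch_dim, in_dim)) / np.sqrt(sketch_dim)
    return lambda v: R @ v

module_output = rng.normal(size=512)        # output of one module of a network
sketch = make_sketcher(512, 32)(module_output)

# The 32-dim sketch approximately preserves the norm of the 512-dim output,
# so it serves as a cheap summary of what the module produced.
print(np.linalg.norm(module_output), np.linalg.norm(sketch))

# Sketches of sub-modules can be concatenated and re-sketched, giving one
# vector that summarizes a whole modular network.
child_sketches = np.concatenate([sketch, make_sketcher(512, 32)(rng.normal(size=512))])
parent_sketch = make_sketcher(64, 32)(child_sketches)
print(parent_sketch.shape)                  # (32,)
```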

On the Learnability of Deep Random Networks

no code implementations 8 Apr 2019 Abhimanyu Das, Sreenivas Gollapudi, Ravi Kumar, Rina Panigrahy

In this paper we study the learnability of deep random networks from both theoretical and practical points of view.

Algorithms for $\ell_p$ Low-Rank Approximation

no code implementations ICML 2017 Flavio Chierichetti, Sreenivas Gollapudi, Ravi Kumar, Silvio Lattanzi, Rina Panigrahy, David P. Woodruff

We consider the problem of approximating a given matrix by a low-rank matrix so as to minimize the entrywise $\ell_p$-approximation error, for any $p \geq 1$; the case $p = 2$ is the classical SVD problem.
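
A small worked example of the objective: the entrywise $\ell_p$ error of a rank-$k$ approximation. The truncated SVD used here is optimal exactly in the classical $p = 2$ case; for other values of $p$ it is only a baseline, not the paper's algorithm.

```python
import numpy as np

def lp_error(A, B, p):
    """Entrywise l_p approximation error: (sum_ij |A_ij - B_ij|^p)^(1/p)."""
    return (np.abs(A - B) ** p).sum() ** (1.0 / p)

rng = np.random.default_rng(4)
A = rng.normal(size=(20, 15))
k = 3

# For p = 2 the best rank-k approximation is the truncated SVD (classical result).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = (U[:, :k] * s[:k]) @ Vt[:k]

print("rank", np.linalg.matrix_rank(A_k))      # 3
print("l2 error", lp_error(A, A_k, 2))
print("l1 error", lp_error(A, A_k, 1))         # for p != 2 the SVD is no longer optimal
```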

Convergence Results for Neural Networks via Electrodynamics

no code implementations 1 Feb 2017 Rina Panigrahy, Sushant Sachdeva, Qiuyi Zhang

Iterating, we show that gradient descent can be used to learn the entire network one node at a time.

Sparse Matrix Factorization

no code implementations 13 Nov 2013 Behnam Neyshabur, Rina Panigrahy

We investigate the problem of factorizing a matrix into several sparse matrices and propose an algorithm for this under randomness and sparsity assumptions.

Dictionary Learning
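
A tiny planted instance of the problem, under a generative model that is my guess at the flavor of the randomness and sparsity assumptions: the product of two sparse random factors looks much denser than either factor, and the task is to recover sparse factors from the product alone.

```python
import numpy as np

rng = np.random.default_rng(5)

def random_sparse(n, m, density=0.1):
    """Random matrix with roughly `density` fraction of nonzero Gaussian entries."""
    mask = rng.random((n, m)) < density
    return mask * rng.normal(size=(n, m))

# Planted instance: Y is the product of two sparse factors A and B.
A, B = random_sparse(50, 50), random_sparse(50, 50)
Y = A @ B

sparsity = lambda M: np.mean(M != 0)
print(f"factors: {sparsity(A):.2f}, {sparsity(B):.2f} nonzero; product: {sparsity(Y):.2f} nonzero")
# The factorization problem: given only Y, recover sparse A and B (up to natural ambiguities).
```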

A Differential Equations Approach to Optimizing Regret Trade-offs

no code implementations 7 May 2013 Alexandr Andoni, Rina Panigrahy

To obtain our main result, we show that the optimal payoff functions have to satisfy the Hermite differential equation, and hence are given by the solutions to this equation.
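
For reference, the standard (probabilists') Hermite differential equation is $f''(x) - x f'(x) + \lambda f(x) = 0$, whose polynomial solutions at $\lambda = n$ are the Hermite polynomials $He_n(x)$; which normalization and boundary conditions the paper imposes is not specified in this snippet.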

Optimal amortized regret in every interval

no code implementations 29 Apr 2013 Rina Panigrahy, Preyas Popat

In this paper we show a randomized algorithm that, in an amortized sense, achieves a regret of $O(\sqrt{x})$ on any interval of length $x$ when the sequence is partitioned into intervals arbitrarily.

Fractal structures in Adversarial Prediction

no code implementations 29 Apr 2013 Rina Panigrahy, Preyas Popat

In this work we study how "fractal-like" processes arise in a prediction game where an adversary is generating a sequence of bits and an algorithm is trying to predict them.

Time Series, Time Series Analysis

Prediction strategies without loss

no code implementations NeurIPS 2011 Michael Kapralov, Rina Panigrahy

Moreover, for {\em any window of size $n$} the regret of our algorithm to any expert never exceeds $O(\sqrt{n(\log N+\log T)})$, where $N$ is the number of experts and $T$ is the time horizon, while maintaining the essentially zero loss property.
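
To make the regret notion concrete, here is a standard multiplicative-weights experts loop (not the paper's algorithm, which additionally maintains essentially zero loss): regret is the gap between the algorithm's cumulative loss and that of the best single expert.

```python
import numpy as np

rng = np.random.default_rng(6)
N, T, eta = 10, 1000, 0.05                       # experts, horizon, learning rate

losses = rng.random((T, N))                      # loss of each expert at each step, in [0, 1]
w = np.ones(N)
alg_loss = 0.0
for t in range(T):
    p = w / w.sum()                              # play the weighted mixture of experts
    alg_loss += p @ losses[t]
    w *= np.exp(-eta * losses[t])                # multiplicative-weights update

best_expert_loss = losses.sum(axis=0).min()
print("regret over the full horizon:", alg_loss - best_expert_loss)
```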
