Search Results for author: Jason W. Rocks

Found 6 papers, 1 paper with code

A universal niche geometry governs the response of ecosystems to environmental perturbations

no code implementations · 2 Mar 2024 · Akshit Goyal, Jason W. Rocks, Pankaj Mehta

How ecosystems respond to environmental perturbations is a fundamental question in ecology, made especially challenging due to the strong coupling between species and their environment.

Double Descent Demystified: Identifying, Interpreting & Ablating the Sources of a Deep Learning Puzzle

1 code implementation · 24 Mar 2023 · Rylan Schaeffer, Mikail Khona, Zachary Robertson, Akhilan Boopathy, Kateryna Pistunova, Jason W. Rocks, Ila Rani Fiete, Oluwasanmi Koyejo

Double descent is a surprising phenomenon in machine learning in which, as the number of model parameters grows relative to the number of data points, the test error first falls, then rises near the interpolation threshold, and then falls again as models grow ever larger into the highly overparameterized (data-undersampled) regime; a minimal numerical sketch of this behavior is given below.

Learning Theory · regression
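A minimal sketch of the double descent behavior described above, assuming a generic random-features least-squares setup (the linear teacher, tanh feature map, noise level, and sample sizes are illustrative choices, not the paper's experiments):

import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 50, 500, 30

# Linear teacher with additive noise (an assumption for this demo).
beta = rng.normal(size=d)
X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))
y_train = X_train @ beta + 0.5 * rng.normal(size=n_train)
y_test = X_test @ beta + 0.5 * rng.normal(size=n_test)

for p in (5, 10, 25, 45, 50, 55, 75, 150, 300):
    # Random nonlinear features; lstsq returns the minimum-norm solution
    # once p > n_train (the overparameterized / data-undersampled regime).
    W = rng.normal(size=(d, p)) / np.sqrt(d)
    Z_train, Z_test = np.tanh(X_train @ W), np.tanh(X_test @ W)
    w, *_ = np.linalg.lstsq(Z_train, y_train, rcond=None)
    train_mse = np.mean((Z_train @ w - y_train) ** 2)
    test_mse = np.mean((Z_test @ w - y_test) ** 2)
    print(f"p={p:4d}  train MSE={train_mse:8.4f}  test MSE={test_mse:8.4f}")

Running this, one typically sees the test MSE spike near p ≈ n_train and then fall again as p grows further, while the train MSE reaches (numerical) zero past the interpolation threshold.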

Emergent competition shapes the ecological properties of multi-trophic ecosystems

no code implementations · 6 Mar 2023 · Zhijie Feng, Robert Marsland III, Jason W. Rocks, Pankaj Mehta

Ecosystems are commonly organized into trophic levels: groups of organisms that occupy the same position in a food chain (e.g., plants, herbivores, carnivores).

Bias-variance decomposition of overparameterized regression with random linear features

no code implementations · 10 Mar 2022 · Jason W. Rocks, Pankaj Mehta

We show that the linear random features model exhibits three phase transitions: two different transitions to an interpolation regime where the training error is zero, along with an additional transition between regimes with large bias and minimal bias (the generic bias-variance decomposition underlying these terms is recalled below).

regression
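For reference, the generic decomposition being computed (notation here is ours, not the paper's): averaging over draws of the training data, the label noise, and the random feature matrix, the expected test error at an input x splits as

\mathbb{E}\!\left[\left(\hat{y}(x) - y(x)\right)^2\right]
  = \underbrace{\left(\mathbb{E}[\hat{y}(x)] - f(x)\right)^2}_{\text{Bias}^2}
  + \underbrace{\mathbb{E}\!\left[\left(\hat{y}(x) - \mathbb{E}[\hat{y}(x)]\right)^2\right]}_{\text{Variance}}
  + \sigma^2 ,

where f is the noiseless target, y(x) = f(x) + noise of variance \sigma^2, and \hat{y} is the fitted predictor.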

The Geometry of Over-parameterized Regression and Adversarial Perturbations

no code implementations · 25 Mar 2021 · Jason W. Rocks, Pankaj Mehta

Classical regression has a simple geometric description in terms of a projection of the training labels onto the column space of the design matrix; a short numerical check of this projection picture is sketched below.

regression
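A short numerical check of that projection picture, assuming a generic full-column-rank design matrix (the sizes and data are arbitrary, not from the paper):

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))    # n > p: classical, underparameterized regime
y = rng.normal(size=100)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit
P = X @ np.linalg.inv(X.T @ X) @ X.T                # hat matrix: orthogonal projector onto col(X)
print(np.allclose(X @ beta_hat, P @ y))             # fitted values equal the projected labels -> True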

Memorizing without overfitting: Bias, variance, and interpolation in over-parameterized models

no code implementations · 26 Oct 2020 · Jason W. Rocks, Pankaj Mehta

In both models, increasing the number of fit parameters leads to a phase transition where the training error goes to zero and the test error diverges as a result of the variance (while the bias remains finite).
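A generic illustration of the interpolating regime (not the paper's two specific models): once there are more fit parameters than data points, the minimum-norm least-squares solution fits the training data exactly, so the training error is identically zero.

import numpy as np

rng = np.random.default_rng(2)
n, p = 20, 80                       # more fit parameters than data points
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

w = np.linalg.pinv(X) @ y           # minimum-norm interpolating solution
print(np.max(np.abs(X @ w - y)))    # training residuals: numerically zero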
