Search Results for author: Michael Kleinman

Found 8 papers, 2 papers with code

Critical Learning Periods Emerge Even in Deep Linear Networks

no code implementations 23 Aug 2023 Michael Kleinman, Alessandro Achille, Stefano Soatto

Critical learning periods are periods early in development where temporary sensory deficits can have a permanent effect on behavior and learned representations.

Multi-Task Learning

Critical Learning Periods for Multisensory Integration in Deep Networks

1 code implementation CVPR 2023 Michael Kleinman, Alessandro Achille, Stefano Soatto

We show that the ability of a neural network to integrate information from diverse sources hinges critically on being exposed to properly correlated signals during the early phases of training.

Gacs-Korner Common Information Variational Autoencoder

1 code implementation NeurIPS 2023 Michael Kleinman, Alessandro Achille, Stefano Soatto, Jonathan Kao

We propose a notion of common information that allows one to quantify and separate the information that is shared between two random variables from the information that is unique to each.
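For intuition behind this notion: for discrete variables, the classical Gács-Körner common part is the label of the connected component of the bipartite support graph of p(x, y), and the common information is the entropy of that label. The sketch below implements only this classical discrete definition — it is not the paper's variational autoencoder method, which learns the decomposition with neural encoders.

```python
import math
from collections import defaultdict

def gacs_korner_common_info(joint):
    """Gacs-Korner common information (bits) of a discrete joint pmf.

    joint: dict mapping (x, y) -> p(x, y), support entries only.
    The common part labels the connected components of the bipartite
    support graph; its entropy is the common information.
    """
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent.setdefault(a, a)
        parent.setdefault(b, b)
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # connect every x-value to every y-value it co-occurs with
    for (x, y), p in joint.items():
        if p > 0:
            union(("x", x), ("y", y))

    mass = defaultdict(float)  # probability mass of each component
    for (x, y), p in joint.items():
        if p > 0:
            mass[find(("x", x))] += p
    h = -sum(p * math.log2(p) for p in mass.values())
    return h + 0.0  # +0.0 normalizes a possible -0.0

# Two decoupled halves of the support share one bit of common information;
# a single connected support shares none.
print(gacs_korner_common_info({(0, 0): 0.25, (0, 1): 0.25,
                               (1, 2): 0.25, (1, 3): 0.25}))  # 1.0
print(gacs_korner_common_info({(0, 0): 0.5, (0, 1): 0.25,
                               (1, 1): 0.25}))                # 0.0
```

Note the contrast with mutual information: the second joint has positive mutual information, but its support graph is one connected component, so the Gács-Körner common information is zero.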

Learning rule influences recurrent network representations but not attractor structure in decision-making tasks

no code implementations NeurIPS 2021 Brandon McMahan, Michael Kleinman, Jonathan Kao

For relatively complex tasks, we find that attractor topology is invariant to the choice of learning rule, but representational geometry is not.

Decision Making

A mechanistic multi-area recurrent network model of decision-making

no code implementations NeurIPS 2021 Michael Kleinman, Chandramouli Chandrasekaran, Jonathan Kao

Recurrent neural networks (RNNs) trained on neuroscience-based tasks have been widely used as models for cortical areas performing analogous tasks.

Decision Making

Redundant Information Neural Estimation

no code implementations ICLR Neural Compression Workshop 2021 Michael Kleinman, Alessandro Achille, Stefano Soatto, Jonathan Kao

We introduce the Redundant Information Neural Estimator (RINE), a method that allows efficient estimation of the component of information about a target variable that is common to a set of sources, previously referred to as the “redundant information.” We show that existing definitions of the redundant information can be recast in terms of an optimization over a family of deterministic or stochastic functions.

Image Classification
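For a concrete discrete baseline, one classical definition of redundant information is Williams and Beer's Imin, which can be computed in closed form for small discrete systems. The sketch below implements Imin only, as an illustration of what "information about a target common to a set of sources" means; it is not the RINE estimator, which instead optimizes over families of neural functions.

```python
import math
from collections import defaultdict

def i_min(joint):
    """Williams-Beer Imin redundancy (bits).

    joint: dict mapping (y, x1, ..., xk) -> p, support entries only.
    Imin = sum_y p(y) * min_i I_spec(y; X_i), where the specific
    information is I_spec(y; X) = sum_x p(x|y) * log2( p(y|x) / p(y) ).
    """
    k = len(next(iter(joint))) - 1  # number of sources
    p_y = defaultdict(float)
    p_x = [defaultdict(float) for _ in range(k)]
    p_yx = [defaultdict(float) for _ in range(k)]
    for key, p in joint.items():
        y, xs = key[0], key[1:]
        p_y[y] += p
        for i, x in enumerate(xs):
            p_x[i][x] += p
            p_yx[i][(y, x)] += p

    total = 0.0
    for y, py in p_y.items():
        specs = []
        for i in range(k):
            s = 0.0
            for x, px in p_x[i].items():
                pyx = p_yx[i].get((y, x), 0.0)
                if pyx > 0:
                    s += (pyx / py) * math.log2((pyx / px) / py)
            specs.append(s)
        total += py * min(specs)  # redundancy: worst source per outcome
    return total

# Y copied to both sources: 1 bit of redundant information.
print(i_min({(0, 0, 0): 0.5, (1, 1, 1): 0.5}))           # 1.0
# Y = X1 while X2 is independent noise: no redundancy.
print(i_min({(0, 0, 0): 0.25, (0, 0, 1): 0.25,
             (1, 1, 0): 0.25, (1, 1, 1): 0.25}))         # 0.0
```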

Usable Information and Evolution of Optimal Representations During Training

no code implementations ICLR 2021 Michael Kleinman, Alessandro Achille, Daksh Idnani, Jonathan C. Kao

We introduce a notion of usable information contained in the representation learned by a deep network, and use it to study how optimal representations for the task emerge during training.

Decision Making, Image Classification
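The idea of usable information can be illustrated with a toy decoder: it is H(Y) minus the cross-entropy a decoder of limited capacity achieves, so information that the decoder cannot extract (e.g. an XOR encoding, for a linear decoder) does not count, even when the mutual information is high. This is a hedged NumPy sketch assuming a logistic-regression decoder on synthetic data; the function names and datasets are illustrative, not from the paper.

```python
import numpy as np

def fit_logistic(x, y, lr=0.5, steps=2000):
    """Plain full-batch gradient-descent logistic regression; returns P(y=1 | x)."""
    x = np.column_stack([np.ones(len(x)), x])  # prepend a bias column
    w = np.zeros(x.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-x @ w))
        w -= lr * x.T @ (p - y) / len(y)
    return 1.0 / (1.0 + np.exp(-x @ w))

def usable_information(x, y):
    """H(Y) minus the decoder's cross-entropy, in bits."""
    p = np.clip(fit_logistic(x, y), 1e-9, 1 - 1e-9)
    ce = -np.mean(y * np.log2(p) + (1 - y) * np.log2(1 - p))
    q = np.mean(y)
    h = -(q * np.log2(q) + (1 - q) * np.log2(1 - q))
    return h - ce

rng = np.random.default_rng(0)
n = 4000
y = rng.integers(0, 2, n)
x_lin = (y + 0.3 * rng.standard_normal(n)).reshape(-1, 1)  # linearly decodable
a, b = rng.integers(0, 2, n), rng.integers(0, 2, n)
y_xor = a ^ b
x_xor = np.column_stack([a, b]).astype(float)              # XOR of the two inputs

print(usable_information(x_lin, y))      # clearly positive: most of the bit is usable
print(usable_information(x_xor, y_xor))  # near zero: 1 bit of MI, unusable linearly
```

The XOR case is the point: x_xor determines y_xor exactly, yet a linear decoder recovers essentially none of it, so its usable information under this decoder family is near zero.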
