no code implementations • 2 Apr 2024 • James Anibal, Hannah Huth, Ming Li, Lindsey Hazen, Yen Minh Lam, Nguyen Thi Thu Hang, Michael Kleinman, Shelley Ost, Christopher Jackson, Laura Sprabery, Cheran Elangovan, Balaji Krishnaiah, Lee Akst, Ioan Lina, Iqbal Elyazar, Lenny Ekwati, Stefan Jansen, Richard Nduwayezu, Charisse Garcia, Jeffrey Plum, Jacqueline Brenner, Miranda Song, Emily Ricotta, David Clifton, C. Louise Thwaites, Yael Bensoussan, Bradford Wood
This report introduces a consortium of global partners, presents the application used for data collection, and showcases the potential of information-rich voice EHR data to advance the scalability and diversity of audio AI.
no code implementations • 23 Aug 2023 • Michael Kleinman, Alessandro Achille, Stefano Soatto
Critical learning periods are windows early in development during which temporary sensory deficits can have a permanent effect on behavior and learned representations.
1 code implementation • CVPR 2023 • Michael Kleinman, Alessandro Achille, Stefano Soatto
We show that the ability of a neural network to integrate information from diverse sources hinges critically on being exposed to properly correlated signals during the early phases of training.
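To make the training protocol concrete, below is a minimal sketch of one way to simulate an early "sensory deficit" in a two-stream network: one input stream is replaced by noise (decorrelating the two sources) during an initial phase of training and restored afterward. This is not the paper's code; the architecture, toy data, and phase lengths are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Toy multisensory data: both streams carry the same label-driven signal.
y = torch.randint(0, 2, (512,))
shared = y.float().unsqueeze(1) * 2 - 1
x_a = shared + torch.randn(512, 8) * 0.5  # stream A
x_b = shared + torch.randn(512, 8) * 0.5  # stream B (correlated with A)

class TwoStream(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc_a = nn.Linear(8, 16)
        self.enc_b = nn.Linear(8, 16)
        self.head = nn.Linear(32, 2)

    def forward(self, a, b):
        return self.head(torch.cat([torch.relu(self.enc_a(a)),
                                    torch.relu(self.enc_b(b))], dim=-1))

net = TwoStream()
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for step in range(400):
    # Phase 1 (the putative critical period): stream B is pure noise, so the
    # streams are uncorrelated. Phase 2: the correlated signal is restored.
    b_in = torch.randn_like(x_b) if step < 200 else x_b
    loss = nn.functional.cross_entropy(net(x_a, b_in), y)
    opt.zero_grad(); loss.backward(); opt.step()

# One can then probe how much the trained network relies on stream B,
# e.g. by comparing accuracy with B intact vs. B replaced by noise.
```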
1 code implementation • NeurIPS 2023 • Michael Kleinman, Alessandro Achille, Stefano Soatto, Jonathan Kao
We propose a notion of common information that allows one to quantify and separate the information that is shared between two random variables from the information that is unique to each.
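As a rough illustration of the shared-vs-unique decomposition, here is a hedged sketch of a paired encoder whose "common" latent is penalized unless it can be computed from either variable alone, with the remainder routed through per-variable "unique" latents. This is not the authors' variational objective; the layer sizes, loss weights, and agreement penalty are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class PairEncoder(nn.Module):
    """Encodes one variable into a common latent and a unique latent."""
    def __init__(self, in_dim=16, common_dim=4, unique_dim=4):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.common_head = nn.Linear(64, common_dim)
        self.unique_head = nn.Linear(64, unique_dim)

    def forward(self, x):
        h = self.body(x)
        return self.common_head(h), self.unique_head(h)

enc1, enc2 = PairEncoder(), PairEncoder()
dec = nn.Linear(4 + 4, 16)  # reconstructs x1 from (common, unique1)
opt = torch.optim.Adam(list(enc1.parameters()) + list(enc2.parameters())
                       + list(dec.parameters()), lr=1e-3)

# Toy correlated pair: x1 and x2 share a component z plus independent noise.
z = torch.randn(256, 16)
x1 = z + 0.1 * torch.randn(256, 16)
x2 = z + 0.1 * torch.randn(256, 16)

for step in range(200):
    c1, u1 = enc1(x1)
    c2, u2 = enc2(x2)
    recon = dec(torch.cat([c1, u1], dim=-1))
    # Agreement penalty: the "common" latent must be recoverable from either
    # variable alone, the defining property of common information.
    loss = (nn.functional.mse_loss(recon, x1)
            + 10.0 * nn.functional.mse_loss(c1, c2))
    opt.zero_grad(); loss.backward(); opt.step()
```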
no code implementations • NeurIPS 2021 • Brandon McMahan, Michael Kleinman, Jonathan Kao
For relatively complex tasks, we find that attractor topology is invariant to the choice of learning rule, but representational geometry is not.
no code implementations • NeurIPS 2021 • Michael Kleinman, Chandramouli Chandrasekaran, Jonathan Kao
Recurrent neural networks (RNNs) trained on neuroscience-based tasks have been widely used as models for cortical areas performing analogous tasks.
no code implementations • ICLR Workshop Neural_Compression 2021 • Michael Kleinman, Alessandro Achille, Stefano Soatto, Jonathan Kao
We introduce the Redundant Information Neural Estimator (RINE), a method that allows efficient estimation of the component of information about a target variable that is common to a set of sources, previously referred to as the “redundant information.” We show that existing definitions of redundant information can be recast as an optimization over a family of deterministic or stochastic functions.
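To give a feel for "optimization over a function family," here is a toy sketch that lower-bounds the information each source carries about a binary target using a small neural probe, then takes the minimum over sources as a crude proxy for redundancy. The probe architecture and the min-over-sources reduction are illustrative assumptions, not the RINE objective itself.

```python
import math
import torch
import torch.nn as nn

def probe_information(x, y, epochs=200):
    """Lower-bounds I(X; Y) in nats as H(Y) minus the best cross-entropy
    achieved by a small probe family (assuming a uniform binary Y)."""
    probe = nn.Sequential(nn.Linear(x.shape[1], 32), nn.ReLU(),
                          nn.Linear(32, 2))
    opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
    for _ in range(epochs):
        loss = nn.functional.cross_entropy(probe(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return math.log(2) - loss.item()

# Toy data: both sources carry the label, so the label information is
# redundant between them.
torch.manual_seed(0)
y = torch.randint(0, 2, (512,))
signal = y.float().unsqueeze(1).repeat(1, 8)
x1 = signal + torch.randn(512, 8)
x2 = signal + torch.randn(512, 8)

# Redundancy is at most what the *least* informative source provides.
bound = min(probe_information(x1, y), probe_information(x2, y))
print(f"redundant information lower bound: {bound:.3f} nats")
```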
no code implementations • ICLR 2021 • Michael Kleinman, Alessandro Achille, Daksh Idnani, Jonathan C. Kao
We introduce a notion of usable information contained in the representation learned by a deep network, and use it to study how optimal representations for the task emerge during training.
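The sketch below illustrates the probe-based spirit of this definition: usable information is what a simple decoder family (here, a linear probe) can extract about the label from a learned representation, tracked as the main network trains. The dataset, architectures, and schedule are illustrative assumptions, not the paper's experimental setup.

```python
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
y = torch.randint(0, 2, (512,))
x = y.float().unsqueeze(1) * 2 - 1 + torch.randn(512, 10) * 0.5

net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

def usable_information(feats, labels, steps=300):
    """H(Y) - H_probe(Y | feats) in nats, with a linear probe as the
    decoder family (assuming a uniform binary Y)."""
    probe = nn.Linear(feats.shape[1], 2)
    popt = torch.optim.SGD(probe.parameters(), lr=0.1)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(probe(feats), labels)
        popt.zero_grad(); loss.backward(); popt.step()
    return math.log(2) - loss.item()

for epoch in range(3):
    for _ in range(20):  # train the main network a bit further
        loss = nn.functional.cross_entropy(net(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        hidden = net[1](net[0](x))  # ReLU features from the first layer
    info = usable_information(hidden, y)
    print(f"after {(epoch + 1) * 20} steps: usable info = {info:.3f} nats")
```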