no code implementations • 27 Nov 2024 • Patrick Mineault, Niccolò Zanichelli, Joanne Zichen Peng, Anton Arkhipov, Eli Bingham, Julian Jara-Ettinger, Emily Mackevicius, Adam Marblestone, Marcelo Mattar, Andrew Payne, Sophia Sanborn, Karen Schroeder, Zenna Tavares, Andreas Tolias
As AI systems become increasingly powerful, the need for safe AI has become more pressing.
1 code implementation • 12 Jul 2024 • Sophia Sanborn, Johan Mathe, Mathilde Papillon, Domas Buracas, Hansen J Lillemark, Christian Shewmake, Abby Bertics, Xavier Pennec, Nina Miolane
Echoing the 19th-century revolutions that gave rise to non-Euclidean geometry, an emerging line of research is redefining modern machine learning with non-Euclidean structures.
no code implementations • 10 Jul 2024 • Simon Mataigne, Johan Mathe, Sophia Sanborn, Christopher Hillar, Nina Miolane
An important problem in signal processing and deep learning is to achieve invariance to nuisance factors not relevant for the task.
1 code implementation • 13 Dec 2023 • Giovanni Luca Marchetti, Christopher Hillar, Danica Kragic, Sophia Sanborn
In this work, we formally prove that, under certain conditions, if a neural network is invariant to a finite group then its weights recover the Fourier transform on that group.
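To make the statement concrete, here is a minimal numerical sketch (an illustration of the result, not the paper's construction): for the cyclic group $\mathbb{Z}_n$, the Fourier transform on the group is the ordinary DFT, and a circulant weight matrix, i.e. any linear map that commutes with circular shifts, is diagonalized by the DFT basis. This is the sense in which Fourier structure is forced by group symmetry.

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
c = rng.normal(size=n)

# Circulant matrix: C[j, k] = c[(j - k) % n], i.e. convolution with kernel c.
C = np.array([[c[(j - k) % n] for k in range(n)] for j in range(n)])

# Columns of V are the group Fourier (DFT) characters of Z_n.
omega = np.exp(2j * np.pi / n)
V = omega ** np.outer(np.arange(n), np.arange(n))

# C V = V diag(fft(c)): the DFT characters are eigenvectors of every
# shift-equivariant linear map, with eigenvalues given by the DFT of c.
assert np.allclose(C @ V, V * np.fft.fft(c))
```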
2 code implementations • 30 Nov 2023 • Carlos G. Correa, Sophia Sanborn, Mark K. Ho, Frederick Callaway, Nathaniel D. Daw, Thomas L. Griffiths
Human behavior is often assumed to be hierarchically structured, made up of abstract actions that can be decomposed into concrete actions.
2 code implementations • NeurIPS 2023 • Sophia Sanborn, Nina Miolane
We introduce a general method for achieving robust group-invariance in group-equivariant convolutional neural networks ($G$-CNNs), which we call the $G$-triple-correlation ($G$-TC) layer.
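As an illustration of the invariant underlying the layer, here is a sketch of the triple correlation specialized to the cyclic group $\mathbb{Z}_n$ (the $G$-TC layer itself operates over a general group's composition; this toy assumes 1D circular shifts): the triple correlation is unchanged by group shifts of the input, since the shift reindexes the sum.

```python
import numpy as np

def triple_correlation(f):
    """T[s1, s2] = sum_t f[t] * f[t + s1] * f[t + s2], indices mod n."""
    n = len(f)
    # shifts[s, t] = f[(t + s) % n]
    shifts = np.stack([np.roll(f, -s) for s in range(n)])
    return np.einsum('t,it,jt->ij', f, shifts, shifts)

rng = np.random.default_rng(0)
f = rng.normal(size=16)

# Invariance: shifting the input leaves the triple correlation unchanged.
assert np.allclose(triple_correlation(f), triple_correlation(np.roll(f, 5)))
```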
no code implementations • 17 Oct 2023 • David Klindt, Sophia Sanborn, Francisco Acosta, Frédéric Poitevin, Nina Miolane
Single neurons in neural networks are often interpretable in that they represent individual, intuitively meaningful features.
1 code implementation • 26 Sep 2023 • Mathilde Papillon, Mustafa Hajij, Helen Jenne, Johan Mathe, Audun Myers, Theodore Papamarkou, Tolga Birdal, Tamal Dey, Tim Doster, Tegan Emerson, Gurusankar Gopalakrishnan, Devendra Govil, Aldo Guzmán-Sáenz, Henry Kvinge, Neal Livesay, Soham Mukherjee, Shreyas N. Samaga, Karthikeyan Natesan Ramamurthy, Maneel Reddy Karri, Paul Rosen, Sophia Sanborn, Robin Walters, Jens Agerberg, Sadrodin Barikbin, Claudio Battiloro, Gleb Bazhenov, Guillermo Bernardez, Aiden Brent, Sergio Escalera, Simone Fiorellino, Dmitrii Gavrilev, Mohammed Hassanin, Paul Häusner, Odin Hoff Gardaa, Abdelwahed Khamis, Manuel Lecha, German Magai, Tatiana Malygina, Rubén Ballester, Kalyan Nadimpalli, Alexander Nikitin, Abraham Rabinowitz, Alessandro Salatiello, Simone Scardapane, Luca Scofano, Suraj Singh, Jens Sjölund, Pavel Snopov, Indro Spinelli, Lev Telyatnikov, Lucia Testa, Maosheng Yang, Yixiao Yue, Olga Zaghen, Ali Zia, Nina Miolane
This paper presents the computational challenge on topological deep learning that was hosted within the ICML 2023 Workshop on Topology and Geometry in Machine Learning.
4 code implementations • 20 Apr 2023 • Mathilde Papillon, Sophia Sanborn, Mustafa Hajij, Nina Miolane
The natural world is full of complex systems characterized by intricate relations between their components: from social interactions between individuals in a social network to electrostatic interactions between atoms in a protein.
1 code implementation • 20 Dec 2022 • Francisco Acosta, Sophia Sanborn, Khanh Dao Duc, Manu Madhav, Nina Miolane
The neural manifold hypothesis postulates that the activity of a neural population forms a low-dimensional manifold whose structure reflects that of the encoded task variables.
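As a toy illustration of the hypothesis (not the paper's method, which quantifies the geometry of such manifolds), one can simulate an idealized head-direction population and observe that its activity concentrates near a ring whose circular structure mirrors the encoded angle:

```python
import numpy as np

n_neurons, n_samples = 50, 500
rng = np.random.default_rng(0)

angles = rng.uniform(0, 2 * np.pi, n_samples)                  # latent task variable
prefs = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)   # preferred directions

# Each neuron fires most when the heading matches its preference
# (von Mises-like tuning), plus a little observation noise.
activity = np.exp(np.cos(angles[:, None] - prefs[None, :]) / 0.5 ** 2)
activity += 0.05 * rng.normal(size=activity.shape)

# Project population activity onto its top-2 principal components:
# points lie near a circle, echoing the circular task variable.
X = activity - activity.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
proj = X @ Vt[:2].T
radii = np.linalg.norm(proj, axis=1)
print("ring-likeness (std/mean of radius):", radii.std() / radii.mean())
```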
1 code implementation • 7 Sep 2022 • Sophia Sanborn, Christian Shewmake, Bruno Olshausen, Christopher Hillar
We present a neural network architecture, Bispectral Neural Networks (BNNs), for learning representations that are invariant to the actions of compact commutative groups on the space over which a signal is defined.
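The underlying invariant is the bispectrum. A minimal sketch specialized to 1D cyclic translation (the paper treats general compact commutative groups): under a circular shift, each Fourier coefficient picks up a phase, and the bispectrum's phases cancel, so it is unchanged.

```python
import numpy as np

def bispectrum(f):
    """B[k1, k2] = F[k1] * F[k2] * conj(F[(k1 + k2) % n])."""
    F = np.fft.fft(f)
    n = len(f)
    k = np.arange(n)
    return F[:, None] * F[None, :] * np.conj(F[(k[:, None] + k[None, :]) % n])

rng = np.random.default_rng(0)
f = rng.normal(size=16)

# Shift invariance: the phase factors e^{-2 pi i s k / n} cancel.
assert np.allclose(bispectrum(f), bispectrum(np.roll(f, 3)))
```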
no code implementations • 5 Nov 2021 • Garrick Orchard, E. Paxon Frady, Daniel Ben Dayan Rubin, Sophia Sanborn, Sumit Bam Shrestha, Friedrich T. Sommer, Mike Davies
The biologically inspired spiking neurons used in neuromorphic computing are nonlinear filters with dynamic state variables -- very different from the stateless neuron models used in deep learning.
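To make "nonlinear filter with dynamic state variables" concrete, here is a minimal leaky integrate-and-fire neuron (an illustrative textbook model, not Loihi's specific neuron): its output at each timestep depends on an internal membrane potential that integrates past inputs, in contrast to a stateless activation such as y = relu(x).

```python
import numpy as np

def lif(inputs, decay=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for x in inputs:
        v = decay * v + x          # leaky integration (dynamic state)
        s = float(v >= threshold)  # nonlinear spike generation
        v = v * (1.0 - s)          # reset membrane potential on spike
        spikes.append(s)
    return np.array(spikes)

# A constant input yields a spike train set by the input history,
# not just the current input: the neuron is a temporal filter.
print(lif(np.full(20, 0.3)))
```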
no code implementations • 25 Sep 2019 • Sophia Sanborn, Michael Chang, Sergey Levine, Thomas Griffiths
Many approaches to hierarchical reinforcement learning aim to identify sub-goal structure in tasks.
no code implementations • 18 Jul 2018 • Sophia Sanborn, David D. Bourgin, Michael Chang, Thomas L. Griffiths
The importance of hierarchically structured representations for tractable planning has long been acknowledged.