1 code implementation • 20 Feb 2024 • Roman Pogodin, Antonin Schrab, Yazhe Li, Danica J. Sutherland, Arthur Gretton
We describe a data-efficient, kernel-based approach to statistical testing of conditional independence.
1 code implementation • 30 May 2023 • Roman Pogodin, Jonathan Cornford, Arna Ghosh, Gauthier Gidel, Guillaume Lajoie, Blake Richards
Overall, our work shows that the current paradigm in theoretical work on synaptic plasticity that assumes Euclidean synaptic geometry may be misguided and that it should be possible to experimentally determine the true geometry of synaptic plasticity in the brain.
1 code implementation • 16 Dec 2022 • Roman Pogodin, Namrata Deka, Yazhe Li, Danica J. Sutherland, Victor Veitch, Arthur Gretton
The procedure requires just a single ridge regression from $Y$ to kernelized features of $Z$, which can be done in advance.
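The "single ridge regression from $Y$ to kernelized features of $Z$" can be sketched as standard kernel ridge regression. This is a minimal illustration, not the paper's implementation: the kernel choice (Gaussian), the regularization value, and the function names here are all assumptions.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian kernel between rows of A and rows of B.
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def ridge_regress_features(Y, Z, lam=1e-2):
    # Kernel ridge regression from Y to kernelized features of Z:
    # solve (K_Y + n*lam*I) A = K_Z for the coefficient matrix A,
    # which can be computed once, in advance.
    K_Y = gaussian_kernel(Y, Y)
    K_Z = gaussian_kernel(Z, Z)
    n = K_Y.shape[0]
    return np.linalg.solve(K_Y + n * lam * np.eye(n), K_Z)
```

Because the regression depends only on the samples of $Y$ and $Z$, the resulting coefficients can be cached and reused across downstream computations.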
1 code implementation • NeurIPS 2021 • Roman Pogodin, Yash Mehta, Timothy P. Lillicrap, Peter E. Latham
This requires the network to pause occasionally for a sleep-like phase of "weight sharing".
1 code implementation • NeurIPS 2021 • Yazhe Li, Roman Pogodin, Danica J. Sutherland, Arthur Gretton
We approach self-supervised learning of image representations from a statistical dependence perspective, proposing Self-Supervised Learning with the Hilbert-Schmidt Independence Criterion (SSL-HSIC).
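The Hilbert-Schmidt Independence Criterion at the core of SSL-HSIC has a simple empirical estimator. The sketch below shows the standard biased HSIC estimator from two kernel matrices; it is illustrative only, and the kernel and names are assumptions, not the paper's training objective.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian kernel between rows of A and rows of B.
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def hsic_biased(K, L):
    # Biased empirical HSIC from two n x n kernel matrices:
    # HSIC = trace(K H L H) / (n - 1)^2, with centering H = I - 11^T / n.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

A larger HSIC value indicates stronger statistical dependence between the two variables whose kernel matrices are supplied.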
1 code implementation • NeurIPS 2020 • Roman Pogodin, Peter E. Latham
The state-of-the-art machine learning approach to training deep neural networks, backpropagation, is implausible for real neural networks: neurons need to know their outgoing weights; training alternates between a bottom-up forward pass (computation) and a top-down backward pass (learning); and the algorithm often needs precise labels for many data points.
1 code implementation • NeurIPS Workshop Neuro_AI 2019 • Roman Pogodin, Dane Corneil, Alexander Seeholzer, Joseph Heng, Wulfram Gerstner
Reservoir computing is a powerful tool to explain how the brain learns temporal sequences, such as movements, but existing learning schemes are either biologically implausible or too inefficient to explain animal performance.
no code implementations • 19 Mar 2019 • Roman Pogodin, Tor Lattimore
Finally, we study bounds that depend on the degree of separation of the arms, generalising the results of Cowan and Katehakis [2015] from the stochastic setting to the adversarial one, and improving the result of Seldin and Slivkins [2014] by a factor of log(n)/log(log(n)).
1 code implementation • 5 Aug 2017 • Roman Pogodin, Mikhail Krechetov, Yury Maximov
We propose a method for low-rank semidefinite programming in application to the semidefinite relaxation of unconstrained binary quadratic problems.
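One common way to realize low-rank semidefinite programming for such relaxations is the Burer-Monteiro factorization X = V Vᵀ with unit-norm rows, which enforces diag(X) = 1 by construction. The sketch below uses projected gradient ascent plus hyperplane rounding; this is a generic illustration of the technique, and the specific method of the paper may differ.

```python
import numpy as np

def low_rank_binary_quadratic(C, k=3, steps=500, lr=0.1, seed=0):
    # Burer-Monteiro sketch for max trace(C X), X = V V^T, diag(X) = 1:
    # keep V's rows on the unit sphere so the diagonal constraint holds,
    # and take projected gradient ascent steps on trace(C V V^T).
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    V = rng.normal(size=(n, k))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    for _ in range(steps):
        V += lr * (C @ V)  # gradient of trace(C V V^T) is 2 C V (up to scale)
        V /= np.linalg.norm(V, axis=1, keepdims=True)  # project rows back
    # Hyperplane rounding: a random direction turns rows of V into +/-1 labels.
    r = rng.normal(size=k)
    return np.sign(V @ r)
```

The rank k trades memory and speed against solution quality; for the semidefinite relaxation of binary quadratic problems, small ranks are often sufficient in practice.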
Optimization and Control