Search Results for author: Roman Pogodin

Found 9 papers, 8 papers with code

Practical Kernel Tests of Conditional Independence

1 code implementation • 20 Feb 2024 • Roman Pogodin, Antonin Schrab, Yazhe Li, Danica J. Sutherland, Arthur Gretton

We describe a data-efficient, kernel-based approach to statistical testing of conditional independence.

Synaptic Weight Distributions Depend on the Geometry of Plasticity

1 code implementation • 30 May 2023 • Roman Pogodin, Jonathan Cornford, Arna Ghosh, Gauthier Gidel, Guillaume Lajoie, Blake Richards

Overall, our work shows that the current paradigm in theoretical work on synaptic plasticity, which assumes Euclidean synaptic geometry, may be misguided, and that it should be possible to experimentally determine the true geometry of synaptic plasticity in the brain.

Efficient Conditionally Invariant Representation Learning

1 code implementation • 16 Dec 2022 • Roman Pogodin, Namrata Deka, Yazhe Li, Danica J. Sutherland, Victor Veitch, Arthur Gretton

The procedure requires just a single ridge regression from $Y$ to kernelized features of $Z$, which can be done in advance.

Fairness • regression +1
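The single-ridge-regression step described in this entry's abstract can be illustrated in a few lines. Below is a minimal sketch, assuming random Fourier features stand in for the kernelized features of $Z$; the variable names and the RBFSampler/Ridge choices are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: ridge regression from Y to (approximate) kernel
# features of Z -- the precomputable step described in the abstract.
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
Y = rng.normal(size=(500, 1))            # labels
Z = Y + 0.5 * rng.normal(size=(500, 1))  # conditioning variable, correlated with Y

# Approximate kernelized features of Z with random Fourier features
# (an assumption for this sketch; any explicit feature map would do).
phi = RBFSampler(gamma=1.0, n_components=256, random_state=0)
Phi_Z = phi.fit_transform(Z)

# A single ridge regression from Y to the features of Z.
ridge = Ridge(alpha=1e-2).fit(Y, Phi_Z)
residuals = Phi_Z - ridge.predict(Y)     # conditionally centred features
```

Because the regression depends only on $Y$ and $Z$, it can indeed be fit once, in advance, before any representation learning takes place.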

Towards Biologically Plausible Convolutional Networks

1 code implementation • NeurIPS 2021 • Roman Pogodin, Yash Mehta, Timothy P. Lillicrap, Peter E. Latham

This requires the network to pause occasionally for a sleep-like phase of "weight sharing".

Self-Supervised Learning with Kernel Dependence Maximization

1 code implementation • NeurIPS 2021 • Yazhe Li, Roman Pogodin, Danica J. Sutherland, Arthur Gretton

We approach self-supervised learning of image representations from a statistical dependence perspective, proposing Self-Supervised Learning with the Hilbert-Schmidt Independence Criterion (SSL-HSIC).

Depth Estimation • Object Recognition +2
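Since this entry names the Hilbert-Schmidt Independence Criterion explicitly, a minimal sketch of the standard biased empirical HSIC estimator may be useful; the RBF kernel, bandwidth, and toy data below are illustrative assumptions, and the SSL-HSIC training objective itself is not reproduced here.

```python
# Minimal sketch of the (biased) empirical HSIC estimator between two
# sets of representations; kernel and scale choices are illustrative.
import numpy as np

def rbf_kernel(X, sigma=1.0):
    """Gaussian (RBF) kernel matrix for the rows of X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma**2))

def hsic_biased(X, Y, sigma=1.0):
    """Biased HSIC estimate: trace(K H L H) / (n - 1)^2."""
    n = X.shape[0]
    K, L = rbf_kernel(X, sigma), rbf_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n   # centring matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))            # e.g. features of one augmentation
Y = X + 0.1 * rng.normal(size=(200, 16))  # features of another augmentation
print(hsic_biased(X, Y))                  # large when representations are dependent
```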

Kernelized information bottleneck leads to biologically plausible 3-factor Hebbian learning in deep networks

1 code implementation • NeurIPS 2020 • Roman Pogodin, Peter E. Latham

The state-of-the-art machine learning approach to training deep neural networks, backpropagation, is implausible for real neural networks: neurons need to know their outgoing weights; training alternates between a bottom-up forward pass (computation) and a top-down backward pass (learning); and the algorithm often needs precise labels for many data points.

Image Classification

Working memory facilitates reward-modulated Hebbian learning in recurrent neural networks

1 code implementation • NeurIPS Workshop Neuro_AI 2019 • Roman Pogodin, Dane Corneil, Alexander Seeholzer, Joseph Heng, Wulfram Gerstner

Reservoir computing is a powerful tool to explain how the brain learns temporal sequences, such as movements, but existing learning schemes are either biologically implausible or too inefficient to explain animal performance.

Temporal Sequences

On First-Order Bounds, Variance and Gap-Dependent Bounds for Adversarial Bandits

no code implementations • 19 Mar 2019 • Roman Pogodin, Tor Lattimore

Finally, we study bounds that depend on the degree of separation of the arms, generalising the results of Cowan and Katehakis [2015] from the stochastic setting to the adversarial setting and improving the result of Seldin and Slivkins [2014] by a factor of $\log(n)/\log(\log(n))$.

Efficient Rank Minimization to Tighten Semidefinite Programming for Unconstrained Binary Quadratic Optimization

1 code implementation • 5 Aug 2017 • Roman Pogodin, Mikhail Krechetov, Yury Maximov

We propose a method for low-rank semidefinite programming in application to the semidefinite relaxation of unconstrained binary quadratic problems.

Optimization and Control
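For context on this entry, the standard SDP relaxation of an unconstrained binary quadratic problem is shown below in cvxpy. This is a sketch of the relaxation being tightened, not the paper's rank-minimization method; the data, solver, and rounding heuristic are illustrative assumptions.

```python
# Sketch of the standard SDP relaxation of an unconstrained binary
# quadratic problem max x^T A x, x in {-1, +1}^n.
import numpy as np
import cvxpy as cp

n = 10
rng = np.random.default_rng(0)
A = rng.normal(size=(n, n))
A = (A + A.T) / 2                        # symmetrize the objective matrix

# Relax xx^T to a PSD matrix X with unit diagonal.
X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0, cp.diag(X) == 1]
prob = cp.Problem(cp.Maximize(cp.trace(A @ X)), constraints)
prob.solve()

# Round back to {-1, +1} via the leading eigenvector -- a common
# heuristic, not the paper's rank-minimization procedure.
w, v = np.linalg.eigh(X.value)
x = np.where(v[:, -1] >= 0, 1.0, -1.0)
print(prob.value, x @ A @ x)
```

The relaxation's optimum upper-bounds the binary problem; the paper proposes a low-rank treatment of $X$ to tighten this kind of relaxation efficiently.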
