Search Results for author: Alexander Meulemans

Found 8 papers, 6 papers with code

Structured Entity Extraction Using Large Language Models

no code implementations • 6 Feb 2024 • Haolun Wu, Ye Yuan, Liana Mikaelyan, Alexander Meulemans, Xue Liu, James Hensman, Bhaskar Mitra

Recent advances in machine learning have significantly impacted the field of information extraction, with Large Language Models (LLMs) playing a pivotal role in extracting structured information from unstructured text.

The least-control principle for local learning at equilibrium

1 code implementation • 4 Jul 2022 • Alexander Meulemans, Nicolas Zucchet, Seijin Kobayashi, Johannes von Oswald, João Sacramento

As special cases, these equilibrium systems include models of great current interest in both neuroscience and machine learning, such as deep neural networks, equilibrium recurrent neural networks, deep equilibrium models, and meta-learning.

BIG-bench Machine Learning • Meta-Learning
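As an illustration of the equilibrium systems listed in the entry above, here is a minimal NumPy sketch of a fixed-point ("deep equilibrium") layer; the tanh update, the naive fixed-point iteration, and the weight scaling are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def deq_layer(x, W, U, b, n_iters=50, tol=1e-6):
    """Minimal deep-equilibrium-style layer: iterate z = tanh(W z + U x + b)
    until z stops changing, and treat the fixed point as the layer's output.
    (Illustrative sketch only; not the paper's implementation.)"""
    z = np.zeros(W.shape[0])
    for _ in range(n_iters):
        z_next = np.tanh(W @ z + U @ x + b)
        if np.linalg.norm(z_next - z) < tol:
            break
        z = z_next
    return z

rng = np.random.default_rng(0)
x = rng.normal(size=8)
W = 0.3 * rng.normal(size=(16, 16)) / np.sqrt(16)  # scaled down so the update is (likely) a contraction
U = rng.normal(size=(16, 8)) / np.sqrt(8)
b = np.zeros(16)
z = deq_layer(x, W, U, b)
print(np.linalg.norm(z - np.tanh(W @ z + U @ x + b)))  # ~0: z is a fixed point
```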

Minimizing Control for Credit Assignment with Strong Feedback

2 code implementations • 14 Apr 2022 • Alexander Meulemans, Matilde Tristany Farinha, Maria R. Cervera, João Sacramento, Benjamin F. Grewe

Building upon deep feedback control (DFC), a recently proposed credit assignment method, we combine strong feedback influences on neural activity with gradient-based learning and show that this naturally leads to a novel view on neural network optimization.
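The control-based credit assignment described in this entry and the one below can be caricatured with a toy two-layer linear network: a feedback controller nudges the hidden activity just enough that the output reaches the target, and each layer then updates locally toward the controlled activity. The minimum-norm (pseudo-inverse) control and the learning rule below are illustrative assumptions, not the published DFC algorithm.

```python
import numpy as np

def control_based_update(x, y_target, W1, W2, lr=0.01):
    """Toy sketch of control-based credit assignment on a 2-layer linear net.
    A controller nudges the hidden activity with a minimum-norm signal
    (computed here with a pseudo-inverse) so the output hits the target;
    each layer then updates locally toward the controlled activity.
    (Illustrative assumptions only; not the published DFC algorithm.)"""
    h_ff = W1 @ x                               # feedforward hidden activity
    out_err = y_target - W2 @ h_ff              # output error before control
    u = np.linalg.pinv(W2) @ out_err            # minimum-norm control signal
    h_ctrl = h_ff + u                           # controlled hidden activity
    W1 = W1 + lr * np.outer(h_ctrl - h_ff, x)                 # local hidden-layer update
    W2 = W2 + lr * np.outer(y_target - W2 @ h_ctrl, h_ctrl)   # ~0 here: controlled output is already on target
    return W1, W2

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 8)) / np.sqrt(8)
W2 = rng.normal(size=(4, 16)) / np.sqrt(16)
x, y = rng.normal(size=8), rng.normal(size=4)
for _ in range(300):
    W1, W2 = control_based_update(x, y, W1, W2)
print(np.linalg.norm(y - W2 @ (W1 @ x)))  # feedforward error shrinks toward 0
```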

Credit Assignment in Neural Networks through Deep Feedback Control

3 code implementations • NeurIPS 2021 • Alexander Meulemans, Matilde Tristany Farinha, Javier García Ordóñez, Pau Vilimelis Aceituno, João Sacramento, Benjamin F. Grewe

The success of deep learning sparked interest in whether the brain learns by using similar techniques for assigning credit to each synaptic weight for its contribution to the network output.

Challenges for Using Impact Regularizers to Avoid Negative Side Effects

no code implementations • 29 Jan 2021 • David Lindner, Kyle Matoba, Alexander Meulemans

Finally, we explore promising directions to overcome the unsolved challenges in preventing negative side effects with impact regularizers.

Reinforcement Learning (RL)
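An impact regularizer, as referenced in the entry above, adds a penalty for pushing the environment away from a baseline state on top of the task reward. The L1 deviation measure and the do-nothing baseline in this sketch are illustrative assumptions.

```python
import numpy as np

def impact_regularized_reward(reward, state, baseline_state, beta=1.0):
    """Minimal sketch of an impact regularizer: subtract a penalty that grows
    with how far the agent has pushed the environment away from a baseline
    state (e.g. the state that would have resulted from doing nothing).
    The baseline and the L1 deviation measure are illustrative assumptions."""
    deviation = np.abs(np.asarray(state) - np.asarray(baseline_state)).sum()
    return reward - beta * deviation

# Hypothetical example: both actions earn the same task reward, but the one
# that disturbs more environment features is penalized more heavily.
print(impact_regularized_reward(1.0, state=[0, 3, 1], baseline_state=[0, 0, 1]))  # -2.0
print(impact_regularized_reward(1.0, state=[0, 0, 1], baseline_state=[0, 0, 1]))  #  1.0
```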

Neural networks with late-phase weights

2 code implementations • ICLR 2021 • Johannes von Oswald, Seijin Kobayashi, Alexander Meulemans, Christian Henning, Benjamin F. Grewe, João Sacramento

The largely successful method of training neural networks is to learn their weights using some variant of stochastic gradient descent (SGD).

Ranked #70 on Image Classification on CIFAR-100 (using extra training data)

Image Classification
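For reference, the SGD baseline mentioned in the abstract snippet above is sketched below on a trivial quadratic objective; this is a generic reminder of the update rule, not the paper's late-phase-weights method.

```python
import numpy as np

def sgd_step(w, grad, lr=0.1):
    """One step of plain stochastic gradient descent: move the weights a small
    distance against the (mini-batch) gradient of the loss. Generic sketch of
    the baseline mentioned in the snippet, not the paper's method."""
    return w - lr * grad

# Tiny example: minimize f(w) = ||w||^2 / 2, whose gradient is w itself.
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sgd_step(w, grad=w)
print(w)  # close to [0, 0]
```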

A Theoretical Framework for Target Propagation

2 code implementations • NeurIPS 2020 • Alexander Meulemans, Francesco S. Carzaniga, Johan A. K. Suykens, João Sacramento, Benjamin F. Grewe

Here, we analyze target propagation (TP), a popular but not yet fully understood alternative to backpropagation (BP), from the standpoint of mathematical optimization.
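As a rough illustration of target propagation, the toy below maps the output target backwards through a learned (approximately inverting) feedback matrix and trains each layer on a purely local error; the inverse-training rule, the difference correction, and all hyperparameters are assumptions for the sketch, not the paper's exact formulation.

```python
import numpy as np

def tp_toy_step(x, y_target, W1, W2, Q2, lr=0.01, lr_inv=0.05):
    """Toy sketch of target propagation on a 2-layer linear net. A feedback
    matrix Q2 is trained to approximately invert the top layer; the output
    target is then mapped backwards through Q2 to give the hidden layer its
    own target, and every layer reduces a purely local error toward its
    target. (Illustrative assumptions; not the paper's formulation.)"""
    h = W1 @ x                                   # forward pass
    y = W2 @ h
    # Train the feedback (inverse) weights with a local reconstruction loss
    Q2 = Q2 + lr_inv * np.outer(h - Q2 @ y, y)
    # Propagate the target backwards, with a "difference" correction for imperfect inverses
    h_target = h + Q2 @ (y_target - y)
    # Layer-local updates toward the targets
    W1 = W1 + lr * np.outer(h_target - h, x)
    W2 = W2 + lr * np.outer(y_target - y, h)
    return W1, W2, Q2

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 8)) / np.sqrt(8)
W2 = rng.normal(size=(4, 16)) / np.sqrt(16)
Q2 = np.zeros((16, 4))
x, y = rng.normal(size=8), rng.normal(size=4)
for _ in range(500):
    W1, W2, Q2 = tp_toy_step(x, y, W1, W2, Q2)
print(np.linalg.norm(y - W2 @ (W1 @ x)))  # local-target training drives this toward 0
```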
