Search Results for author: Louis Kirsch

Found 19 papers, 8 papers with code

Language Agents as Optimizable Graphs

1 code implementation • 26 Feb 2024 • Mingchen Zhuge, Wenyi Wang, Louis Kirsch, Francesco Faccio, Dmitrii Khizbullin, Jürgen Schmidhuber

Various human-designed prompt engineering techniques have been proposed to improve problem solvers based on Large Language Models (LLMs), yielding many disparate code bases.

Prompt Engineering
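
The title's framing (agent workflows as optimizable graphs) can be illustrated with a minimal sketch; the node names and the `call_llm` stub below are hypothetical illustrations, not the paper's actual API.

```python
# Minimal sketch: an agent workflow as a directed graph of operations.
# Node names and `call_llm` are hypothetical, not the paper's API.

def call_llm(prompt: str) -> str:
    return f"<llm output for: {prompt}>"  # stand-in for a real LLM call

class Node:
    def __init__(self, name, op):
        self.name, self.op, self.successors = name, op, []

    def connect(self, other):
        self.successors.append(other)

# Each prompt-engineering trick becomes a node; the edges (which an
# optimizer could keep or prune) define how intermediate outputs flow.
draft = Node("draft", lambda task: call_llm(f"Solve: {task}"))
critic = Node("critic", lambda text: call_llm(f"Critique: {text}"))
revise = Node("revise", lambda text: call_llm(f"Revise using critique: {text}"))
draft.connect(critic)
critic.connect(revise)

def run(node, x):
    y = node.op(x)
    for nxt in node.successors:
        y = run(nxt, y)
    return y

print(run(draft, "2 + 2"))
```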

Discovering Temporally-Aware Reinforcement Learning Algorithms

1 code implementation • 8 Feb 2024 • Matthew Thomas Jackson, Chris Lu, Louis Kirsch, Robert Tjarko Lange, Shimon Whiteson, Jakob Nicolaus Foerster

We propose a simple augmentation to two existing objective discovery approaches that allows the discovered algorithm to dynamically update its objective function throughout the agent's training procedure, resulting in expressive schedules and increased generalization across different training horizons.

Meta-Learning • reinforcement-learning
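
As a minimal sketch of the temporal conditioning described above, assuming the discovered objective is a small network that receives normalized training progress as an extra input; the architecture and inputs are illustrative, not the paper's exact parameterization.

```python
import numpy as np

# Minimal sketch of a temporally-aware learned objective: a tiny
# meta-network maps (td_error, training progress) to a loss value.
# Architecture and inputs are illustrative assumptions.

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

def learned_objective(td_error: float, progress: float) -> float:
    """progress = current_step / total_steps, in [0, 1]."""
    h = np.tanh(W1 @ np.array([td_error, progress]) + b1)
    return float((W2 @ h + b2)[0])

# Because `progress` is an input, meta-optimization can discover
# schedules, e.g. exploration-heavy behavior early and greedy late.
for step in [0, 5000, 10000]:
    print(step, learned_objective(td_error=0.5, progress=step / 10000))
```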

Brain-inspired learning in artificial neural networks: a review

no code implementations • 18 May 2023 • Samuel Schmidgall, Jascha Achterberg, Thomas Miconi, Louis Kirsch, Rojin Ziaei, S. Pardis Hajiseyedrazi, Jason Eshraghian

Artificial neural networks (ANNs) have emerged as an essential tool in machine learning, achieving remarkable success across diverse domains, including image and speech generation, game playing, and robotics.

Learning One Abstract Bit at a Time Through Self-Invented Experiments Encoded as Neural Networks

no code implementations • 29 Dec 2022 • Vincent Herrmann, Louis Kirsch, Jürgen Schmidhuber

There are two important things in science: (A) Finding answers to given questions, and (B) Coming up with good questions.

Eliminating Meta Optimization Through Self-Referential Meta Learning

no code implementations • 29 Dec 2022 • Louis Kirsch, Jürgen Schmidhuber

We discuss the relationship of such systems to in-context and memory-based meta learning and show that self-referential neural networks require functionality to be reused in the form of parameter sharing.

Meta-Learning
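
A toy illustration of the parameter-sharing point above, assuming a single tiny rule that is reused to rewrite every weight; this is an illustrative sketch, not the paper's system.

```python
import numpy as np

# Minimal sketch of parameter sharing in a self-referential update: the
# same tiny rule (one vector `u`) is applied to every parameter, so the
# machinery that rewrites the weights is itself reused. Illustrative toy.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))          # the network's own weights
u = rng.normal(size=3) * 0.01        # shared, reusable update rule

def self_update(W, x):
    y = np.tanh(W @ x)
    # The same rule `u` maps local signals (pre, post, weight) to a
    # delta for every entry of W -- functionality reuse via sharing.
    pre, post = x[None, :], y[:, None]
    dW = u[0] * pre * post + u[1] * W + u[2]
    return W + dW, y

x = rng.normal(size=4)
for _ in range(3):
    W, y = self_update(W, x)
print(W.round(2))
```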

General-Purpose In-Context Learning by Meta-Learning Transformers

no code implementations • 8 Dec 2022 • Louis Kirsch, James Harrison, Jascha Sohl-Dickstein, Luke Metz

We further show that the capabilities of meta-trained algorithms are bottlenecked by the accessible state size (memory) determining the next prediction, unlike standard models which are thought to be bottlenecked by parameter count.

In-Context Learning • Inductive Bias • +1
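
A minimal sketch of the meta-training setup implied above, assuming random regression tasks and a plain RNN; the fixed-size hidden state plays the role of the accessible state (memory).

```python
import numpy as np

# Minimal sketch: a sequence model sees (x, y) examples from a freshly
# sampled task and must predict the next y, so learning happens in its
# hidden state, not in weight updates. Task family and sizes are
# illustrative assumptions.

rng = np.random.default_rng(0)
STATE = 16                            # memory size that bottlenecks ICL
Wh = rng.normal(size=(STATE, STATE)) * 0.1
Wx = rng.normal(size=(STATE, 2)) * 0.1
Wo = rng.normal(size=(1, STATE)) * 0.1

def rollout(task_w, n=5):
    h = np.zeros(STATE)               # all task knowledge must fit here
    preds = []
    for _ in range(n):
        x = rng.normal()
        preds.append(float((Wo @ h)[0]))  # predict y before seeing it
        y = task_w * x                # ground truth from the sampled task
        h = np.tanh(Wh @ h + Wx @ np.array([x, y]))
    return preds

# Meta-training would optimize Wh, Wx, Wo across many sampled tasks so
# that predictions improve within a single rollout.
print(rollout(task_w=rng.normal()))
```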

Exploring through Random Curiosity with General Value Functions

1 code implementation • 18 Nov 2022 • Aditya Ramesh, Louis Kirsch, Sjoerd van Steenkiste, Jürgen Schmidhuber

Furthermore, RC-GVF significantly outperforms previous methods in the absence of ground-truth episodic counts in the partially observable MiniGrid environments.

Efficient Exploration
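
The snippet reports results; as a hedged sketch of the underlying idea (a general value function over random cumulants whose TD error serves as intrinsic reward), with illustrative shapes and a tabular predictor that are assumptions, not the paper's exact setup.

```python
import numpy as np

# Minimal sketch of curiosity from a general value function: a fixed
# random network defines cumulants, a learned predictor estimates their
# discounted sum, and its TD error is the intrinsic reward.

rng = np.random.default_rng(0)
N_STATES, K, GAMMA, LR = 10, 4, 0.9, 0.5
cumulants = rng.normal(size=(N_STATES, K))   # fixed random cumulants
gvf = np.zeros((N_STATES, K))                # learned GVF predictions

def intrinsic_reward(s, s_next):
    td = cumulants[s] + GAMMA * gvf[s_next] - gvf[s]
    gvf[s] += LR * td                        # TD(0) update of the GVF
    return float(np.linalg.norm(td))         # surprise as exploration bonus

# Revisited transitions become predictable, so the bonus decays.
for _ in range(5):
    print(intrinsic_reward(s=0, s_next=1))
```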

The Benefits of Model-Based Generalization in Reinforcement Learning

1 code implementation • 4 Nov 2022 • Kenny Young, Aditya Ramesh, Louis Kirsch, Jürgen Schmidhuber

First, we provide a simple theorem motivating how learning a model as an intermediate step can narrow down the set of possible value functions more than learning a value function directly from data using the Bellman equation.

Model-based Reinforcement Learning • reinforcement-learning • +1
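
A toy numeric sketch of that theorem's intuition, under an assumed 3-state chain and an assumed generalization rule for the learned model; it illustrates the narrowing effect, not the theorem itself.

```python
import numpy as np

# Toy sketch of the snippet's point: Bellman fitting constrains values
# only at observed source states, while a model (which can generalize)
# pins values down everywhere it makes predictions.

GAMMA = 0.9
data = [(0, 1, 0.0), (1, 2, 1.0)]            # (s, s_next, reward) samples

# Direct Bellman fitting: state 2 never appears as a source state, so
# ANY value of V[2] yields a Bellman-consistent solution on this data.
for v2 in [0.0, 10.0]:
    V = np.array([0.0, 0.0, v2])
    for _ in range(100):
        for s, s2, r in data:
            V[s] = r + GAMMA * V[s2]
    print("Bellman-consistent V:", V.round(2))

# Model-based: suppose the learned model generalizes "unseen states
# self-loop with reward 0"; that prediction alone forces V[2] = 0 and
# thereby narrows the set of admissible value functions.
model = {s: (s2, r) for s, s2, r in data}
model[2] = (2, 0.0)
V = np.zeros(3)
for _ in range(200):
    for s, (s2, r) in model.items():
        V[s] = r + GAMMA * V[s2]
print("model-based V:", V.round(2))
```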

Introducing Symmetries to Black Box Meta Reinforcement Learning

no code implementations • 22 Sep 2021 • Louis Kirsch, Sebastian Flennerhag, Hado van Hasselt, Abram Friesen, Junhyuk Oh, Yutian Chen

We show that a recent successful meta RL approach that meta-learns an objective for backpropagation-based learning exhibits certain symmetries (specifically the reuse of the learning rule, and invariance to input and output permutations) that are not present in typical black-box meta RL systems.

Meta-Learning • Meta Reinforcement Learning • +2
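
A minimal sketch of those symmetries, assuming the meta-learned rule is a small quadratic function applied identically at every connection (an illustrative stand-in for a learned network): reusing one rule makes the update equivariant to permuting inputs.

```python
import numpy as np

# Minimal sketch: if the same small rule `theta` updates every
# connection from its local signals, the update commutes with
# permuting the inputs -- the symmetry named in the snippet.

rng = np.random.default_rng(0)
theta = rng.normal(size=3)                       # shared learning rule

def update(W, pre, post):
    # Same rule at every connection -> reuse + permutation equivariance.
    return W + theta[0] * np.outer(post, pre) + theta[1] * W + theta[2]

W = rng.normal(size=(3, 3))
pre, post = rng.normal(size=3), rng.normal(size=3)
P = np.eye(3)[[2, 0, 1]]                         # an input permutation

a = update(W @ P.T, pre @ P.T, post)             # permute inputs first
b = update(W, pre, post) @ P.T                   # or permute afterwards
print(np.allclose(a, b))                         # True: equivariant
```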

Meta Learning Backpropagation And Improving It

no code implementations • NeurIPS 2021 • Louis Kirsch, Jürgen Schmidhuber

Many concepts have been proposed for meta learning with neural networks (NNs), e.g., NNs that learn to reprogram fast weights, Hebbian plasticity, learned learning rules, and meta recurrent NNs.

Meta-Learning
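
One of the listed concepts, Hebbian fast weights, as a minimal sketch; the coefficients here are illustrative stand-ins for quantities that would be meta-learned.

```python
import numpy as np

# Minimal sketch of Hebbian fast weights: a slow matrix is augmented by
# a rapidly updated outer-product term. `eta` and `decay` are
# illustrative; in the meta-learning setting a rule producing such
# updates would itself be learned.

rng = np.random.default_rng(0)
W_slow = rng.normal(size=(4, 4)) * 0.5
W_fast = np.zeros((4, 4))
eta, decay = 0.1, 0.9

def step(x):
    global W_fast
    y = np.tanh((W_slow + W_fast) @ x)
    W_fast = decay * W_fast + eta * np.outer(y, x)  # Hebbian: post x pre
    return y

for _ in range(3):
    print(step(rng.normal(size=4)).round(2))
```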

Parameter-Based Value Functions

1 code implementation • ICLR 2021 • Francesco Faccio, Louis Kirsch, Jürgen Schmidhuber

We introduce a class of value functions called Parameter-Based Value Functions (PBVFs) whose inputs include the policy parameters.

Continuous Control • Reinforcement Learning (RL)
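
A minimal sketch of a PBVF as described above, assuming a linear value function over flattened policy parameters; the sizes and the linear form are illustrative, not the paper's architecture.

```python
import numpy as np

# Minimal sketch: a value function whose INPUT includes the policy
# parameters, so it can score policies it has never executed.

rng = np.random.default_rng(0)
N_POLICY_PARAMS = 6
w = rng.normal(size=N_POLICY_PARAMS)          # parameters of V itself

def V(policy_params: np.ndarray) -> float:
    """V(theta): predicted return as a function of policy parameters."""
    return float(w @ policy_params)

theta = rng.normal(size=N_POLICY_PARAMS)      # some policy's parameters
print("predicted return:", V(theta))

# Because V is differentiable in theta, gradient ascent on V gives a
# policy improvement direction without rolling out the new policy.
grad = w                                      # dV/dtheta for linear V
theta_improved = theta + 0.1 * grad
print("after one ascent step:", V(theta_improved))
```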

Gaussian Mean Field Regularizes by Limiting Learned Information

no code implementations • 12 Feb 2019 • Julius Kunze, Louis Kirsch, Hippolyt Ritter, David Barber

Variational inference with a factorized Gaussian posterior estimate is a widely used approach for learning parameters and hidden variables.

Variational Inference
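
A minimal sketch of that setup: a factorized Gaussian posterior sampled via the reparameterization trick, with its closed-form KL to a standard-normal prior (the model and sizes are illustrative).

```python
import numpy as np

# Minimal sketch of mean-field Gaussian variational inference:
# q(w) = N(mu, diag(sigma^2)), sampled by reparameterization, plus the
# closed-form KL(q || N(0, I)) that acts as the regularizer.

rng = np.random.default_rng(0)
mu = np.zeros(3)
log_sigma = np.zeros(3)

def sample_weights():
    eps = rng.normal(size=3)
    return mu + np.exp(log_sigma) * eps       # w = mu + sigma * eps

def kl_to_standard_normal():
    s2 = np.exp(2 * log_sigma)
    return float(0.5 * np.sum(s2 + mu**2 - 1.0 - 2 * log_sigma))

# The ELBO trades data fit against this KL; the KL term is what limits
# the information carried by the learned parameters.
w = sample_weights()
print("sampled w:", w.round(3), " KL:", round(kl_to_standard_normal(), 3))
```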

Noisy Information Bottlenecks for Generalization

no code implementations • 27 Sep 2018 • Julius Kunze, Louis Kirsch, Hippolyt Ritter, David Barber

We propose Noisy Information Bottlenecks (NIB) to limit mutual information between learned parameters and the data through noise.
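
A minimal sketch of the mechanism, assuming a linear model and an illustrative noise scale; the point is that only noise-corrupted parameters ever touch the data.

```python
import numpy as np

# Minimal sketch of a noisy information bottleneck on parameters:
# injecting noise into the learned parameters before use bounds the
# mutual information I(parameters; data). Model and scale are
# illustrative assumptions.

rng = np.random.default_rng(0)
w = rng.normal(size=5)                        # learned parameters
NOISE_STD = 0.1                               # larger noise -> tighter bound

def predict(x: np.ndarray) -> float:
    w_noisy = w + rng.normal(size=w.shape) * NOISE_STD
    return float(w_noisy @ x)                 # only w_noisy sees the data

print(predict(rng.normal(size=5)))
```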
