Search Results for author: Michał Bortkiewicz

Found 5 papers, 1 paper with code

Fine-tuning Reinforcement Learning Models is Secretly a Forgetting Mitigation Problem

no code implementations • 5 Feb 2024 • Maciej Wołczyk, Bartłomiej Cupiał, Mateusz Ostaszewski, Michał Bortkiewicz, Michał Zając, Razvan Pascanu, Łukasz Kuciński, Piotr Miłoś

Fine-tuning is a widespread technique that allows practitioners to transfer pre-trained capabilities, as recently showcased by the successful applications of foundation models.

Montezuma's Revenge • NetHack • +2

Emergency action termination for immediate reaction in hierarchical reinforcement learning

no code implementations • 11 Nov 2022 • Michał Bortkiewicz, Jakub Łyskawa, Paweł Wawrzyński, Mateusz Ostaszewski, Artur Grudkowski, Tomasz Trzciński

In this paper, we address this gap in state-of-the-art approaches and propose a method in which the validity of higher-level actions (and thus of lower-level goals) is constantly verified at the higher level.

Hierarchical Reinforcement Learning • reinforcement-learning • +1
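The abstract describes the core idea: instead of committing to a higher-level action for a fixed horizon, the higher level re-checks its validity at every step and can terminate it immediately. A minimal sketch of that control loop, with all names (`high_policy`, `low_policy`, `is_still_valid`, `ToyEnv`) purely illustrative and not taken from the paper:

```python
# Hypothetical sketch of emergency action termination in hierarchical RL:
# the higher level constantly verifies the active subgoal and replaces it
# as soon as it becomes invalid, rather than waiting for a fixed horizon.
# All names here are illustrative assumptions, not the paper's API.

def run_episode(env, high_policy, low_policy, is_still_valid, max_steps=100):
    state = env.reset()
    subgoal = high_policy(state)              # higher level picks a subgoal
    for _ in range(max_steps):
        # Emergency termination check: verify the subgoal every step.
        if not is_still_valid(state, subgoal):
            subgoal = high_policy(state)      # terminate and re-plan at once
        action = low_policy(state, subgoal)   # lower level acts toward subgoal
        state, reward, done = env.step(action)
        if done:
            break
    return state
```

The design point is that the validity check runs at the higher level's frequency, so an obsolete subgoal never survives longer than one environment step.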

Progressive Latent Replay for efficient Generative Rehearsal

no code implementations • 4 Jul 2022 • Stanisław Pawlak, Filip Szatkowski, Michał Bortkiewicz, Jan Dubiński, Tomasz Trzciński

We introduce a new method for internal replay that modulates the frequency of rehearsal based on the depth of the network.

Continual Learning
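The abstract states that rehearsal frequency is modulated by network depth. One way such a depth-dependent schedule could look, as a sketch only (the function name, the doubling rule, and the assumption that deeper layers are rehearsed more often are all illustrative, not the paper's actual schedule):

```python
# Illustrative sketch of a depth-modulated rehearsal schedule:
# layer d is rehearsed every max(1, base_every // 2**d) steps, so
# deeper layers (larger d) replay their latents more frequently.
# This is an assumed schedule for illustration, not the paper's method.

def replay_schedule(step, num_layers, base_every=8):
    """Return indices of layers whose latent activations are rehearsed
    at this training step."""
    rehearsed = []
    for d in range(num_layers):
        every = max(1, base_every // (2 ** d))  # deeper -> smaller interval
        if step % every == 0:
            rehearsed.append(d)
    return rehearsed
```

For example, with `num_layers=4` and `base_every=8`, the deepest layer is rehearsed every step while the shallowest is rehearsed only every 8 steps, concentrating replay where representations drift most.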
