Search Results for author: Maciej Wołczyk

Found 16 papers, 7 papers with code

Fine-tuning Reinforcement Learning Models is Secretly a Forgetting Mitigation Problem

no code implementations • 5 Feb 2024 • Maciej Wołczyk, Bartłomiej Cupiał, Mateusz Ostaszewski, Michał Bortkiewicz, Michał Zając, Razvan Pascanu, Łukasz Kuciński, Piotr Miłoś

Fine-tuning is a widespread technique that allows practitioners to transfer pre-trained capabilities, as recently showcased by the successful applications of foundation models.

Montezuma's Revenge • NetHack • +2

Disentangling Transfer in Continual Reinforcement Learning

no code implementations • 28 Sep 2022 • Maciej Wołczyk, Michał Zając, Razvan Pascanu, Łukasz Kuciński, Piotr Miłoś

The ability of continual learning systems to transfer knowledge from previously seen tasks in order to maximize performance on new tasks is a significant challenge for the field, limiting the applicability of continual learning solutions to realistic scenarios.

Continual Learning • Continuous Control • +2

Hebbian Continual Representation Learning

no code implementations • 28 Jun 2022 • Paweł Morawiecki, Andrii Krutsylo, Maciej Wołczyk, Marek Śmieja

Although this setting is natural for biological systems, it proves very difficult for machine learning models such as artificial neural networks.

BIG-bench Machine Learning • Class Incremental Learning • +2

Continual Learning with Guarantees via Weight Interval Constraints

1 code implementation • 16 Jun 2022 • Maciej Wołczyk, Karol J. Piczak, Bartosz Wójcik, Łukasz Pustelnik, Paweł Morawiecki, Jacek Tabor, Tomasz Trzciński, Przemysław Spurek

We introduce a new training paradigm that enforces interval constraints on neural network parameter space to control forgetting.

Continual Learning
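
A minimal sketch of the interval-constraint idea described above, under one simple assumption: after each optimizer step, every parameter is projected (clamped) back into a per-parameter interval reserved for earlier tasks. The function and variable names are illustrative and this is not the paper's actual training procedure.

    import torch

    def project_to_intervals(model, lower, upper):
        # lower/upper: dicts mapping parameter names to tensors of interval bounds
        with torch.no_grad():
            for name, param in model.named_parameters():
                param.clamp_(min=lower[name], max=upper[name])

    # assumed usage inside a training loop:
    #   loss.backward(); optimizer.step(); project_to_intervals(model, lower, upper)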

On the relationship between disentanglement and multi-task learning

no code implementations • 7 Oct 2021 • Łukasz Maziarka, Aleksandra Nowak, Maciej Wołczyk, Andrzej Bedychaj

One of the main arguments behind studying disentangled representations is the assumption that they can be easily reused in different tasks.

Disentanglement • Multi-Task Learning

SafetyNet: Safe planning for real-world self-driving vehicles using machine-learned policies

no code implementations • 28 Sep 2021 • Matt Vitelli, Yan Chang, Yawei Ye, Maciej Wołczyk, Błażej Osiński, Moritz Niendorf, Hugo Grimmett, Qiangui Huang, Ashesh Jain, Peter Ondruska

To combat this, our approach uses a simple yet effective rule-based fallback layer that performs sanity checks on an ML planner's decisions (e.g., avoiding collision, assuring physical feasibility).

Imitation Learning
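
A hedged sketch of what such a rule-based fallback layer could look like: the ML planner's trajectory is accepted only if it passes basic collision and physical-feasibility checks, otherwise a conservative fallback trajectory is used. All thresholds, shapes, and function names below are assumptions for illustration, not the paper's implementation.

    import numpy as np

    MAX_ACCEL = 3.0      # m/s^2, illustrative feasibility limit
    MIN_CLEARANCE = 0.5  # m, illustrative collision margin

    def is_feasible(traj, dt):
        # traj: (T, 2) array of planned x, y positions sampled every dt seconds
        vel = np.diff(traj, axis=0) / dt
        acc = np.diff(vel, axis=0) / dt
        return bool(np.all(np.linalg.norm(acc, axis=1) <= MAX_ACCEL))

    def is_collision_free(traj, obstacles):
        # obstacles: (N, 2) array of obstacle positions; empty means no obstacles
        if len(obstacles) == 0:
            return True
        dists = np.linalg.norm(traj[:, None, :] - obstacles[None, :, :], axis=-1)
        return bool(np.all(dists >= MIN_CLEARANCE))

    def choose_trajectory(ml_traj, fallback_traj, obstacles, dt=0.1):
        # fall back to the conservative plan if any sanity check fails
        if is_feasible(ml_traj, dt) and is_collision_free(ml_traj, obstacles):
            return ml_traj
        return fallback_traj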

Urban Driver: Learning to Drive from Real-world Demonstrations Using Policy Gradients

no code implementations • 27 Sep 2021 • Oliver Scheel, Luca Bergamini, Maciej Wołczyk, Błażej Osiński, Peter Ondruska

In this work we are the first to present an offline policy gradient method for learning imitative policies for complex urban driving from a large corpus of real-world demonstrations.

PluGeN: Multi-Label Conditional Generation From Pre-Trained Models

1 code implementation • 18 Sep 2021 • Maciej Wołczyk, Magdalena Proszewska, Łukasz Maziarka, Maciej Zięba, Patryk Wielopolski, Rafał Kurczab, Marek Śmieja

Modern generative models achieve excellent quality in a variety of tasks including image or text generation and chemical molecule modeling.

Attribute • Text Generation

Zero Time Waste: Recycling Predictions in Early Exit Neural Networks

1 code implementation • NeurIPS 2021 • Maciej Wołczyk, Bartosz Wójcik, Klaudia Bałazy, Igor Podolak, Jacek Tabor, Marek Śmieja, Tomasz Trzciński

The problem of reducing processing time of large deep learning models is a fundamental challenge in many real-world applications.

Continual World: A Robotic Benchmark For Continual Reinforcement Learning

1 code implementation • NeurIPS 2021 • Maciej Wołczyk, Michał Zając, Razvan Pascanu, Łukasz Kuciński, Piotr Miłoś

Continual learning (CL) -- the ability to continuously learn, building on previously acquired knowledge -- is a natural requirement for long-lived autonomous reinforcement learning (RL) agents.

Continual Learning • reinforcement-learning • +1

Finding the Optimal Network Depth in Classification Tasks

1 code implementation • 17 Apr 2020 • Bartosz Wójcik, Maciej Wołczyk, Klaudia Bałazy, Jacek Tabor

We develop a fast end-to-end method for training lightweight neural networks using multiple classifier heads.

Classification • General Classification
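
A rough illustration of the multiple-classifier-heads idea mentioned above, assuming a plain MLP backbone with one head attached after every block and a loss summed over all heads; the architecture, widths, and training details are placeholders rather than the paper's method.

    import torch
    import torch.nn as nn

    class MultiHeadNet(nn.Module):
        def __init__(self, widths=(64, 64, 64), in_dim=784, n_classes=10):
            super().__init__()
            self.blocks = nn.ModuleList()
            self.heads = nn.ModuleList()
            prev = in_dim
            for w in widths:
                self.blocks.append(nn.Sequential(nn.Linear(prev, w), nn.ReLU()))
                self.heads.append(nn.Linear(w, n_classes))  # classifier at this depth
                prev = w

        def forward(self, x):
            logits = []
            for block, head in zip(self.blocks, self.heads):
                x = block(x)
                logits.append(head(x))
            return logits  # one set of class logits per depth

    # training sketch: loss = sum(F.cross_entropy(l, y) for l in model(x))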

Biologically-Inspired Spatial Neural Networks

no code implementations • NeurIPS Workshop Neuro_AI 2019 • Maciej Wołczyk, Jacek Tabor, Marek Śmieja, Szymon Maszke

We introduce bio-inspired artificial neural networks consisting of neurons that are additionally characterized by spatial positions.

Continual Learning
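
One plausible reading of "neurons characterized by spatial positions" is a distance-aware regularizer: each neuron gets a coordinate and connections are penalized in proportion to the distance they span. This is only a speculative sketch under that assumption; the paper's actual formulation may differ.

    import torch
    import torch.nn as nn

    def spatial_penalty(weight, pos_in, pos_out):
        # weight: (out, in); pos_in: (in, 2); pos_out: (out, 2) neuron coordinates
        dist = torch.cdist(pos_out, pos_in)      # (out, in) pairwise distances
        return (weight.abs() * dist).sum()       # long connections cost more

    layer = nn.Linear(100, 50)
    pos_in = torch.rand(100, 2)    # assumed random 2-D positions for input neurons
    pos_out = torch.rand(50, 2)    # assumed random 2-D positions for output neurons
    reg = spatial_penalty(layer.weight, pos_in, pos_out)
    # total loss = task_loss + lambda * reg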

SeGMA: Semi-Supervised Gaussian Mixture Auto-Encoder

no code implementations • 21 Jun 2019 • Marek Śmieja, Maciej Wołczyk, Jacek Tabor, Bernhard C. Geiger

We propose a semi-supervised generative model, SeGMA, which learns a joint probability distribution of data and their classes and which is implemented in a typical Wasserstein auto-encoder framework.

Style Transfer

Hypernetwork functional image representation

no code implementations • 27 Feb 2019 • Sylwester Klocek, Łukasz Maziarka, Maciej Wołczyk, Jacek Tabor, Jakub Nowak, Marek Śmieja

Motivated by the human way of memorizing images we introduce their functional representation, where an image is represented by a neural network.

Image Super-Resolution
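
The "image as a neural network" idea can be sketched as a coordinate network: an MLP maps pixel coordinates to RGB values and is fit to a single image, after which it can be queried at arbitrary resolution. The hypernetwork that generates these weights in the paper is omitted here, and the layer sizes below are arbitrary assumptions.

    import torch
    import torch.nn as nn

    class ImageFunction(nn.Module):
        def __init__(self, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(2, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB values in [0, 1]
            )

        def forward(self, coords):
            # coords: (N, 2) pixel coordinates scaled to [0, 1]^2
            return self.net(coords)  # (N, 3) predicted colors

    # fitting sketch: minimize MSE between predicted and true pixel colors,
    # then sample a denser coordinate grid from the trained network to upsample.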
