Search Results for author: Michał Zając

Found 10 papers, 4 papers with code

Improved GQ-CNN: Deep Learning Model for Planning Robust Grasps

no code implementations • 16 Feb 2018 • Maciej Jaśkowski, Jakub Świątkowski, Michał Zając, Maciej Klimek, Jarek Potiuk, Piotr Rybicki, Piotr Polatowski, Przemysław Walczyk, Kacper Nowicki, Marek Cygan

In this work we improve on one of the most promising approaches, the Grasp Quality Convolutional Neural Network (GQ-CNN) trained on the DexNet 2.0 dataset.

Split Batch Normalization: Improving Semi-Supervised Learning under Domain Shift

no code implementations • ICLR Workshop LLD 2019 • Michał Zając, Konrad Żołna, Stanisław Jastrzębski

Recent work has shown that using unlabeled data in semi-supervised learning is not always beneficial and can even hurt generalization, especially when there is a class mismatch between the unlabeled and labeled examples.

Image Classification

Google Research Football: A Novel Reinforcement Learning Environment

1 code implementation • 25 Jul 2019 • Karol Kurach, Anton Raichuk, Piotr Stańczyk, Michał Zając, Olivier Bachem, Lasse Espeholt, Carlos Riquelme, Damien Vincent, Marcin Michalski, Olivier Bousquet, Sylvain Gelly

Recent progress in the field of reinforcement learning has been accelerated by virtual learning environments such as video games, where novel algorithms and ideas can be quickly tested in a safe and reproducible manner.

Game of Football • reinforcement-learning +1
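
The environment above ships with code. Below is a minimal random-agent loop as a usage sketch, assuming the publicly released `gfootball` package; the `create_environment` helper, the `academy_empty_goal_close` scenario name, and the `simple115` observation representation are illustrative choices that may differ across package versions.

```python
# Minimal sketch: run one episode of a bundled "academy" scenario with a random policy.
import gfootball.env as football_env

env = football_env.create_environment(
    env_name="academy_empty_goal_close",  # small single-task scenario (assumed name)
    representation="simple115",           # compact 115-float state vector (assumed option)
    render=False,
)

obs = env.reset()
done = False
episode_return = 0.0
while not done:
    action = env.action_space.sample()            # placeholder agent: uniform random actions
    obs, reward, done, info = env.step(action)    # classic Gym-style step interface
    episode_return += reward
print("episode return:", episode_return)
```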

Continual World: A Robotic Benchmark For Continual Reinforcement Learning

1 code implementation • NeurIPS 2021 • Maciej Wołczyk, Michał Zając, Razvan Pascanu, Łukasz Kuciński, Piotr Miłoś

Continual learning (CL) -- the ability to continuously learn, building on previously acquired knowledge -- is a natural requirement for long-lived autonomous reinforcement learning (RL) agents.

Continual Learning • reinforcement-learning +1

Disentangling Transfer in Continual Reinforcement Learning

no code implementations • 28 Sep 2022 • Maciej Wołczyk, Michał Zając, Razvan Pascanu, Łukasz Kuciński, Piotr Miłoś

The ability of continual learning systems to transfer knowledge from previously seen tasks in order to maximize performance on new tasks is a significant challenge for the field, limiting the applicability of continual learning solutions to realistic scenarios.

Continual Learning • Continuous Control +2

Trust Your $\nabla$: Gradient-based Intervention Targeting for Causal Discovery

no code implementations • NeurIPS 2023 • Mateusz Olko, Michał Zając, Aleksandra Nowak, Nino Scherrer, Yashas Annadani, Stefan Bauer, Łukasz Kuciński, Piotr Miłoś

In this work, we propose a novel Gradient-based Intervention Targeting method, abbreviated GIT, that 'trusts' the gradient estimator of a gradient-based causal discovery framework to provide signals for the intervention acquisition function.

Causal Discovery • Experimental Design
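
A rough sketch of the idea described in the excerpt above: score candidate intervention targets by the gradient signal a differentiable causal-discovery objective provides and intervene on the highest-scoring node. This is an illustrative toy, not the paper's exact algorithm; the gradient-norm heuristic, the PyTorch setup, and the stand-in least-squares objective are assumptions.

```python
# Illustrative sketch only: rank candidate intervention targets by the gradient
# norm a differentiable causal-discovery loss assigns to each node's parameters.
import torch

def select_intervention_target(loss: torch.Tensor, adjacency: torch.Tensor) -> int:
    """Score each node by the gradient norm of the loss w.r.t. that node's
    column of a learned adjacency matrix; return the highest-scoring node."""
    (grad,) = torch.autograd.grad(loss, adjacency)
    scores = grad.norm(dim=0)          # one score per node (column)
    return int(scores.argmax())        # assumed heuristic: bigger gradient -> more informative target

# Toy usage: 3-node linear model with a learnable candidate graph.
adjacency = torch.randn(3, 3, requires_grad=True)   # learnable adjacency weights
data = torch.randn(100, 3)                           # observational samples
loss = ((data @ adjacency - data) ** 2).mean()       # stand-in for a causal-discovery objective
print("next intervention target:", select_intervention_target(loss, adjacency))
```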

Exploring Continual Learning of Diffusion Models

no code implementations • 27 Mar 2023 • Michał Zając, Kamil Deja, Anna Kuzina, Jakub M. Tomczak, Tomasz Trzciński, Florian Shkurti, Piotr Miłoś

Diffusion models have achieved remarkable success in generating high-quality images thanks to their novel training procedures applied to unprecedented amounts of data.

Benchmarking • Continual Learning +1

Prediction Error-based Classification for Class-Incremental Learning

1 code implementation • 30 May 2023 • Michał Zając, Tinne Tuytelaars, Gido M. van de Ven

Class-incremental learning (CIL) is a particularly challenging variant of continual learning, where the goal is to learn to discriminate between all classes presented in an incremental fashion.

Classification • Class Incremental Learning +1

Exploiting Novel GPT-4 APIs

1 code implementation • 21 Dec 2023 • Kellin Pelrine, Mohammad Taufeeque, Michał Zając, Euan McLean, Adam Gleave

Language model attacks typically assume one of two extreme threat models: full white-box access to model weights, or black-box access limited to a text generation API.

Language Modelling • Retrieval +1

Fine-tuning Reinforcement Learning Models is Secretly a Forgetting Mitigation Problem

no code implementations • 5 Feb 2024 • Maciej Wołczyk, Bartłomiej Cupiał, Mateusz Ostaszewski, Michał Bortkiewicz, Michał Zając, Razvan Pascanu, Łukasz Kuciński, Piotr Miłoś

Fine-tuning is a widespread technique that allows practitioners to transfer pre-trained capabilities, as recently showcased by the successful applications of foundation models.

Montezuma's Revenge • NetHack +2
