Search Results for author: Paweł Wawrzyński

Found 14 papers, 4 papers with code

Graph Vertex Embeddings: Distance, Regularization and Community Detection

no code implementations · 9 Apr 2024 · Radosław Nowak, Adam Małkowski, Daniel Cieślak, Piotr Sokół, Paweł Wawrzyński

Graph embeddings have emerged as a powerful tool for representing complex network structures in a low-dimensional space, enabling the use of efficient methods that employ the metric structure in the embedding space as a proxy for the topological structure of the data.

Community Detection
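The idea of using embedding-space distance as a proxy for graph topology can be sketched as follows; the toy embeddings and the crude two-seed community split are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

# Hypothetical 2-D embeddings for six vertices of a small graph; in
# practice these would come from a trained vertex-embedding model.
emb = np.array([
    [0.0, 0.0], [0.1, 0.0], [0.0, 0.1],   # tight cluster A
    [3.0, 3.0], [3.1, 3.0], [3.0, 3.1],   # tight cluster B
])

# Pairwise Euclidean distances in the embedding space serve as a proxy
# for the topological distance between vertices in the graph.
dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)

# A crude community split: assign each vertex to the nearer of two seeds.
labels = (dist[:, 0] > dist[:, 3]).astype(int)
print(labels.tolist())  # -> [0, 0, 0, 1, 1, 1]
```

Any metric clustering method (k-means, spectral clustering) could replace the two-seed assignment once distances stand in for topology.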

Emergency action termination for immediate reaction in hierarchical reinforcement learning

no code implementations · 11 Nov 2022 · Michał Bortkiewicz, Jakub Łyskawa, Paweł Wawrzyński, Mateusz Ostaszewski, Artur Grudkowski, Tomasz Trzciński

In this paper, we address this gap in the state-of-the-art approaches and propose a method in which the validity of higher-level actions (thus lower-level goals) is constantly verified at the higher level.

Hierarchical Reinforcement Learning · reinforcement-learning +1
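The control loop described in the abstract can be sketched as follows; the 1-D toy environment, policies, and validity check are illustrative placeholders, not the paper's architecture:

```python
# At every low-level step the validity of the current higher-level action
# (i.e. the lower-level goal) is re-checked; an invalid subgoal is
# terminated and replaced immediately instead of being run to completion.

def high_policy(state):
    return state + 3                      # subgoal: move 3 units ahead

def low_policy(state, subgoal):
    return 1 if subgoal > state else -1   # step toward the subgoal

def is_valid(state, subgoal):
    return subgoal > state                # a reached/passed subgoal is invalid

state, goal = 0, 10
subgoal = high_policy(state)
steps = 0
while state != goal:
    if not is_valid(state, subgoal):      # emergency action termination
        subgoal = high_policy(state)      # react immediately with a new subgoal
    state += low_policy(state, subgoal)
    steps += 1
print(state, steps)  # -> 10 10
```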

Reinforcement learning with experience replay and adaptation of action dispersion

no code implementations · 30 Jul 2022 · Paweł Wawrzyński, Wojciech Masarczyk, Mateusz Ostaszewski

To that end, the dispersion should be tuned to assure sufficiently high probability densities of both the actions in the replay buffer and the modes of the distributions that generated them, yet the dispersion should be no higher than that.

reinforcement-learning · Reinforcement Learning (RL)
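A minimal sketch of the dispersion-tuning idea, under assumed choices (a Gaussian policy, a fixed target density, and a multiplicative update rule; the paper's actual adaptation rule may differ): widen the policy's dispersion when replay-buffer actions become too improbable under the current policy, and narrow it otherwise.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0                # current Gaussian policy N(mu, sigma^2)
target_density = 0.3                # assumed target density for stored actions
replay_actions = rng.normal(mu, 0.5, size=500)   # stale actions in the buffer

for a in replay_actions:
    # density of the stored action under the current policy
    dens = np.exp(-0.5 * ((a - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    # widen if the action is too improbable, narrow if more probable than needed
    sigma *= 1.01 if dens < target_density else 0.99

print(round(sigma, 2))
```

The multiplicative update settles near a dispersion at which roughly half the stored actions fall below the target density, keeping replayed experience usable without over-dispersing the policy.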

ReGAE: Graph autoencoder based on recursive neural networks

no code implementations · 28 Jan 2022 · Adam Małkowski, Jakub Grzechociński, Paweł Wawrzyński

In this paper we address the above challenge with recursive neural networks serving as both the encoder and the decoder.

Logarithmic Continual Learning

no code implementations · 17 Jan 2022 · Wojciech Masarczyk, Paweł Wawrzyński, Daniel Marczak, Kamil Deja, Tomasz Trzciński

Our approach leverages allocation of past data in a set of generative models such that most of them do not require retraining after a task.

Continual Learning

Multiband VAE: Latent Space Alignment for Knowledge Consolidation in Continual Learning

1 code implementation · 23 Jun 2021 · Kamil Deja, Paweł Wawrzyński, Wojciech Masarczyk, Daniel Marczak, Tomasz Trzciński

We propose a new method for unsupervised generative continual learning through realignment of Variational Autoencoder's latent space.

Continual Learning · Disentanglement +1

Reinforcement Learning for on-line Sequence Transformation

no code implementations · 28 May 2021 · Grzegorz Rypeść, Łukasz Lepak, Paweł Wawrzyński

A number of problems in the processing of sound and natural language, as well as in other areas, can be reduced to simultaneously reading an input sequence and writing an output sequence of generally different length.

Machine Translation · reinforcement-learning +2

Least Redundant Gated Recurrent Neural Network

1 code implementation · 28 May 2021 · Łukasz Neumann, Łukasz Lepak, Paweł Wawrzyński

It is based on updating the previous memory state with a deep transformation of the lagged state and the network input.
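The update described above — blending the previous memory state with a transformation of the lagged state and the current input — can be sketched as a minimal gated cell; the random weights and single-layer transform are placeholders, not the paper's architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
d_in, d_h = 3, 4
W_z = rng.normal(size=(d_h, d_in + d_h))  # gate weights
W_c = rng.normal(size=(d_h, d_in + d_h))  # candidate-transform weights

def step(h_prev, x):
    xh = np.concatenate([x, h_prev])
    z = sigmoid(W_z @ xh)            # update gate
    c = np.tanh(W_c @ xh)            # transformed candidate state
    return (1 - z) * h_prev + z * c  # gated blend of old state and candidate

h = np.zeros(d_h)
for x in rng.normal(size=(5, d_in)):
    h = step(h, x)
print(h.shape)  # -> (4,)
```

Because each new state is a convex combination of the previous state and a bounded candidate, the memory stays in (-1, 1) without explicit clipping.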

ACERAC: Efficient reinforcement learning in fine time discretization

no code implementations · 8 Apr 2021 · Jakub Łyskawa, Paweł Wawrzyński

It is not feasible because it causes the controlled system to jerk, and does not ensure sufficient exploration since a single action is not long enough to create a significant experience that could be translated into policy improvement.

reinforcement-learning · Reinforcement Learning (RL)

BinPlay: A Binary Latent Autoencoder for Generative Replay Continual Learning

1 code implementation · 25 Nov 2020 · Kamil Deja, Paweł Wawrzyński, Daniel Marczak, Wojciech Masarczyk, Tomasz Trzciński

We introduce a binary latent space autoencoder architecture to rehearse training samples for the continual learning of neural networks.

Continual Learning
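A sketch of the binary-latent rehearsal idea from the abstract: samples are encoded into fixed-length binary codes that are cheap to store, and past samples are rehearsed by decoding their codes. The random linear maps below stand in for trained encoder/decoder networks and are not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, d_z = 8, 16
W_enc = rng.normal(size=(d_z, d_x))        # placeholder "encoder"
W_dec = rng.normal(size=(d_x, d_z)) / d_z  # placeholder "decoder"

def encode(x):
    return (W_enc @ x > 0).astype(np.float32)   # binary latent code

def decode(z):
    return W_dec @ z                            # rehearsed (replayed) sample

x = rng.normal(size=d_x)
z = encode(x)              # store d_z bits instead of the raw sample
x_replay = decode(z)       # regenerate the sample during a later task
print(z)
```

Storing only the binary codes makes the rehearsal buffer far smaller than keeping raw samples, at the cost of reconstruction fidelity.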

A framework for reinforcement learning with autocorrelated actions

1 code implementation · 10 Sep 2020 · Marcin Szulc, Jakub Łyskawa, Paweł Wawrzyński

Consequently, an agent learns from experiments that are distributed over time and potentially give better clues to policy improvement.

reinforcement-learning · Reinforcement Learning (RL)
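Temporally autocorrelated exploration in the spirit of this framework can be sketched with an Ornstein–Uhlenbeck-style process; the coefficient value and the specific noise model are assumptions, not necessarily what the paper uses:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.9          # assumed autocorrelation coefficient
sigma = 0.1          # stationary standard deviation of the noise
T, d = 1000, 2       # time steps, action dimension

noise = np.zeros((T, d))
for t in range(1, T):
    # each perturbation depends on the previous one, so consecutive
    # actions drift smoothly instead of jerking independently
    noise[t] = alpha * noise[t - 1] + sigma * np.sqrt(1 - alpha**2) * rng.normal(size=d)

# lag-1 autocorrelation of the generated noise should be close to alpha
rho = np.corrcoef(noise[:-1, 0], noise[1:, 0])[0, 1]
print(round(rho, 1))
```

Adding such correlated noise to a deterministic policy yields exploratory behavior that is distributed over time, matching the abstract's description of experiments that give better clues to policy improvement.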

DCT-Conv: Coding filters in convolutional networks with Discrete Cosine Transform

no code implementations · 23 Jan 2020 · Karol Chęciński, Paweł Wawrzyński

We follow the line of research in which filters of convolutional neural layers are determined on the basis of a smaller number of trained parameters.
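The idea of determining filter weights from fewer trained parameters can be sketched with a DCT parameterization: train a small block of low-frequency DCT coefficients and reconstruct the full filter with an inverse DCT. Which coefficients are kept is an assumption here, not the paper's exact scheme.

```python
import numpy as np

def dct_basis(n):
    # orthonormal DCT-II basis matrix (n x n); columns are frequency modes
    B = np.cos(np.pi * (np.arange(n)[:, None] + 0.5) * np.arange(n)[None, :] / n)
    B[:, 0] *= 1 / np.sqrt(2)
    return B * np.sqrt(2 / n)

k = 5
B = dct_basis(k)

# train only a 2x2 block of low-frequency coefficients: 4 parameters
# instead of the 25 weights of a full 5x5 convolutional filter
coef = np.zeros((k, k))
coef[:2, :2] = np.array([[1.0, 0.5],
                         [-0.3, 0.2]])

filt = B @ coef @ B.T   # inverse 2-D DCT reconstructs the full filter
print(filt.shape)  # -> (5, 5)
```

Since the basis is orthonormal, the forward transform `B.T @ filt @ B` recovers the trained coefficients exactly, so gradients with respect to the few coefficients are well defined.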
