no code implementations • 22 Nov 2023 • Daniel Marczak, Sebastian Cygert, Tomasz Trzciński, Bartłomiej Twardowski
In continual learning, models are trained on a sequence of tasks, one after the other.
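To make the sequential setting concrete, here is a minimal sketch of task-by-task training; the model, the `tasks` loaders, and the hyperparameters are illustrative assumptions, not this paper's implementation.

```python
import torch
import torch.nn as nn

# Minimal sketch of the sequential-task setup: one model is trained on each
# task's data in turn, with no access to earlier tasks' data.
# `tasks` (a list of DataLoader-like iterables of (x, y) batches) is a
# hypothetical stand-in.
def train_sequentially(model, tasks, epochs=1, lr=1e-3):
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for task_loader in tasks:          # tasks arrive one after the other
        for _ in range(epochs):
            for x, y in task_loader:   # only the current task's data is visible
                optimizer.zero_grad()
                loss = criterion(model(x), y)
                loss.backward()
                optimizer.step()
    return model
```

Trained this way, without any extra mechanism, the model tends to forget earlier tasks, which is the problem these papers address.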
no code implementations • 23 Aug 2023 • Daniel Marczak, Grzegorz Rypeść, Sebastian Cygert, Tomasz Trzciński, Bartłomiej Twardowski
However, these settings are not well aligned with real-life scenarios, where a learning agent has access to a vast amount of unlabeled data encompassing both novel (entirely unlabeled) classes and examples from known classes.
no code implementations • 17 Jan 2022 • Wojciech Masarczyk, Paweł Wawrzyński, Daniel Marczak, Kamil Deja, Tomasz Trzciński
Our approach allocates past data across a set of generative models such that most of them do not require retraining after a task.
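A hedged sketch of this allocation idea, assuming one generator per task that is frozen once trained; `make_generator`, `train_generator`, and the `sample` method are hypothetical helpers, not the paper's actual components.

```python
import torch

# Sketch: past data lives in a pool of generative models. After a new task,
# only the newest model is trained; the rest stay frozen and serve as
# rehearsal sources, so most models never need retraining.
class GeneratorPool:
    def __init__(self):
        self.frozen = []            # generators for past tasks; never retrained

    def learn_task(self, data, make_generator, train_generator):
        gen = make_generator()
        train_generator(gen, data)  # only this model is updated for the task
        for p in gen.parameters():
            p.requires_grad_(False) # freeze it once the task is done
        self.frozen.append(gen)

    @torch.no_grad()
    def rehearse(self, n_per_model):
        # draw replay samples from every frozen generator
        # (assumes each generator exposes a `sample(n)` method)
        return [g.sample(n_per_model) for g in self.frozen]
```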
1 code implementation • 23 Jun 2021 • Kamil Deja, Paweł Wawrzyński, Wojciech Masarczyk, Daniel Marczak, Tomasz Trzciński
We propose a new method for unsupervised generative continual learning based on realigning a Variational Autoencoder's latent space.
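One plausible reading of latent-space realignment, sketched under the assumption that a small learned map reconciles the updated encoder with a frozen copy of the previous decoder; all names here are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Sketch: after the encoder is updated on a new task, a learned linear map
# pulls the codes of old-task data back to where the frozen old decoder
# expects them, so previously learned data remains decodable.
class LatentRealigner(nn.Module):
    def __init__(self, latent_dim):
        super().__init__()
        self.map = nn.Linear(latent_dim, latent_dim)

    def forward(self, z):
        return self.map(z)

def realignment_loss(realigner, encoder_new, decoder_old, x_old):
    z = encoder_new(x_old)             # codes under the updated encoder
    x_rec = decoder_old(realigner(z))  # decode through the frozen old decoder
    return nn.functional.mse_loss(x_rec, x_old)
```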
1 code implementation • 25 Nov 2020 • Kamil Deja, Paweł Wawrzyński, Daniel Marczak, Wojciech Masarczyk, Tomasz Trzciński
We introduce a binary latent space autoencoder architecture to rehearse training samples for the continual learning of neural networks.
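A minimal sketch of a binary-latent autoencoder for rehearsal, assuming straight-through sign binarization; the layer sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Sketch: latent codes are binarized with a straight-through sign estimator,
# so each stored sample is a compact bit vector that the decoder can turn
# back into an approximate training example for replay.
class BinaryLatentAE(nn.Module):
    def __init__(self, in_dim=784, latent_bits=200):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(),
                                 nn.Linear(512, latent_bits))
        self.dec = nn.Sequential(nn.Linear(latent_bits, 512), nn.ReLU(),
                                 nn.Linear(512, in_dim), nn.Sigmoid())

    def binarize(self, h):
        b = torch.sign(h)            # hard {-1, +1} codes
        return h + (b - h).detach()  # straight-through gradient estimator

    def forward(self, x):
        code = self.binarize(self.enc(x))
        return self.dec(code), code  # reconstruction + compact code to store
```

Storing bit vectors instead of raw samples is what makes this form of rehearsal memory-cheap: during later tasks, decoding the stored codes regenerates approximate past examples for replay.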