no code implementations • 12 Mar 2024 • Filip Szatkowski, Fei Yang, Bartłomiej Twardowski, Tomasz Trzciński, Joost Van de Weijer
We assess the accuracy and computational cost of various continual learning techniques enhanced with early exits and TLC on standard class-incremental learning benchmarks, such as 10-split CIFAR-100 and ImageNet-Subset, and show that TLC can match the accuracy of the standard methods while using less than 70% of their computation.
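The abstract snippet does not spell out how TLC works; as background, the sketch below shows a generic early-exit classifier with confidence-threshold inference, which is the kind of architecture the paper builds on. The backbone, exit placement, and threshold value are illustrative assumptions, not the authors' TLC method.

```python
# Minimal early-exit classifier sketch (illustrative; not the authors' TLC method).
# Assumptions: a small conv backbone, one exit head per stage, and softmax-confidence
# thresholding at inference time. The threshold of 0.9 is a placeholder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitNet(nn.Module):
    def __init__(self, num_classes: int, threshold: float = 0.9):
        super().__init__()
        self.threshold = threshold
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8)),
            nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4)),
            nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1)),
        ])
        # One linear classifier ("exit head") attached after each stage.
        self.exits = nn.ModuleList([
            nn.Linear(32 * 8 * 8, num_classes),
            nn.Linear(64 * 4 * 4, num_classes),
            nn.Linear(128, num_classes),
        ])

    def forward(self, x):
        """Training: return logits from every exit so each head receives a loss."""
        logits = []
        for stage, head in zip(self.stages, self.exits):
            x = stage(x)
            logits.append(head(x.flatten(1)))
        return logits

    @torch.no_grad()
    def predict(self, x):
        """Inference (batch of 1): stop at the first exit whose confidence passes the threshold."""
        for stage, head in zip(self.stages, self.exits):
            x = stage(x)
            out = head(x.flatten(1))
            if F.softmax(out, dim=1).max(dim=1).values.item() >= self.threshold:
                return out  # early exit; skips the remaining stages
        return out  # fall back to the final exit
```

Confident samples leave at shallow exits, which is what lets such models trade a small amount of accuracy for a large reduction in computation.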
no code implementations • 6 Oct 2023 • Filip Szatkowski, Bartosz Wójcik, Mikołaj Piórczyński, Kamil Adamczewski
Transformer models, despite their impressive performance, often face practical limitations due to their high computational requirements.
1 code implementation • 18 Aug 2023 • Filip Szatkowski, Mateusz Pyla, Marcin Przewięźlikowski, Sebastian Cygert, Bartłomiej Twardowski, Tomasz Trzciński
In this work, we investigate exemplar-free class incremental learning (CIL) with knowledge distillation (KD) as a regularization strategy, aiming to prevent forgetting.
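As a minimal illustration of KD used as a regularizer against forgetting, the sketch below combines a cross-entropy loss on current-task data with a distillation term that keeps the model's outputs close to those of a frozen copy from the previous task. The temperature, weighting, and function names are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch of knowledge distillation as a regularizer in exemplar-free class-incremental
# learning. `old_model` is a frozen copy of the network after the previous task;
# `alpha` and the temperature `T` are illustrative hyperparameters.
import torch
import torch.nn.functional as F

def kd_regularized_loss(model, old_model, x, y, alpha=1.0, T=2.0):
    logits = model(x)
    ce = F.cross_entropy(logits, y)  # standard loss on the current task

    with torch.no_grad():
        old_logits = old_model(x)  # teacher = previous-task model, no exemplars needed

    # Distill only over the classes the old model knows about.
    n_old = old_logits.size(1)
    kd = F.kl_div(
        F.log_softmax(logits[:, :n_old] / T, dim=1),
        F.softmax(old_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

    return ce + alpha * kd
```

Because the teacher's soft outputs stand in for stored exemplars, this kind of loss is the usual regularization baseline in exemplar-free CIL.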
1 code implementation • 9 Feb 2023 • Filip Szatkowski, Karol J. Piczak, Przemysław Spurek, Jacek Tabor, Tomasz Trzciński
Implicit Neural Representations (INRs) are now widely used to represent multimedia signals in various real-life applications, including image super-resolution, image compression, and 3D rendering.
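For readers unfamiliar with INRs, the sketch below shows the core idea: a small MLP is fitted to map coordinates to signal values (here, pixel colors), so the network weights themselves become a continuous, resolution-free representation of the signal. The architecture and training loop are a generic illustration, not the paper's model.

```python
# Minimal implicit neural representation of a single image (generic illustration).
# An MLP maps (x, y) coordinates in [-1, 1] to RGB values; overfitting it to one
# image turns the weights into a continuous representation of that image.
import torch
import torch.nn as nn

class ImageINR(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB output
        )

    def forward(self, coords):  # coords: (N, 2)
        return self.net(coords)

def fit_inr(image: torch.Tensor, steps: int = 1000):
    """Fit an INR to one image of shape (H, W, 3) with values in [0, 1]."""
    h, w, _ = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    targets = image.reshape(-1, 3)

    model = ImageINR()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((model(coords) - targets) ** 2).mean()
        loss.backward()
        opt.step()
    return model  # querying at denser coordinates yields a super-resolved output
```

Querying the fitted network at coordinates between the original pixel grid is what enables applications such as super-resolution, and the compact weight vector is what enables compression.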
no code implementations • 3 Nov 2022 • Filip Szatkowski, Karol J. Piczak, Przemysław Spurek, Jacek Tabor, Tomasz Trzciński
Implicit neural representations (INRs) are a rapidly growing research area that provides alternative ways to represent multimedia signals.
no code implementations • 4 Jul 2022 • Stanisław Pawlak, Filip Szatkowski, Michał Bortkiewicz, Jan Dubiński, Tomasz Trzciński
We introduce a new method for internal replay that modulates the frequency of rehearsal based on the depth of the network.
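The snippet does not specify how the rehearsal frequency is tied to depth; as a loose illustration of the idea, the sketch below replays stored latent activations at a given layer with a probability that grows with that layer's depth. The linear schedule and all names are assumptions for illustration, not the authors' method.

```python
# Loose illustration of depth-modulated internal (latent) replay.
# Deeper layers rehearse stored activations more often; the linear schedule and
# the helper names here are assumptions, not the paper's algorithm.
import random
import torch
import torch.nn as nn

def replay_probability(layer_idx: int, num_layers: int,
                       p_min: float = 0.1, p_max: float = 0.9) -> float:
    """Linearly increase the rehearsal probability with depth (assumed schedule)."""
    return p_min + (p_max - p_min) * layer_idx / max(num_layers - 1, 1)

def forward_with_internal_replay(layers: nn.ModuleList, x: torch.Tensor,
                                 buffers: list[list[torch.Tensor]]) -> torch.Tensor:
    """Run the network layer by layer, occasionally injecting stored activations.

    `buffers[i]` holds latent activations recorded at layer i for past tasks; an
    injected batch is processed by the remaining layers alongside the new data.
    """
    num_layers = len(layers)
    for i, layer in enumerate(layers):
        if buffers[i] and random.random() < replay_probability(i, num_layers):
            replayed = random.choice(buffers[i])  # rehearse at this depth
            x = torch.cat([x, replayed], dim=0)
        x = layer(x)
    return x
```

Rehearsing mostly at deeper layers keeps the cheap early layers free of replay overhead while still protecting the more task-specific later representations.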