Search Results for author: Filip Szatkowski

Found 6 papers, 2 papers with code

Accelerated Inference and Reduced Forgetting: The Dual Benefits of Early-Exit Networks in Continual Learning

no code implementations • 12 Mar 2024 • Filip Szatkowski, Fei Yang, Bartłomiej Twardowski, Tomasz Trzciński, Joost Van de Weijer

We assess the accuracy and computational cost of various continual learning techniques enhanced with early-exits and TLC across standard class-incremental learning benchmarks, such as 10-split CIFAR100 and ImageNetSubset, and show that TLC can achieve the accuracy of the standard methods using less than 70% of their computations.

Class Incremental Learning · Incremental Learning
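For context, early-exit networks attach auxiliary classifiers at intermediate depths so that easy inputs can leave the network before the final layer. Below is a minimal PyTorch sketch of confidence-based early exiting for a single sample; the block layout, threshold, and module names are illustrative assumptions, and the paper's TLC correction is not modeled here.

```python
# A minimal sketch of early-exit inference: an exit head after each block,
# with the first sufficiently confident prediction returned early.
import torch
import torch.nn as nn


class EarlyExitNet(nn.Module):
    """A toy backbone with an auxiliary classifier ("exit head") per block."""

    def __init__(self, num_classes: int = 100, width: int = 64, num_blocks: int = 3):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(width, width), nn.ReLU()) for _ in range(num_blocks)
        )
        self.exits = nn.ModuleList(nn.Linear(width, num_classes) for _ in range(num_blocks))

    @torch.no_grad()  # inference-only sketch for a single sample (batch size 1)
    def forward(self, x: torch.Tensor, threshold: float = 0.9):
        """Return logits from the first exit whose softmax confidence clears the threshold."""
        for depth, (block, exit_head) in enumerate(zip(self.blocks, self.exits)):
            x = block(x)
            logits = exit_head(x)
            confidence = logits.softmax(dim=-1).max(dim=-1).values
            if confidence.item() >= threshold:  # exit early, skipping later blocks
                return logits, depth
        return logits, depth  # fell through: the full network was used


model = EarlyExitNet()
logits, exit_depth = model(torch.randn(1, 64))
print(f"exited at block {exit_depth}")
```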

SADMoE: Exploiting Activation Sparsity with Dynamic-k Gating

no code implementations • 6 Oct 2023 • Filip Szatkowski, Bartosz Wójcik, Mikołaj Piórczyński, Kamil Adamczewski

Transformer models, despite their impressive performance, often face practical limitations due to their high computational requirements.
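Dynamic-k gating, as the title suggests, lets each token activate a variable number of experts rather than a fixed top-k, so compute tracks input difficulty. The sketch below uses a cumulative-routing-mass rule with a threshold tau as one plausible reading; the paper's exact gating criterion and MoE construction may differ.

```python
# A minimal sketch of dynamic-k expert gating: experts are taken in order of
# routing probability until a fraction tau of the routing mass is covered, so
# easy tokens activate few experts and hard tokens activate more.
import torch
import torch.nn as nn


class DynamicKMoE(nn.Module):
    def __init__(self, dim: int = 64, num_experts: int = 8, tau: float = 0.8):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.tau = tau

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (dim,), one token
        probs = self.router(x).softmax(dim=-1)
        out, mass = torch.zeros_like(x), 0.0
        for idx in probs.argsort(descending=True).tolist():
            out = out + probs[idx] * self.experts[idx](x)
            mass += probs[idx].item()
            if mass >= self.tau:  # dynamic k: stop once enough mass is covered
                break
        return out


layer = DynamicKMoE()
y = layer(torch.randn(64))
```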

Adapt Your Teacher: Improving Knowledge Distillation for Exemplar-free Continual Learning

1 code implementation • 18 Aug 2023 • Filip Szatkowski, Mateusz Pyla, Marcin Przewięźlikowski, Sebastian Cygert, Bartłomiej Twardowski, Tomasz Trzciński

In this work, we investigate exemplar-free class incremental learning (CIL) with knowledge distillation (KD) as a regularization strategy, aiming to prevent forgetting.

Class Incremental Learning · Incremental Learning +2
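A rough sketch of the setup the abstract describes: the previous-task model acts as a frozen teacher whose soft outputs regularize the current student. Keeping the teacher's normalization layers in train mode, so that their running statistics adapt to new-task data, is one reading of "adapt your teacher"; the temperature and loss weight below are illustrative assumptions.

```python
# Knowledge distillation as a regularizer in exemplar-free class-incremental
# learning: cross-entropy learns the new task while a KD term ties the student
# to the frozen previous-task teacher.
import torch
import torch.nn as nn
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, T: float = 2.0):
    """KL divergence between temperature-softened student and teacher outputs."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)


def training_step(student, teacher, x, y, kd_weight: float = 1.0):
    teacher.train()  # BatchNorm running stats keep adapting to new-task inputs...
    for p in teacher.parameters():
        p.requires_grad_(False)  # ...while the teacher's weights stay frozen
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)
    ce = F.cross_entropy(student_logits, y)        # learn the new classes
    kd = distillation_loss(student_logits, teacher_logits)  # retain the old ones
    return ce + kd_weight * kd


# Hypothetical models, just to make the sketch runnable end to end.
student = nn.Sequential(nn.Linear(32, 64), nn.BatchNorm1d(64), nn.ReLU(), nn.Linear(64, 10))
teacher = nn.Sequential(nn.Linear(32, 64), nn.BatchNorm1d(64), nn.ReLU(), nn.Linear(64, 10))
loss = training_step(student, teacher, torch.randn(8, 32), torch.randint(0, 10, (8,)))
```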

Hypernetworks build Implicit Neural Representations of Sounds

1 code implementation • 9 Feb 2023 • Filip Szatkowski, Karol J. Piczak, Przemysław Spurek, Jacek Tabor, Tomasz Trzciński

Implicit Neural Representations (INRs) are now widely used to represent multimedia signals across various real-life applications, including image super-resolution, image compression, and 3D rendering.

Image Compression · Image Super-Resolution +1
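For context, an INR stores a signal in network weights: an MLP maps a coordinate (here, time) to a signal value (here, amplitude), so a waveform is represented by the network itself rather than by samples. Below is a minimal SIREN-style audio INR; the layer sizes and frequency scale are illustrative assumptions, not the paper's architecture.

```python
# A minimal implicit neural representation of audio: time coordinate -> amplitude.
import torch
import torch.nn as nn


class AudioINR(nn.Module):
    def __init__(self, hidden: int = 256, layers: int = 3):
        super().__init__()
        dims = [1] + [hidden] * layers + [1]
        self.linears = nn.ModuleList(nn.Linear(a, b) for a, b in zip(dims, dims[1:]))

    def forward(self, t: torch.Tensor) -> torch.Tensor:  # t: (N, 1), scaled to [-1, 1]
        for linear in self.linears[:-1]:
            t = torch.sin(30.0 * linear(t))  # SIREN-style sine activations
        return self.linears[-1](t)  # predicted amplitude at each time step


inr = AudioINR()
t = torch.linspace(-1, 1, 16000).unsqueeze(-1)  # one second at 16 kHz
waveform = inr(t)  # (16000, 1) reconstructed signal
```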

HyperSound: Generating Implicit Neural Representations of Audio Signals with Hypernetworks

no code implementations • 3 Nov 2022 • Filip Szatkowski, Karol J. Piczak, Przemysław Spurek, Jacek Tabor, Tomasz Trzciński

Implicit neural representations (INRs) are a rapidly growing research area that provides alternative ways to represent multimedia signals.

Image Super-Resolution · Meta-Learning
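The hypernetwork idea shared by both audio papers, sketched minimally: rather than optimizing one INR per recording, a single network consumes a waveform and emits the weights of a small target INR in one forward pass. The encoder, target-INR shape, and functional weight application below are illustrative assumptions.

```python
# A hypernetwork that maps an audio example to the parameters of a tiny
# 1 -> hidden -> 1 INR, which is then applied functionally to time coordinates.
import torch
import torch.nn as nn


class HyperNet(nn.Module):
    def __init__(self, audio_len: int = 16000, hidden: int = 64):
        super().__init__()
        self.hidden = hidden
        n_params = (1 * hidden + hidden) + (hidden * 1 + 1)  # W1, b1, W2, b2
        self.encoder = nn.Sequential(
            nn.Linear(audio_len, 256), nn.ReLU(), nn.Linear(256, n_params)
        )

    def forward(self, audio: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        h = self.hidden
        params = self.encoder(audio)  # flat vector holding all target-INR weights
        w1, b1, w2, b2 = params.split([h, h, h, 1])
        # Apply the generated INR functionally: t -> sin(W1 t + b1) -> W2 (.) + b2
        z = torch.sin(30.0 * (t @ w1.view(1, h) + b1))
        return z @ w2.view(h, 1) + b2


hyper = HyperNet()
audio = torch.randn(16000)                       # one second of input audio
t = torch.linspace(-1, 1, 16000).unsqueeze(-1)   # time coordinates to decode
recon = hyper(audio, t)                          # (16000, 1) INR reconstruction
```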

Progressive Latent Replay for efficient Generative Rehearsal

no code implementations • 4 Jul 2022 • Stanisław Pawlak, Filip Szatkowski, Michał Bortkiewicz, Jan Dubiński, Tomasz Trzciński

We introduce a new method for internal replay that modulates the frequency of rehearsal based on the depth of the network.

Continual Learning
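A skeleton of the idea as stated in the abstract, with all specifics assumed: generated latent features from past tasks are replayed at intermediate layers ("internal replay"), and each depth is rehearsed on its own schedule. The direction of the schedule (deeper layers rehearsed more often), the stand-in generator, and every name below are hypothetical, not the paper's exact recipe.

```python
# Depth-modulated internal replay: each block rehearses replayed latents at its
# own frequency, instead of replaying raw inputs through the whole network.
import torch
import torch.nn as nn

blocks = nn.ModuleList([nn.Linear(64, 64), nn.Linear(64, 64), nn.Linear(64, 64)])
# Replay every k-th step per depth; smaller k = more frequent rehearsal.
replay_every = {0: 8, 1: 4, 2: 2}  # hypothetical schedule favoring deeper layers


def replay_generator(depth: int) -> torch.Tensor:
    """Stand-in for a trained generator of past-task latent features at a depth."""
    return torch.randn(32, 64)


for step in range(100):
    # ... a normal optimization step on the current task would happen here ...
    for depth, block in enumerate(blocks):
        if step % replay_every[depth] == 0:
            latents = replay_generator(depth)  # rehearse past-task latents
            _ = block(latents)  # in real training, a replay loss would be applied
```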
