Search Results for author: Tomasz Trzciński

Found 45 papers, 23 papers with code

The Tunnel Effect: Building Data Representations in Deep Neural Networks

no code implementations31 May 2023 Wojciech Masarczyk, Mateusz Ostaszewski, Ehsan Imani, Razvan Pascanu, Piotr Miłoś, Tomasz Trzciński

Deep neural networks are widely known for their remarkable effectiveness across various tasks, with the consensus that deeper networks implicitly learn more complex data representations.

Continual Learning Image Classification +1

BlendFields: Few-Shot Example-Driven Facial Modeling

no code implementations CVPR 2023 Kacper Kania, Stephan J. Garbin, Andrea Tagliasacchi, Virginia Estellers, Kwang Moo Yi, Julien Valentin, Tomasz Trzciński, Marek Kowalski

Generating faithful visualizations of human faces requires capturing both coarse and fine-level details of the face geometry and appearance.

Exploring Continual Learning of Diffusion Models

no code implementations27 Mar 2023 Michał Zając, Kamil Deja, Anna Kuzina, Jakub M. Tomczak, Tomasz Trzciński, Florian Shkurti, Piotr Miłoś

Diffusion models have achieved remarkable success in generating high-quality images thanks to their novel training procedures applied to unprecedented amounts of data.

Benchmarking Continual Learning +1

Active Visual Exploration Based on Attention-Map Entropy

no code implementations11 Mar 2023 Adam Pardyl, Grzegorz Rypeść, Grzegorz Kurzejamski, Bartosz Zieliński, Tomasz Trzciński

Active visual exploration addresses the issue of limited sensor capabilities in real-world scenarios, where successive observations are actively chosen based on the environment.

Hypernetworks build Implicit Neural Representations of Sounds

no code implementations9 Feb 2023 Filip Szatkowski, Karol J. Piczak, Przemysław Spurek, Jacek Tabor, Tomasz Trzciński

Implicit Neural Representations (INRs) are now used to represent multimedia signals across various real-life applications, including image super-resolution, image compression, and 3D rendering.

Image Compression Image Super-Resolution +1

Towards Unsupervised Visual Reasoning: Do Off-The-Shelf Features Know How to Reason?

no code implementations20 Dec 2022 Monika Wysoczańska, Tom Monnier, Tomasz Trzciński, David Picard

Recent advances in visual representation learning have enabled an abundance of powerful off-the-shelf features that are ready to use for numerous downstream tasks.

Question Answering Representation Learning +2

Emergency action termination for immediate reaction in hierarchical reinforcement learning

no code implementations11 Nov 2022 Michał Bortkiewicz, Jakub Łyskawa, Paweł Wawrzyński, Mateusz Ostaszewski, Artur Grudkowski, Tomasz Trzciński

In this paper, we address this gap in state-of-the-art approaches and propose a method in which the validity of higher-level actions (and thus lower-level goals) is constantly verified at the higher level.

Hierarchical Reinforcement Learning reinforcement-learning +1

HyperSound: Generating Implicit Neural Representations of Audio Signals with Hypernetworks

no code implementations3 Nov 2022 Filip Szatkowski, Karol J. Piczak, Przemysław Spurek, Jacek Tabor, Tomasz Trzciński

Implicit neural representations (INRs) are a rapidly growing research field, which provides alternative ways to represent multimedia signals.

Image Super-Resolution Meta-Learning

Selectively increasing the diversity of GAN-generated samples

no code implementations4 Jul 2022 Jan Dubiński, Kamil Deja, Sandro Wenzel, Przemysław Rokita, Tomasz Trzciński

Conditional GANs are especially prone to mode collapse, as they tend to ignore the input noise vector and focus on the conditional information.

Progressive Latent Replay for efficient Generative Rehearsal

no code implementations4 Jul 2022 Stanisław Pawlak, Filip Szatkowski, Michał Bortkiewicz, Jan Dubiński, Tomasz Trzciński

We introduce a new method for internal replay that modulates the frequency of rehearsal based on the depth of the network.

Continual Learning
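The idea described above — rehearsing internal representations less often the deeper they sit in the network — can be illustrated with a tiny schedule function. This is a hypothetical sketch, not the paper's actual Progressive Latent Replay implementation; the function name, the geometric spacing, and the `base` parameter are illustrative assumptions.

```python
# Hypothetical schedule: rehearse shallower blocks more often than deeper ones.
def replay_schedule(num_blocks, step, base=2):
    """Return the indices of network blocks that receive an internal replay
    at this training step. Block b is rehearsed every base**b steps, so
    deeper blocks (larger b) are replayed less frequently."""
    return [b for b in range(num_blocks) if step % (base ** b) == 0]

# Block 0 is replayed every step, block 1 every 2nd step, block 2 every 4th.
print(replay_schedule(3, 4))  # -> [0, 1, 2]
print(replay_schedule(3, 3))  # -> [0]
```

Under this sketch, the total rehearsal cost per step shrinks roughly geometrically with depth, which is the kind of efficiency gain the snippet alludes to.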

Continual Learning with Guarantees via Weight Interval Constraints

1 code implementation16 Jun 2022 Maciej Wołczyk, Karol J. Piczak, Bartosz Wójcik, Łukasz Pustelnik, Paweł Morawiecki, Jacek Tabor, Tomasz Trzciński, Przemysław Spurek

We introduce a new training paradigm that enforces interval constraints on neural network parameter space to control forgetting.

Continual Learning
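The general mechanism of interval constraints on the parameter space can be sketched as a projection step: after a task is learned, each weight is assigned an interval, and later updates are clipped back into it so the new task cannot destroy the old solution. This is a minimal illustration of the idea, not the authors' actual method; the loop, the fixed `radius`, and the random "gradients" are assumptions made for the sketch.

```python
import numpy as np

def project_to_interval(params, lower, upper):
    """Clip each parameter back into its allowed interval [lower, upper]."""
    return np.clip(params, lower, upper)

rng = np.random.default_rng(0)
params = rng.normal(size=5)                       # weights after finishing task 1
radius = 0.1
lower, upper = params - radius, params + radius   # region that preserves task 1

for _ in range(100):                              # task-2 updates (random gradients
    grad = rng.normal(size=5)                     # stand in for a real loss here)
    params = params - 0.05 * grad
    params = project_to_interval(params, lower, upper)

# The weights never leave the interval fixed after task 1.
assert np.all(params >= lower) and np.all(params <= upper)
```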

On Analyzing Generative and Denoising Capabilities of Diffusion-based Deep Generative Models

1 code implementation31 May 2022 Kamil Deja, Anna Kuzina, Tomasz Trzciński, Jakub M. Tomczak

Their main strength comes from their unique setup in which a model (the backward diffusion process) is trained to reverse the forward diffusion process, which gradually adds noise to the input signal.


Deep Learning Fetal Ultrasound Video Model Match Human Observers in Biometric Measurements

1 code implementation27 May 2022 Szymon Płotka, Adam Klasa, Aneta Lisowska, Joanna Seliga-Siwecka, Michał Lipa, Tomasz Trzciński, Arkadiusz Sitek

We found that automated fetal biometric measurements obtained by FUVAI were comparable to the measurements performed by experienced sonographers. The observed differences in measurement values were within the range of inter- and intra-observer variability.

BabyNet: Residual Transformer Module for Birth Weight Prediction on Fetal Ultrasound Video

1 code implementation19 May 2022 Szymon Płotka, Michal K. Grzeszczyk, Robert Brawura-Biskupski-Samaha, Paweł Gutaj, Michał Lipa, Tomasz Trzciński, Arkadiusz Sitek

Predicting fetal weight at birth is an important aspect of perinatal care, particularly in the context of antenatal management, which includes the planned timing and the mode of delivery.


Continual learning on 3D point clouds with random compressed rehearsal

no code implementations16 May 2022 Maciej Zamorski, Michał Stypułkowski, Konrad Karanowski, Tomasz Trzciński, Maciej Zięba

By using rehearsal and reconstruction as regularization methods, our approach significantly reduces catastrophic forgetting compared to existing solutions on several popular point cloud datasets, in two continual learning settings: when the task is known beforehand, and in the more challenging scenario where task information is unknown to the model.

Continual Learning Visual Reasoning

POTHER: Patch-Voted Deep Learning-Based Chest X-ray Bias Analysis for COVID-19 Detection

1 code implementation23 Jan 2022 Tomasz Szczepański, Arkadiusz Sitek, Tomasz Trzciński, Szymon Płotka

We show that our proposed method is more robust than previous attempts to counter confounding factors such as ECG leads in chest X-rays that often influence model classification decisions.

Explainable artificial intelligence

Logarithmic Continual Learning

no code implementations17 Jan 2022 Wojciech Masarczyk, Paweł Wawrzyński, Daniel Marczak, Kamil Deja, Tomasz Trzciński

Our approach leverages allocation of past data in a set of generative models such that most of them do not require retraining after a task.

Continual Learning

CoNeRF: Controllable Neural Radiance Fields

1 code implementation CVPR 2022 Kacper Kania, Kwang Moo Yi, Marek Kowalski, Tomasz Trzciński, Andrea Tagliasacchi

We extend neural 3D representations to allow for intuitive and interpretable user control beyond novel view rendering (i.e., camera control).

3D Face Modelling 3D Reconstruction +1

HyperCube: Implicit Field Representations of Voxelized 3D Models

1 code implementation12 Oct 2021 Magdalena Proszewska, Marcin Mazur, Tomasz Trzciński, Przemysław Spurek

Recently introduced implicit field representations offer an effective way of generating 3D object shapes.

Efficient GPU implementation of randomized SVD and its applications

no code implementations5 Oct 2021 Łukasz Struski, Paweł Morkisz, Przemysław Spurek, Samuel Rodriguez Bernabeu, Tomasz Trzciński

In this work, we leverage efficient processing operations that can be run in parallel on modern Graphical Processing Units (GPUs), the predominant computing architecture used, e.g., in deep learning, to reduce the computational burden of computing matrix decompositions.

Data Compression Dimensionality Reduction

On robustness of generative representations against catastrophic forgetting

no code implementations4 Sep 2021 Wojciech Masarczyk, Kamil Deja, Tomasz Trzciński

Catastrophic forgetting of previously learned knowledge while learning new tasks is a widely observed limitation of contemporary neural networks.

Continual Learning Specificity

FetalNet: Multi-task Deep Learning Framework for Fetal Ultrasound Biometric Measurements

1 code implementation14 Jul 2021 Szymon Płotka, Tomasz Włodarczyk, Adam Klasa, Michał Lipa, Arkadiusz Sitek, Tomasz Trzciński

The main goal in fetal ultrasound scan video analysis is to find proper standard planes to measure the fetal head, abdomen and femur.

Multiband VAE: Latent Space Alignment for Knowledge Consolidation in Continual Learning

1 code implementation23 Jun 2021 Kamil Deja, Paweł Wawrzyński, Wojciech Masarczyk, Daniel Marczak, Tomasz Trzciński

We propose a new method for unsupervised generative continual learning through realignment of the Variational Autoencoder's latent space.

Continual Learning Disentanglement +1

Visual Probing: Cognitive Framework for Explaining Self-Supervised Image Representations

1 code implementation21 Jun 2021 Witold Oleszkiewicz, Dominika Basaj, Igor Sieradzki, Michał Górszczak, Barbara Rychalska, Koryna Lewandowska, Tomasz Trzciński, Bartosz Zieliński

Motivated by this observation, we introduce a novel visual probing framework for explaining the self-supervised models by leveraging probing tasks employed previously in natural language processing.

Representation Learning

Zero Time Waste: Recycling Predictions in Early Exit Neural Networks

1 code implementation NeurIPS 2021 Maciej Wołczyk, Bartosz Wójcik, Klaudia Bałazy, Igor Podolak, Jacek Tabor, Marek Śmieja, Tomasz Trzciński

The problem of reducing processing time of large deep learning models is a fundamental challenge in many real-world applications.

Convolutional Neural Networks in Orthodontics: a review

no code implementations18 Apr 2021 Szymon Płotka, Tomasz Włodarczyk, Ryszard Szczerba, Przemysław Rokita, Patrycja Bartkowska, Oskar Komisarek, Artur Matthews-Brzozowski, Tomasz Trzciński

Convolutional neural networks (CNNs) are used in many areas of computer vision, such as object tracking and recognition, security, military, and biomedical image analysis.

Object Tracking

TrajeVAE: Controllable Human Motion Generation from Trajectories

no code implementations1 Apr 2021 Kacper Kania, Marek Kowalski, Tomasz Trzciński

The creation of plausible and controllable 3D human motion animations is a long-standing problem that requires a manual intervention of skilled artists.

Pose Prediction

BinPlay: A Binary Latent Autoencoder for Generative Replay Continual Learning

1 code implementation25 Nov 2020 Kamil Deja, Paweł Wawrzyński, Daniel Marczak, Wojciech Masarczyk, Tomasz Trzciński

We introduce a binary latent space autoencoder architecture to rehearse training samples for the continual learning of neural networks.

Continual Learning

Representing Point Clouds with Generative Conditional Invertible Flow Networks

1 code implementation7 Oct 2020 Michał Stypułkowski, Kacper Kania, Maciej Zamorski, Maciej Zięba, Tomasz Trzciński, Jan Chorowski

To exploit similarities between same-class objects and to improve model performance, we turn to weight sharing: networks that model densities of points belonging to objects in the same family share all parameters with the exception of a small, object-specific embedding vector.

Point Cloud Registration
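The weight-sharing scheme described in the snippet — one shared network for a whole object family, with only a small per-object embedding varying — can be sketched as follows. This is an illustrative toy, not the paper's conditional invertible flow: the network is a plain two-layer MLP, and all sizes, names (`shared_density_net`, `chair_0`), and the random initialization are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared parameters for all objects in a family (sizes are illustrative).
W1 = rng.normal(size=(16, 3 + 4)) * 0.1   # input: 3D point + 4-dim object embedding
W2 = rng.normal(size=(1, 16)) * 0.1

# Small per-object embedding vectors are the only object-specific parameters.
embeddings = {obj: rng.normal(size=4) for obj in ("chair_0", "chair_1")}

def shared_density_net(point, obj):
    """Score a 3D point for one object: every weight is shared across the
    family except the object's own embedding vector."""
    x = np.concatenate([point, embeddings[obj]])
    h = np.tanh(W1 @ x)
    return float(W2 @ h)

# Two objects give different scores for the same point, via their embeddings alone.
p = np.zeros(3)
assert shared_density_net(p, "chair_0") != shared_density_net(p, "chair_1")
```

The design choice this illustrates is that per-object storage shrinks from a full set of network weights to a four-number vector.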

Spontaneous preterm birth prediction using convolutional neural networks

no code implementations16 Aug 2020 Tomasz Włodarczyk, Szymon Płotka, Przemysław Rokita, Nicole Sochacki-Wójcicka, Jakub Wójcicki, Michał Lipa, Tomasz Trzciński

Based on the results and model efficiency, we extend U-Net by adding a parallel branch for the classification task.

HyperFlow: Representing 3D Objects as Surfaces

1 code implementation15 Jun 2020 Przemysław Spurek, Maciej Zięba, Jacek Tabor, Tomasz Trzciński

To that end, we devise a generative model that uses a hypernetwork to return the weights of a Continuous Normalizing Flows (CNF) target network.

Autonomous Driving Quantization

End-to-end Sinkhorn Autoencoder with Noise Generator

1 code implementation11 Jun 2020 Kamil Deja, Jan Dubiński, Piotr Nowak, Sandro Wenzel, Tomasz Trzciński

To address these shortcomings, we introduce a novel method, dubbed end-to-end Sinkhorn Autoencoder, that leverages the Sinkhorn algorithm to explicitly align the distributions of encoded real data examples and generated noise.


Understanding the robustness of deep neural network classifiers for breast cancer screening

no code implementations23 Mar 2020 Witold Oleszkiewicz, Taro Makino, Stanisław Jastrzębski, Tomasz Trzciński, Linda Moy, Kyunghyun Cho, Laura Heacock, Krzysztof J. Geras

Deep neural networks (DNNs) show promise in breast cancer screening, but their robustness to input perturbations must be better understood before they can be clinically implemented.

Hypernetwork approach to generating point clouds

2 code implementations ICML 2020 Przemysław Spurek, Sebastian Winczowski, Jacek Tabor, Maciej Zamorski, Maciej Zięba, Tomasz Trzciński

The main idea of our HyperCloud method is to build a hypernetwork that returns the weights of a particular neural network (target network) trained to map points from a uniform unit-ball distribution into a 3D shape.

Generating 3D Point Clouds
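The hypernetwork-plus-target-network pattern described in the HyperCloud snippet (and in the two hypernetwork entries above) can be sketched minimally: one network emits a weight vector, which is reshaped into the parameters of a second network that maps unit-ball samples to 3D points. This is a hedged toy, not the HyperCloud architecture; the linear hypernetwork, the layer sizes, and the omission of biases are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypernetwork (a single linear map here, for illustration): shape code -> the
# flattened weights of a tiny target MLP mapping 3D points to 3D surface points.
D_IN, D_H = 3, 8
n_target = D_H * D_IN + 3 * D_H           # W1 (8x3) and W2 (3x8), biases omitted
H = rng.normal(size=(n_target, 16)) * 0.1

def target_forward(point, weights):
    """Run the target MLP whose weights were emitted by the hypernetwork."""
    W1 = weights[: D_H * D_IN].reshape(D_H, D_IN)
    W2 = weights[D_H * D_IN:].reshape(3, D_H)
    return W2 @ np.tanh(W1 @ point)

shape_code = rng.normal(size=16)           # one latent code per 3D object
weights = H @ shape_code                   # hypernetwork output = target weights

ball_sample = rng.normal(size=3)
ball_sample /= np.linalg.norm(ball_sample)  # a point on the unit sphere
out = target_forward(ball_sample, weights)
assert out.shape == (3,)                   # mapped to a 3D point
```

Sampling many `ball_sample` points through the same emitted weights would yield a point cloud for that one shape code, which is the generative use the snippet describes.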

Estimation of preterm birth markers with U-Net segmentation network

no code implementations24 Aug 2019 Tomasz Włodarczyk, Szymon Płotka, Tomasz Trzciński, Przemysław Rokita, Nicole Sochacki-Wójcicka, Michał Lipa, Jakub Wójcicki

To achieve this goal, we propose to first use a deep neural network architecture for segmenting prenatal ultrasound images and then automatically extract two biophysical ultrasound markers, cervical length (CL) and anterior cervical angle (ACA), from the resulting images.

Neural Comic Style Transfer: Case Study

no code implementations5 Sep 2018 Maciej Pęśko, Tomasz Trzciński

The work by Gatys et al. [1] recently introduced a neural style transfer algorithm that can produce an image in the style of another image.

Style Transfer

Speaker Diarization using Deep Recurrent Convolutional Neural Networks for Speaker Embeddings

no code implementations9 Aug 2017 Pawel Cyrta, Tomasz Trzciński, Wojciech Stokowiec

In this paper we propose a new method of speaker diarization that employs a deep learning architecture to learn speaker embeddings.

Speaker Diarization

What Looks Good with my Sofa: Multimodal Search Engine for Interior Design

1 code implementation21 Jul 2017 Ivona Tautkute, Aleksandra Możejko, Wojciech Stokowiec, Tomasz Trzciński, Łukasz Brocki, Krzysztof Marasek

In this paper, we propose a multi-modal search engine for interior design that combines visual and textual queries.
