1 code implementation • 4 Nov 2023 • Jan P. Engelmann, Alessandro Palma, Jakub M. Tomczak, Fabian J. Theis, Francesco Paolo Casale
Predicting patient features from single-cell data can help identify cellular states implicated in health and disease.
no code implementations • 3 Oct 2023 • Adam Izdebski, Ewelina Weglarz-Tomczak, Ewa Szczurek, Jakub M. Tomczak
To address this, we propose Joint Transformer, which combines a Transformer decoder, a Transformer encoder, and a predictor in a joint generative model with shared weights.
no code implementations • 27 Mar 2023 • Michał Zając, Kamil Deja, Anna Kuzina, Jakub M. Tomczak, Tomasz Trzciński, Florian Shkurti, Piotr Miłoś
Diffusion models have achieved remarkable success in generating high-quality images thanks to their novel training procedures applied to unprecedented amounts of data.
1 code implementation • 20 Feb 2023 • Anna Kuzina, Jakub M. Tomczak
Hierarchical Variational Autoencoders (VAEs) are among the most popular likelihood-based generative models.
1 code implementation • 31 Jan 2023 • Kamil Deja, Tomasz Trzciński, Jakub M. Tomczak
Joint machine learning models that allow synthesizing and classifying data often offer uneven performance between those tasks or are unstable to train.
1 code implementation • 25 Jan 2023 • David M. Knigge, David W. Romero, Albert Gu, Efstratios Gavves, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn, Jan-Jakob Sonke
Performant Convolutional Neural Network (CNN) architectures must be tailored to specific tasks in order to consider the length, resolution, and dimensionality of the input data.
1 code implementation • 7 Jun 2022 • David W. Romero, David M. Knigge, Albert Gu, Erik J. Bekkers, Efstratios Gavves, Jakub M. Tomczak, Mark Hoogendoorn
The use of Convolutional Neural Networks (CNNs) is widespread in Deep Learning due to a range of desirable model properties which result in an efficient and effective machine learning framework.
1 code implementation • 31 May 2022 • Kamil Deja, Anna Kuzina, Tomasz Trzciński, Jakub M. Tomczak
The main strength of diffusion models comes from their unique setup, in which a model (the backward diffusion process) is trained to reverse the forward diffusion process, which gradually adds noise to the input signal.
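The forward noising process mentioned above can be sketched in a few lines. A minimal, illustrative version in the style of DDPM-type models (the linear noise schedule and step count are assumptions for illustration, not taken from this paper):

```python
import numpy as np

def forward_diffusion(x0, t, betas):
    """Closed-form sample of the forward process q(x_t | x_0):
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]
    eps = np.random.randn(*x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

# Linear noise schedule over 1000 steps (illustrative values).
betas = np.linspace(1e-4, 0.02, 1000)
x0 = np.ones(4)
xT = forward_diffusion(x0, t=999, betas=betas)
```

At large t, alpha_bar is close to zero, so the sample is almost pure noise; the backward model is trained to invert exactly this corruption.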
1 code implementation • 18 Mar 2022 • Anna Kuzina, Max Welling, Jakub M. Tomczak
Variational autoencoders (VAEs) are latent variable models that can generate complex objects and provide meaningful latent representations.
no code implementations • 18 Nov 2021 • Jie Luo, Aart Stuurman, Jakub M. Tomczak, Jacintha Ellers, Agoston E. Eiben
Simultaneously evolving morphologies (bodies) and controllers (brains) of robots can cause a mismatch between the inherited body and brain in the offspring.
1 code implementation • ICLR 2022 • David W. Romero, Robert-Jan Bruintjes, Jakub M. Tomczak, Erik J. Bekkers, Mark Hoogendoorn, Jan C. van Gemert
In this work, we propose FlexConv, a novel convolutional operation with which high bandwidth convolutional kernels of learnable kernel size can be learned at a fixed parameter cost.
no code implementations • 22 Sep 2021 • Justus F. Hübotter, Pablo Lanillos, Jakub M. Tomczak
In the experiments, we show that applying regularization on membrane potential and spiking output successfully avoids both dead and bursting neurons and significantly decreases the reconstruction error of the spiking auto-encoder.
1 code implementation • NeurIPS 2021 • Emile van Krieken, Jakub M. Tomczak, Annette ten Teije
Stochastic AD extends AD to stochastic computation graphs with sampling steps, which arise when modelers handle the intractable expectations common in Reinforcement Learning and Variational Inference.
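A standard tool for such intractable expectations is the score-function (REINFORCE) estimator, which rewrites the gradient of an expectation as an expectation of the function times the score. A minimal sketch on a Bernoulli variable (the toy objective f is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def score_grad(theta, f, n=200_000):
    """Score-function (REINFORCE) estimator of d/dtheta E_{x~Bern(theta)}[f(x)]
    via E[f(x) * d/dtheta log p(x|theta)]; no gradient of f is needed."""
    x = (rng.random(n) < theta).astype(float)
    score = x / theta - (1.0 - x) / (1.0 - theta)  # d/dtheta log p(x|theta)
    return float(np.mean(f(x) * score))

theta = 0.3
est = score_grad(theta, lambda x: x ** 2)
# Exact gradient: d/dtheta E[x^2] = d/dtheta theta = 1.
```

This is the kind of sampling-step gradient that stochastic computation graphs make systematic.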
1 code implementation • 10 Mar 2021 • Anna Kuzina, Max Welling, Jakub M. Tomczak
In this work, we explore adversarial attacks on Variational Autoencoders (VAEs).
1 code implementation • NeurIPS 2021 • Yura Perugachi-Diaz, Jakub M. Tomczak, Sandjai Bhulai
Furthermore, we propose a learnable weighted concatenation, which not only improves the model performance but also indicates the importance of the concatenated weighted representation.
1 code implementation • ICLR 2022 • David W. Romero, Anna Kuzina, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn
Convolutional networks are unable to handle sequences of unknown size and their memory horizon must be defined a priori.
Ranked #5 on Sequential Image Classification on Sequential MNIST
1 code implementation • 30 Nov 2020 • Jakub M. Tomczak
In this paper, we present a new class of invertible transformations with an application to flow-based generative models.
1 code implementation • 19 Oct 2020 • Ilze Amanda Auzina, Jakub M. Tomczak
The obtained results indicate the high potential of the proposed framework and the superiority of the new Markov kernel.
no code implementations • 19 Oct 2020 • Gongjin Lan, Maarten van Hooft, Matteo De Carlo, Jakub M. Tomczak, A. E. Eiben
The challenge of robotic reproduction -- creating new robots by recombining two existing ones -- has recently been cracked, and physically evolving robot systems have come within reach.
no code implementations • AABI Symposium 2021 • Yura Perugachi-Diaz, Jakub M. Tomczak, Sandjai Bhulai
We introduce Invertible Dense Networks (i-DenseNets), a more parameter efficient alternative to Residual Flows.
1 code implementation • 5 Oct 2020 • Ioannis Gatopoulos, Jakub M. Tomczak
Density estimation, compression and data generation are crucial tasks in artificial intelligence.
1 code implementation • 19 Sep 2020 • Ewelina Weglarz-Tomczak, Jakub M. Tomczak, Agoston E. Eiben, Stanley Brul
Models in systems biology are mathematical descriptions of biological processes that are used to answer questions and gain a better understanding of biological phenomena.
1 code implementation • 9 Jun 2020 • Ioannis Gatopoulos, Maarten Stol, Jakub M. Tomczak
The framework of variational autoencoders (VAEs) provides a principled method for jointly learning latent-variable models and corresponding inference models.
Ranked #62 on Image Generation on CIFAR-10 (bits/dimension metric)
1 code implementation • 9 Jun 2020 • David W. Romero, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn
In this work, we fill this gap by leveraging the symmetries inherent to time-series for the construction of equivariant neural networks.
1 code implementation • NeurIPS 2020 • Emiel Hoogeboom, Victor Garcia Satorras, Jakub M. Tomczak, Max Welling
Empirically, we show that the convolution exponential outperforms other linear transformations in generative flows on CIFAR10 and the graph convolution exponential improves the performance of graph normalizing flows.
no code implementations • 4 May 2020 • Gongjin Lan, Jakub M. Tomczak, Diederik M. Roijers, A. E. Eiben
Evolutionary Algorithms (EAs), on the other hand, rely on search heuristics that typically do not depend on all previous data and can run in constant time.
1 code implementation • 4 May 2020 • Maximilian Ilse, Jakub M. Tomczak, Patrick Forré
We argue that causal concepts can be used to explain the success of data augmentation by describing how they can weaken the spurious correlation between the observed domains and the task labels.
1 code implementation • 7 Feb 2020 • Jakub M. Tomczak, Ewelina Weglarz-Tomczak, Agoston E. Eiben
Differential evolution (DE) is a well-known type of evolutionary algorithm (EA).
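A minimal sketch of the classic DE/rand/1/bin scheme (the population size, F, and CR below are common illustrative defaults, not this paper's settings):

```python
import numpy as np

rng = np.random.default_rng(1)

def de_step(pop, fitness, F=0.8, CR=0.9):
    """One DE/rand/1/bin generation: mutate, crossover, greedy select (minimize)."""
    n, d = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        # Pick three distinct individuals, all different from i.
        a, b, c = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True          # ensure at least one gene crosses
        trial = np.where(cross, mutant, pop[i])
        if fitness(trial) <= fitness(pop[i]):  # keep the better of trial/parent
            new_pop[i] = trial
    return new_pop

sphere = lambda x: np.sum(x ** 2)
pop = rng.standard_normal((20, 5)) * 5.0
for _ in range(100):
    pop = de_step(pop, sphere)
best = min(sphere(x) for x in pop)
```

On a simple convex objective like the sphere function, this loop drives the best fitness down by several orders of magnitude within a few thousand evaluations.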
1 code implementation • ICML 2020 • David W. Romero, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn
Although group convolutional networks are able to learn powerful representations based on symmetry patterns, they lack explicit means to learn meaningful relationships among them (e.g., relative positions and poses).
no code implementations • AABI Symposium 2021 • Emiel Hoogeboom, Taco S. Cohen, Jakub M. Tomczak
Media is generally stored digitally and is therefore discrete.
no code implementations • 21 Jan 2020 • Gongjin Lan, Matteo De Carlo, Fuda van Diggelen, Jakub M. Tomczak, Diederik M. Roijers, A. E. Eiben
We generalize the well-studied problem of gait learning in modular robots in two dimensions.
no code implementations • 7 Oct 2019 • Tim R. Davidson, Jakub M. Tomczak, Efstratios Gavves
Learning suitable latent representations for observed, high-dimensional data is an important research topic underlying many recent advances in machine learning.
no code implementations • ICCV 2019 • Amirhossein Habibian, Ties van Rozendaal, Jakub M. Tomczak, Taco S. Cohen
We employ a model that consists of a 3D autoencoder with a discrete latent space and an autoregressive prior used for entropy coding.
3 code implementations • 24 May 2019 • Maximilian Ilse, Jakub M. Tomczak, Christos Louizos, Max Welling
We consider the problem of domain generalization, namely, how to learn representations given data from a set of domains that generalize to data from a previously unseen domain.
no code implementations • 26 Apr 2019 • Jakub M. Tomczak, Romain Lepert, Auke Wiggers
Optimizing the execution time of a tensor program, e.g., a convolution, involves finding its optimal configuration.
no code implementations • ICLR Workshop DeepGenStruct 2019 • Maximilian Ilse, Jakub M. Tomczak, Christos Louizos, Max Welling
We consider the problem of domain generalization, namely, how to learn representations given data from a set of domains that generalize to data from a previously unseen domain.
1 code implementation • NeurIPS 2019 • Changyong Oh, Jakub M. Tomczak, Efstratios Gavves, Max Welling
On this combinatorial graph, we propose an ARD diffusion kernel with which the GP is able to model high-order interactions between variables leading to better performance.
no code implementations • 26 Jun 2018 • Philip Botros, Jakub M. Tomczak
Decision making is a process that is extremely prone to different biases.
9 code implementations • 3 Apr 2018 • Tim R. Davidson, Luca Falorsi, Nicola De Cao, Thomas Kipf, Jakub M. Tomczak
Although the default choice of a Gaussian distribution for both the prior and posterior is mathematically convenient and often leads to competitive results, we show that this parameterization fails to model data with a latent hyperspherical structure.
Ranked #6 on Link Prediction on Cora
2 code implementations • 15 Mar 2018 • Rianne van den Berg, Leonard Hasenclever, Jakub M. Tomczak, Max Welling
Variational inference relies on flexible approximate posterior distributions.
17 code implementations • ICML 2018 • Maximilian Ilse, Jakub M. Tomczak, Max Welling
Multiple instance learning (MIL) is a variation of supervised learning where a single class label is assigned to a bag of instances.
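A key question in MIL is how to pool a bag of instance embeddings into one bag representation; an attention-based weighted average is one answer explored in this line of work. A minimal numpy sketch (the matrices V and w stand in for parameters that would normally be learned):

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_pool(H, V, w):
    """Attention-based MIL pooling: a = softmax_k(w^T tanh(V h_k)),
    z = sum_k a_k h_k. The weights reveal which instances drive the bag label."""
    scores = w @ np.tanh(V @ H.T)  # one score per instance, shape (K,)
    a = np.exp(scores - scores.max())
    a /= a.sum()                   # softmax over the bag
    return a @ H, a                # bag embedding, attention weights

K, d, L = 6, 10, 8                 # bag size, instance dim, attention dim
H = rng.standard_normal((K, d))    # instance embeddings in one bag
V = rng.standard_normal((L, d))
w = rng.standard_normal(L)
z, a = attention_pool(H, V, w)
```

Unlike max or mean pooling, the attention weights a are trainable and interpretable per instance.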
Ranked #7 on Aerial Scene Classification on UCM (50% as trainset)
no code implementations • 1 Dec 2017 • Jakub M. Tomczak, Maximilian Ilse, Max Welling
The computer-aided analysis of medical scans is a longstanding goal in the medical imaging field.
1 code implementation • 7 Jun 2017 • Jakub M. Tomczak, Max Welling
In this paper, we propose a new volume-preserving flow and show that it performs similarly to the linear general normalizing flow.
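"Volume-preserving" means the transformation's Jacobian determinant is 1, so the log-det term in the change-of-variables formula vanishes. One minimal example of such a map is a linear transformation with unit-triangular structure (illustrative only; not claimed to be this paper's exact construction):

```python
import numpy as np

def unit_lower_triangular_flow(z, L):
    """Apply z' = L z with L lower-triangular and ones on the diagonal.
    det(L) = 1, so the map is volume-preserving and contributes zero
    log-det-Jacobian to the flow's density."""
    assert np.allclose(np.diag(L), 1.0) and np.allclose(L, np.tril(L))
    return L @ z

L = np.array([[1.0, 0.0, 0.0],
              [0.7, 1.0, 0.0],
              [-0.3, 0.2, 1.0]])
z = np.array([1.0, -2.0, 0.5])
z2 = unit_lower_triangular_flow(z, L)
```

The map is trivially invertible by forward substitution, which is what makes such blocks cheap inside a flow.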
6 code implementations • 19 May 2017 • Jakub M. Tomczak, Max Welling
In this paper, we propose to extend the variational auto-encoder (VAE) framework with a new type of prior which we call "Variational Mixture of Posteriors" prior, or VampPrior for short.
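As the name suggests, the VampPrior is a mixture of the variational posterior evaluated at K learnable pseudo-inputs, p(z) = (1/K) sum_k q(z | u_k). A minimal density sketch with a Gaussian posterior (the toy "encoder" below is a stand-in for a trained network, not the paper's model):

```python
import numpy as np

def gauss_logpdf(z, mu, logvar):
    """Log-density of a diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi) + logvar
                         + (z - mu) ** 2 / np.exp(logvar), axis=-1)

def vamp_prior_logp(z, encoder, pseudo_inputs):
    """VampPrior: log p(z) = log (1/K) sum_k q(z | u_k), computed with a
    numerically stable log-mean-exp over the K mixture components."""
    logs = np.array([gauss_logpdf(z, *encoder(u)) for u in pseudo_inputs])
    m = logs.max()
    return m + np.log(np.mean(np.exp(logs - m)))

# Toy "encoder": a hypothetical linear map returning (mean, log-variance).
encoder = lambda u: (0.5 * u, np.zeros_like(u))
pseudo = [np.zeros(2), np.ones(2)]
lp = vamp_prior_logp(np.zeros(2), encoder, pseudo)
```

Because the mixture components share the encoder, the prior is coupled to the posterior and adapts during training.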
2 code implementations • 29 Nov 2016 • Jakub M. Tomczak, Max Welling
One way of enriching the variational posterior distribution is the application of normalizing flows, i.e., a series of invertible transformations applied to latent variables with a simple posterior.
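The invertible transformations in such flows can be as simple as orthogonal reflections. A minimal Householder-reflection sketch (offered as an illustration of one such building block, not as this paper's full method):

```python
import numpy as np

def householder(z, v):
    """Householder reflection H z with H = I - 2 v v^T / ||v||^2.
    H is orthogonal (H = H^T = H^{-1}) with |det(H)| = 1, so applying it
    costs no log-det-Jacobian correction and is trivially invertible."""
    v = v / np.linalg.norm(v)
    return z - 2.0 * v * (v @ z)

z = np.array([1.0, 2.0, 3.0])
v = np.array([0.5, -1.0, 2.0])
z2 = householder(z, v)
```

Since each reflection is its own inverse, stacking several with different vectors v yields a richer yet still cheap-to-invert posterior.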
no code implementations • 23 Oct 2016 • Adam Gonczarek, Jakub M. Tomczak, Szymon Zaręba, Joanna Kaczmar, Piotr Dąbrowski, Michał J. Walczak
We introduce a deep learning architecture for structure-based virtual screening that generates fixed-sized fingerprints of proteins and small molecules by applying learnable atom convolution and softmax operations to each compound separately.
no code implementations • 16 Jul 2014 • Jakub M. Tomczak, Adam Gonczarek
The subspace Restricted Boltzmann Machine (subspaceRBM) is a third-order Boltzmann machine with multiplicative interactions between one visible and two hidden units.
no code implementations • 28 Aug 2013 • Jakub M. Tomczak
In this paper, we apply Classification Restricted Boltzmann Machine (ClassRBM) to the problem of predicting breast cancer recurrence.
no code implementations • 30 Jul 2012 • Jakub M. Tomczak, Jerzy Swiatek, Krzysztof Latawiec
In this paper, we present the Gaussian process regression as the predictive model for Quality-of-Service (QoS) attributes in Web service systems.
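Gaussian process regression yields a posterior mean and variance in closed form, which is what makes it attractive as a predictive model. A minimal RBF-kernel sketch (the hyperparameters and toy data are illustrative, not the paper's QoS setup):

```python
import numpy as np

def gp_predict(X, y, Xs, length=1.0, noise=1e-2):
    """GP regression with an RBF kernel on 1-D inputs:
    mean = K_s^T (K + noise I)^{-1} y,
    var  = diag(K_ss - K_s^T (K + noise I)^{-1} K_s)."""
    k = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / length ** 2)
    K = k(X, X) + noise * np.eye(len(X))   # noisy training covariance
    Ks, Kss = k(X, Xs), k(Xs, Xs)
    alpha = np.linalg.solve(K, y)
    mean = Ks.T @ alpha
    var = np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))
    return mean, var

X = np.array([0.0, 1.0, 2.0])
y = np.sin(X)                              # toy targets
mean, var = gp_predict(X, y, np.array([1.0]))
```

The predictive variance gives an uncertainty estimate alongside each prediction, e.g., for a QoS attribute at an unobserved load level.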