Search Results for author: Jakub M. Tomczak

Found 49 papers, 30 papers with code

De Novo Drug Design with Joint Transformers

no code implementations 3 Oct 2023 Adam Izdebski, Ewelina Weglarz-Tomczak, Ewa Szczurek, Jakub M. Tomczak

To address this, we propose the Joint Transformer, which combines a Transformer decoder, a Transformer encoder, and a predictor in a joint generative model with shared weights.

Exploring Continual Learning of Diffusion Models

no code implementations 27 Mar 2023 Michał Zając, Kamil Deja, Anna Kuzina, Jakub M. Tomczak, Tomasz Trzciński, Florian Shkurti, Piotr Miłoś

Diffusion models have achieved remarkable success in generating high-quality images thanks to their novel training procedures applied to unprecedented amounts of data.

Benchmarking Continual Learning +1

Discouraging posterior collapse in hierarchical Variational Autoencoders using context

1 code implementation 20 Feb 2023 Anna Kuzina, Jakub M. Tomczak

Hierarchical Variational Autoencoders (VAEs) are among the most popular likelihood-based generative models.

Learning Data Representations with Joint Diffusion Models

1 code implementation 31 Jan 2023 Kamil Deja, Tomasz Trzcinski, Jakub M. Tomczak

Joint machine learning models that allow synthesizing and classifying data often offer uneven performance between those tasks or are unstable to train.

Counterfactual Domain Adaptation

Modelling Long Range Dependencies in $N$D: From Task-Specific to a General Purpose CNN

1 code implementation 25 Jan 2023 David M. Knigge, David W. Romero, Albert Gu, Efstratios Gavves, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn, Jan-Jakob Sonke

Performant Convolutional Neural Network (CNN) architectures must be tailored to specific tasks in order to consider the length, resolution, and dimensionality of the input data.

Towards a General Purpose CNN for Long Range Dependencies in $N$D

1 code implementation 7 Jun 2022 David W. Romero, David M. Knigge, Albert Gu, Erik J. Bekkers, Efstratios Gavves, Jakub M. Tomczak, Mark Hoogendoorn

The use of Convolutional Neural Networks (CNNs) is widespread in Deep Learning due to a range of desirable model properties which result in an efficient and effective machine learning framework.

On Analyzing Generative and Denoising Capabilities of Diffusion-based Deep Generative Models

1 code implementation 31 May 2022 Kamil Deja, Anna Kuzina, Tomasz Trzciński, Jakub M. Tomczak

Their main strength comes from their unique setup in which a model (the backward diffusion process) is trained to reverse the forward diffusion process, which gradually adds noise to the input signal.

Denoising
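The forward process described in the snippet above can be sampled in closed form. The following is a minimal sketch of a generic DDPM-style forward diffusion; the linear schedule, shapes, and function names are illustrative choices, not necessarily the paper's exact setup:

```python
import numpy as np

def forward_diffusion(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form for a DDPM-style schedule:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps, eps ~ N(0, I)."""
    alphas = 1.0 - betas
    abar = np.cumprod(alphas)[t]          # \bar{alpha}_t, product of alphas up to t
    eps = rng.standard_normal(x0.shape)   # the noise a denoising model learns to predict
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * eps, eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)     # a common linear variance schedule
x0 = rng.standard_normal((4, 8))
x_noisy, eps = forward_diffusion(x0, t=999, betas=betas, rng=rng)
```

At the final step the cumulative product of alphas is nearly zero, so the sample is almost pure noise; at t = 0 it is almost the clean input, which is the gradual corruption the backward process is trained to reverse.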

Alleviating Adversarial Attacks on Variational Autoencoders with MCMC

1 code implementation 18 Mar 2022 Anna Kuzina, Max Welling, Jakub M. Tomczak

Variational autoencoders (VAEs) are latent variable models that can generate complex objects and provide meaningful latent representations.

Adversarial Attack

The Effects of Learning in Morphologically Evolving Robot Systems

no code implementations 18 Nov 2021 Jie Luo, Aart Stuurman, Jakub M. Tomczak, Jacintha Ellers, Agoston E. Eiben

Simultaneously evolving morphologies (bodies) and controllers (brains) of robots can cause a mismatch between the inherited body and brain in the offspring.

Training Deep Spiking Auto-encoders without Bursting or Dying Neurons through Regularization

no code implementations 22 Sep 2021 Justus F. Hübotter, Pablo Lanillos, Jakub M. Tomczak

In the experiments, we show that applying regularization on membrane potential and spiking output successfully avoids both dead and bursting neurons and significantly decreases the reconstruction error of the spiking auto-encoder.

Image Reconstruction

Storchastic: A Framework for General Stochastic Automatic Differentiation

1 code implementation NeurIPS 2021 Emile van Krieken, Jakub M. Tomczak, Annette ten Teije

Stochastic AD extends AD to stochastic computation graphs with sampling steps, which arise when modelers handle the intractable expectations common in Reinforcement Learning and Variational Inference.

Variational Inference
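A minimal illustration of differentiating through a sampling step, the problem the snippet describes: the sketch below implements the generic score-function (REINFORCE) estimator with a constant baseline, not Storchastic's own API.

```python
import numpy as np

def score_function_grad(f, theta, n=200_000, seed=0):
    """Estimate d/dtheta E_{x~N(theta,1)}[f(x)] via the score-function trick:
    grad = E[f(x) * d/dtheta log p(x; theta)] = E[f(x) * (x - theta)]."""
    rng = np.random.default_rng(seed)
    x = rng.normal(theta, 1.0, size=n)
    fx = f(x)
    baseline = fx.mean()  # constant baseline: keeps the estimator unbiased, lowers variance
    return np.mean((fx - baseline) * (x - theta))

# For f(x) = x^2 and x ~ N(theta, 1), E[f(x)] = theta^2 + 1, so the true gradient is 2*theta.
g = score_function_grad(lambda x: x ** 2, theta=1.5)
```

Frameworks like Storchastic automate exactly this kind of bookkeeping (which estimator, which baseline) across arbitrary stochastic computation graphs, rather than requiring it to be hand-derived per sampling step.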

Diagnosing Vulnerability of Variational Auto-Encoders to Adversarial Attacks

1 code implementation 10 Mar 2021 Anna Kuzina, Max Welling, Jakub M. Tomczak

In this work, we explore adversarial attacks on the Variational Autoencoders (VAE).

Invertible DenseNets with Concatenated LipSwish

1 code implementation NeurIPS 2021 Yura Perugachi-Diaz, Jakub M. Tomczak, Sandjai Bhulai

Furthermore, we propose a learnable weighted concatenation, which not only improves the model performance but also indicates the importance of the concatenated weighted representation.

Density Estimation

General Invertible Transformations for Flow-based Generative Modeling

1 code implementation 30 Nov 2020 Jakub M. Tomczak

In this paper, we present a new class of invertible transformations with an application to flow-based generative models.

ABC-Di: Approximate Bayesian Computation for Discrete Data

1 code implementation 19 Oct 2020 Ilze Amanda Auzina, Jakub M. Tomczak

The obtained results indicate the high potential of the proposed framework and the superiority of the new Markov kernel.

Neural Architecture Search

Learning Locomotion Skills in Evolvable Robots

no code implementations 19 Oct 2020 Gongjin Lan, Maarten van Hooft, Matteo De Carlo, Jakub M. Tomczak, A. E. Eiben

The challenge of robotic reproduction -- making new robots by recombining two existing ones -- has recently been cracked, and physically evolving robot systems have come within reach.

Invertible DenseNets

no code implementations AABI Symposium 2021 Yura Perugachi-Diaz, Jakub M. Tomczak, Sandjai Bhulai

We introduce Invertible Dense Networks (i-DenseNets), a more parameter efficient alternative to Residual Flows.

Self-Supervised Variational Auto-Encoders

1 code implementation 5 Oct 2020 Ioannis Gatopoulos, Jakub M. Tomczak

Density estimation, compression and data generation are crucial tasks in artificial intelligence.

Data Compression Density Estimation +1

Population-based Optimization for Kinetic Parameter Identification in Glycolytic Pathway in Saccharomyces cerevisiae

1 code implementation 19 Sep 2020 Ewelina Weglarz-Tomczak, Jakub M. Tomczak, Agoston E. Eiben, Stanley Brul

Models in systems biology are mathematical descriptions of biological processes that are used to answer questions and gain a better understanding of biological phenomena.

Super-resolution Variational Auto-Encoders

1 code implementation 9 Jun 2020 Ioannis Gatopoulos, Maarten Stol, Jakub M. Tomczak

The framework of variational autoencoders (VAEs) provides a principled method for jointly learning latent-variable models and corresponding inference models.

Ranked #62 on Image Generation on CIFAR-10 (bits/dimension metric)

Image Generation Super-Resolution

Wavelet Networks: Scale-Translation Equivariant Learning From Raw Time-Series

1 code implementation 9 Jun 2020 David W. Romero, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn

In this work, we fill this gap by leveraging the symmetries inherent to time-series for the construction of equivariant neural networks.

Descriptive Time Series +2

The Convolution Exponential and Generalized Sylvester Flows

1 code implementation NeurIPS 2020 Emiel Hoogeboom, Victor Garcia Satorras, Jakub M. Tomczak, Max Welling

Empirically, we show that the convolution exponential outperforms other linear transformations in generative flows on CIFAR10 and the graph convolution exponential improves the performance of graph normalizing flows.
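A dense-matrix sketch of the underlying idea, assuming the generic truncated power series rather than the paper's implementation: applying exp(A) needs only repeated matrix-vector (or, in the paper's setting, convolution) products, and the resulting transformation is always invertible because det exp(A) = exp(tr A) > 0.

```python
import numpy as np

def operator_exp(A, x, terms=30):
    """Apply exp(A) to x via the truncated power series
    exp(A) x = sum_k A^k x / k!  -- only products of A with a vector are
    needed, which is why the same trick works when A is a convolution."""
    out = x.copy()
    Akx = x.copy()
    for k in range(1, terms):
        Akx = A @ Akx / k   # A^k x / k!, built incrementally
        out = out + Akx
    return out

rng = np.random.default_rng(0)
A = 0.3 * rng.standard_normal((5, 5))   # small norm, so the series converges fast
x = rng.standard_normal(5)
y = operator_exp(A, x)
```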

Time Efficiency in Optimization with a Bayesian-Evolutionary Algorithm

no code implementations 4 May 2020 Gongjin Lan, Jakub M. Tomczak, Diederik M. Roijers, A. E. Eiben

Evolutionary Algorithms (EA) on the other hand rely on search heuristics that typically do not depend on all previous data and can be done in constant time.

Bayesian Optimization Evolutionary Algorithms

Selecting Data Augmentation for Simulating Interventions

1 code implementation 4 May 2020 Maximilian Ilse, Jakub M. Tomczak, Patrick Forré

We argue that causal concepts can be used to explain the success of data augmentation by describing how they can weaken the spurious correlation between the observed domains and the task labels.

Data Augmentation Domain Generalization

Attentive Group Equivariant Convolutional Networks

1 code implementation ICML 2020 David W. Romero, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn

Although group convolutional networks are able to learn powerful representations based on symmetry patterns, they lack explicit means to learn meaningful relationships among them (e.g., relative positions and poses).

Increasing Expressivity of a Hyperspherical VAE

no code implementations 7 Oct 2019 Tim R. Davidson, Jakub M. Tomczak, Efstratios Gavves

Learning suitable latent representations for observed, high-dimensional data is an important research topic underlying many recent advances in machine learning.

Video Compression With Rate-Distortion Autoencoders

no code implementations ICCV 2019 Amirhossein Habibian, Ties van Rozendaal, Jakub M. Tomczak, Taco S. Cohen

We employ a model that consists of a 3D autoencoder with a discrete latent space and an autoregressive prior used for entropy coding.

Motion Compensation Video Compression

DIVA: Domain Invariant Variational Autoencoders

3 code implementations 24 May 2019 Maximilian Ilse, Jakub M. Tomczak, Christos Louizos, Max Welling

We consider the problem of domain generalization, namely, how to learn representations given data from a set of domains that generalize to data from a previously unseen domain.

Domain Generalization Rotated MNIST

Simulating Execution Time of Tensor Programs using Graph Neural Networks

no code implementations 26 Apr 2019 Jakub M. Tomczak, Romain Lepert, Auke Wiggers

Optimizing the execution time of a tensor program, e.g., a convolution, involves finding its optimal configuration.

DIVA: Domain Invariant Variational Autoencoder

no code implementations ICLR Workshop DeepGenStruct 2019 Maximilian Ilse, Jakub M. Tomczak, Christos Louizos, Max Welling

We consider the problem of domain generalization, namely, how to learn representations given data from a set of domains that generalize to data from a previously unseen domain.

Domain Generalization Rotated MNIST

Combinatorial Bayesian Optimization using the Graph Cartesian Product

1 code implementation NeurIPS 2019 Changyong Oh, Jakub M. Tomczak, Efstratios Gavves, Max Welling

On this combinatorial graph, we propose an ARD diffusion kernel with which the GP is able to model high-order interactions between variables leading to better performance.

Bayesian Optimization Neural Architecture Search +1

Hyperspherical Variational Auto-Encoders

9 code implementations 3 Apr 2018 Tim R. Davidson, Luca Falorsi, Nicola De Cao, Thomas Kipf, Jakub M. Tomczak

Although the default choice of a Gaussian distribution for both the prior and posterior is mathematically convenient and often leads to competitive results, we show that this parameterization fails to model data with a latent hyperspherical structure.

Link Prediction

Improving Variational Auto-Encoders using convex combination linear Inverse Autoregressive Flow

1 code implementation 7 Jun 2017 Jakub M. Tomczak, Max Welling

In this paper, we propose a new volume-preserving flow and show that it performs similarly to the linear general normalizing flow.

VAE with a VampPrior

6 code implementations 19 May 2017 Jakub M. Tomczak, Max Welling

In this paper, we propose to extend the variational auto-encoder (VAE) framework with a new type of prior which we call "Variational Mixture of Posteriors" prior, or VampPrior for short.
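Schematically, the VampPrior evaluates the prior as a uniform mixture of the variational posteriors at K learnable pseudo-inputs. A minimal numpy sketch, assuming a diagonal-Gaussian encoder whose outputs at the pseudo-inputs are already given (the function names are illustrative):

```python
import numpy as np

def log_normal_diag(z, mu, log_var):
    """Log-density of a diagonal Gaussian, summed over the latent dimensions."""
    return -0.5 * np.sum(
        log_var + (z - mu) ** 2 / np.exp(log_var) + np.log(2 * np.pi), axis=-1
    )

def vamp_prior_logp(z, pseudo_mu, pseudo_log_var):
    """log p(z) = log (1/K) sum_k q(z | u_k), where (pseudo_mu, pseudo_log_var),
    each of shape (K, D), are the encoder outputs at the K pseudo-inputs u_k."""
    log_components = log_normal_diag(z[None, :], pseudo_mu, pseudo_log_var)  # (K,)
    m = log_components.max()
    return m + np.log(np.mean(np.exp(log_components - m)))  # stable log-mean-exp
```

In the actual model the pseudo-inputs live in data space and are trained jointly with the encoder, so the prior is tied to the variational posterior rather than fixed to a standard Gaussian.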

Improving Variational Auto-Encoders using Householder Flow

2 code implementations 29 Nov 2016 Jakub M. Tomczak, Max Welling

One way of enriching the variational posterior distribution is the application of normalizing flows, i.e., a series of invertible transformations applied to latent variables with a simple posterior.

Computational Efficiency
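The building block of this flow is the Householder transformation, which is orthogonal and therefore volume-preserving: its log-det-Jacobian term vanishes, making it cheap inside a normalizing flow. A minimal sketch:

```python
import numpy as np

def householder(z, v):
    """Reflect z across the hyperplane orthogonal to v:
    H z = z - 2 v (v^T z) / (v^T v).
    H is orthogonal, so |det H| = 1 and norms are preserved."""
    v = np.asarray(v, dtype=float)
    return z - 2.0 * v * (v @ z) / (v @ v)

rng = np.random.default_rng(0)
v = rng.standard_normal(4)   # in the flow, v is produced by the encoder
z = rng.standard_normal(4)
z_new = householder(z, v)
```

Applying the same reflection twice recovers z, and composing several reflections with different vectors v yields a richer full-covariance-like posterior at the cost of a few inner products per step.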

Learning Deep Architectures for Interaction Prediction in Structure-based Virtual Screening

no code implementations 23 Oct 2016 Adam Gonczarek, Jakub M. Tomczak, Szymon Zaręba, Joanna Kaczmar, Piotr Dąbrowski, Michał J. Walczak

We introduce a deep learning architecture for structure-based virtual screening that generates fixed-sized fingerprints of proteins and small molecules by applying learnable atom convolution and softmax operations to each compound separately.

BIG-bench Machine Learning

Subspace Restricted Boltzmann Machine

no code implementations 16 Jul 2014 Jakub M. Tomczak, Adam Gonczarek

The subspace Restricted Boltzmann Machine (subspaceRBM) is a third-order Boltzmann machine where multiplicative interactions are between one visible and two hidden units.

General Classification
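The third-order, multiplicative structure described above can be written schematically as an energy function. The form below is an illustrative reading of the description, not necessarily the paper's exact parameterization:

```latex
E(\mathbf{x}, \mathbf{h}, \mathbf{s})
  = -\sum_{i,j,k} W_{ijk}\, x_i\, h_j\, s_{jk}
    \;-\; \sum_i b_i x_i \;-\; \sum_j c_j h_j
```

Each term of the first sum couples one visible unit $x_i$ with two hidden units, a gate $h_j$ and a subspace unit $s_{jk}$, which is what makes the machine third-order rather than the standard bilinear RBM.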

Prediction of breast cancer recurrence using Classification Restricted Boltzmann Machine with Dropping

no code implementations 28 Aug 2013 Jakub M. Tomczak

In this paper, we apply Classification Restricted Boltzmann Machine (ClassRBM) to the problem of predicting breast cancer recurrence.

General Classification

Gaussian process regression as a predictive model for Quality-of-Service in Web service systems

no code implementations 30 Jul 2012 Jakub M. Tomczak, Jerzy Swiatek, Krzysztof Latawiec

In this paper, we present the Gaussian process regression as the predictive model for Quality-of-Service (QoS) attributes in Web service systems.

General Classification Regression
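A minimal numpy sketch of GP-posterior-mean prediction with an RBF kernel; the QoS data below (response latency as a function of request load) and all names are purely illustrative, not taken from the paper:

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential kernel k(a, b) = exp(-|a - b|^2 / (2 l^2))."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-4):
    """GP posterior mean: k(x*, X) [K(X, X) + noise * I]^{-1} y."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = rbf(x_test, x_train)
    return k_star @ np.linalg.solve(K, y_train)

# Illustrative QoS data: latency grows smoothly with load.
load = np.linspace(0.0, 1.0, 20)
latency = 0.1 + 0.5 * load ** 2
pred = gp_predict(load, latency, np.array([0.55]))
```

Beyond the point prediction shown here, the GP also yields a predictive variance, which is what makes it attractive for QoS monitoring: the model can report how uncertain its latency forecast is.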
