Search Results for author: Petra Poklukar

Found 12 papers, 5 papers with code

Training and Evaluation of Deep Policies using Reinforcement Learning and Generative Models

no code implementations18 Apr 2022 Ali Ghadirzadeh, Petra Poklukar, Karol Arndt, Chelsea Finn, Ville Kyrki, Danica Kragic, Mårten Björkman

We present a data-efficient framework for solving sequential decision-making problems which exploits the combination of reinforcement learning (RL) and latent variable generative models.

Decision Making reinforcement-learning +2
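The combination described above — a pretrained generative model decoding a low-dimensional latent into an action trajectory, with policy search performed in the latent space — can be sketched as follows. This is a minimal illustration of the general idea, not the authors' framework; the linear "decoder" and cross-entropy-method search standing in for RL are assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pretrained" generative model: decodes a 2-D latent into a
# 10-step action trajectory (here just a fixed random linear map).
W = rng.normal(size=(10, 2))

def decode(z):
    return W @ z  # trajectory of 10 scalar actions

# Toy reward: negative squared distance of the trajectory to a target.
target = np.linspace(0.0, 1.0, 10)

def reward(traj):
    return -np.sum((traj - target) ** 2)

# Cross-entropy-method search over the *latent* space, standing in for RL:
# because the search space is 2-D rather than 10-D, few samples suffice.
mean, std = np.zeros(2), np.ones(2)
for _ in range(50):
    zs = mean + std * rng.normal(size=(64, 2))
    rs = np.array([reward(decode(z)) for z in zs])
    elite = zs[np.argsort(rs)[-8:]]                   # keep the best latents
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3

best = reward(decode(mean))
```

The data efficiency comes from searching the compact latent space instead of the full trajectory space; the generative model carries the structure learned offline.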

GraphDCA -- a Framework for Node Distribution Comparison in Real and Synthetic Graphs

no code implementations8 Feb 2022 Ciwan Ceylan, Petra Poklukar, Hanna Hultin, Alexander Kravchenko, Anastasia Varava, Danica Kragic

We argue that when comparing two graphs, the distribution of node structural features is more informative than global graph statistics which are often used in practice, especially to evaluate graph generative models.
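The contrast between node-level feature distributions and global statistics can be made concrete with a toy example. The sketch below (a simplification, not GraphDCA itself) compares two graphs via the distribution of a single node structural feature — degree — using total-variation distance; the graphs and the distance choice are illustrative assumptions.

```python
from collections import Counter

def degree_distribution(edges, n_nodes):
    """Fraction of nodes having each degree (a node structural feature)."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    counts = Counter(deg[i] for i in range(n_nodes))
    return {d: c / n_nodes for d, c in counts.items()}

def total_variation(p, q):
    """Total-variation distance between two discrete distributions."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(d, 0.0) - q.get(d, 0.0)) for d in support)

# A 4-cycle and a 4-node star have the same node count but very
# different local structure, which the degree distribution exposes.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
star = [(0, 1), (0, 2), (0, 3)]

tv = total_variation(degree_distribution(cycle, 4),
                     degree_distribution(star, 4))
```

A global statistic such as node count cannot distinguish these graphs, while the node-level distribution separates them maximally.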

Geometric Multimodal Contrastive Representation Learning

1 code implementation7 Feb 2022 Petra Poklukar, Miguel Vasco, Hang Yin, Francisco S. Melo, Ana Paiva, Danica Kragic

Learning representations of multimodal data that are both informative and robust to missing modalities at test time remains a challenging problem due to the inherent heterogeneity of data obtained from different channels.

Reinforcement Learning Representation Learning

Batch Curation for Unsupervised Contrastive Representation Learning

no code implementations19 Aug 2021 Michael C. Welle, Petra Poklukar, Danica Kragic

The state-of-the-art unsupervised contrastive visual representation learning methods that have emerged recently (SimCLR, MoCo, SwAV) all use data augmentations to construct a pretext task of instance discrimination consisting of similar and dissimilar pairs of images.

Representation Learning
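The instance-discrimination pretext task mentioned above can be sketched in a few lines: two augmented views of each image form a positive pair, and a contrastive (NT-Xent-style) loss pulls them together while pushing other images away. The noise "augmentation" and random features below are placeholder assumptions; real methods use crops, color jitter, and a learned encoder.

```python
import numpy as np

rng = np.random.default_rng(1)

def augment(x):
    """Toy augmentation: additive noise stands in for crops/color jitter."""
    return x + 0.1 * rng.normal(size=x.shape)

def nt_xent(z, temperature=0.5):
    """Contrastive loss over a batch where z[2i] and z[2i+1] are a positive pair."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)          # a view is never its own pair
    pos = np.arange(len(z)) ^ 1             # positives sit side by side: 0<->1, 2<->3, ...
    logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -logprob[np.arange(len(z)), pos].mean()

# Two augmented views of each "image" build the instance-discrimination batch.
images = rng.normal(size=(4, 8))
views = np.stack([v for x in images for v in (augment(x), augment(x))])
loss = nt_xent(views)
```

Batch curation, as the paper's title suggests, concerns which similar/dissimilar pairs end up in such a batch; the loss itself is the standard contrastive ingredient.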

GeomCA: Geometric Evaluation of Data Representations

1 code implementation26 May 2021 Petra Poklukar, Anastasia Varava, Danica Kragic

Evaluating the quality of learned representations without relying on a downstream task remains one of the challenges in representation learning.

Contrastive Learning Representation Learning
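A drastically simplified stand-in for geometric, downstream-task-free evaluation (not GeomCA's actual procedure) is to measure how well the supports of two representation sets overlap via nearest-neighbor distances. The point clouds, threshold, and score below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

def support_coverage(A, B, eps):
    """Fraction of points in A lying within eps of some point in B —
    a crude geometric overlap score between two representation sets."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return float((d.min(axis=1) <= eps).mean())

# Two samples from the same distribution overlap well; a shifted one does not.
R = rng.normal(size=(200, 2))
R_same = rng.normal(size=(200, 2))
R_shift = rng.normal(size=(200, 2)) + 8.0

c_same = support_coverage(R, R_same, eps=0.5)    # high
c_shift = support_coverage(R, R_shift, eps=0.5)  # near zero
```

The appeal of such geometric scores is that they need no labels or downstream task — only the representations themselves.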

Few-Shot Learning with Weak Supervision

no code implementations ICLR Workshop Learning_to_Learn 2021 Ali Ghadirzadeh, Petra Poklukar, Xi Chen, Huaxiu Yao, Hossein Azizpour, Mårten Björkman, Chelsea Finn, Danica Kragic

Few-shot meta-learning methods aim to learn the common structure shared across a set of tasks to facilitate learning new tasks with small amounts of data.

Meta-Learning Variational Inference
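Learning shared structure across tasks so that new tasks need little data is the core of gradient-based meta-learning. The sketch below is a generic first-order MAML-style loop on toy one-parameter regression tasks — an illustration of the family of methods, not this paper's weakly supervised, variational approach; the task distribution and learning rates are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Tasks share structure: fit y = a * x, with a task-specific slope a.
def sample_task():
    a = rng.uniform(0.5, 2.0)
    x = rng.uniform(-1, 1, size=5)
    return x, a * x

def loss_grad(w, x, y):
    pred = w * x
    return np.mean((pred - y) ** 2), np.mean(2 * (pred - y) * x)

# Meta-training: adapt a shared init with one inner gradient step per task,
# then update the init so that the *adapted* parameters do well.
w, inner_lr, outer_lr = 0.0, 0.1, 0.05
for _ in range(500):
    x, y = sample_task()
    _, g = loss_grad(w, x, y)
    w_adapted = w - inner_lr * g             # fast task-specific adaptation
    _, g_meta = loss_grad(w_adapted, x, y)   # first-order meta-gradient
    w -= outer_lr * g_meta
```

After meta-training, a single inner gradient step from the shared init `w` already reduces the loss on a freshly sampled task — the few-shot behavior the abstract describes.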

Bayesian Meta-Learning for Few-Shot Policy Adaptation Across Robotic Platforms

no code implementations5 Mar 2021 Ali Ghadirzadeh, Xi Chen, Petra Poklukar, Chelsea Finn, Mårten Björkman, Danica Kragic

Our results show that the proposed method can successfully adapt a trained policy to robotic platforms with novel physical parameters, and that our meta-learning algorithm outperforms state-of-the-art methods on the introduced few-shot policy adaptation problem.

Meta-Learning

Enabling Visual Action Planning for Object Manipulation through Latent Space Roadmap

1 code implementation3 Mar 2021 Martina Lippi, Petra Poklukar, Michael C. Welle, Anastasia Varava, Hang Yin, Alessandro Marino, Danica Kragic

We present a framework for visual action planning of complex manipulation tasks with high-dimensional state spaces, focusing on manipulation of deformable objects.
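The "latent space roadmap" in the title suggests planning over a graph whose nodes are latent states and whose edges are feasible transitions. The toy sketch below captures only that planning step — a Dijkstra search over a hand-written graph; the node names, weights, and graph itself are invented for illustration, and the papers' learned latent encodings are not modeled here.

```python
import heapq

# A latent-space roadmap, in spirit: nodes stand for latent states, and
# weighted edges connect states believed reachable by a primitive action.
roadmap = {
    "start": [("fold_left", 1.0)],
    "fold_left": [("fold_right", 1.0), ("start", 1.0)],
    "fold_right": [("goal", 1.0)],
    "goal": [],
}

def plan(graph, src, dst):
    """Dijkstra over the roadmap; returns the node sequence src -> dst."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node]:
            heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return None

route = plan(roadmap, "start", "goal")
# ['start', 'fold_left', 'fold_right', 'goal']
```

Planning in a low-dimensional latent graph sidesteps searching the high-dimensional image space directly.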

Data-efficient visuomotor policy training using reinforcement learning and generative models

no code implementations26 Jul 2020 Ali Ghadirzadeh, Petra Poklukar, Ville Kyrki, Danica Kragic, Mårten Björkman

We present a data-efficient framework for solving visuomotor sequential decision-making problems which exploits the combination of reinforcement learning (RL) and latent variable generative models.

Decision Making Disentanglement +3

Latent Space Roadmap for Visual Action Planning of Deformable and Rigid Object Manipulation

1 code implementation19 Mar 2020 Martina Lippi, Petra Poklukar, Michael C. Welle, Anastasiia Varava, Hang Yin, Alessandro Marino, Danica Kragic

We present a framework for visual action planning of complex manipulation tasks with high-dimensional state spaces such as manipulation of deformable objects.

Seeing the whole picture instead of a single point: Self-supervised likelihood learning for deep generative models

no code implementations Approximate Inference AABI Symposium 2019 Petra Poklukar, Judith Bütepage, Danica Kragic

Recent findings show that deep generative models can judge out-of-distribution samples as more likely than those drawn from the same distribution as the training data.
