Search Results for author: Yordan Hristov

Found 9 papers, 1 paper with code

FAN-Trans: Online Knowledge Distillation for Facial Action Unit Detection

no code implementations • 11 Nov 2022 • Jing Yang, Jie Shen, Yiming Lin, Yordan Hristov, Maja Pantic

Our model consists of a hybrid network of convolution and transformer blocks to learn per-AU features and to model AU co-occurrences.

Action Unit Detection, Face Alignment +2
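The entry above describes a hybrid of convolution and transformer blocks that learns per-AU features and models AU co-occurrences. A minimal sketch of the co-occurrence step, assuming single-head self-attention over per-AU token vectors (all names, shapes, and the attention formulation are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def au_co_occurrence_attention(au_tokens, w_q, w_k, w_v):
    """Single-head self-attention across AU tokens.

    au_tokens: (num_aus, d) per-AU features, e.g. pooled from a conv backbone.
    Each output row mixes in the features of AUs the attention weights link
    to it, which is one way to model AU co-occurrence.
    """
    q, k, v = au_tokens @ w_q, au_tokens @ w_k, au_tokens @ w_v
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # (num_aus, num_aus)
    return attn @ v

# toy usage: 5 AU tokens of dimension 8
rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
refined = au_co_occurrence_attention(tokens, w_q, w_k, w_v)
print(refined.shape)  # (5, 8)
```

In a full model the refined tokens would feed per-AU sigmoid classifiers; here only the attention mixing is sketched.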

Learning from Demonstration with Weakly Supervised Disentanglement

no code implementations • ICLR 2021 • Yordan Hristov, Subramanian Ramamoorthy

We show that such alignment is best achieved through the use of labels from the end user, in an appropriately restricted vocabulary, in contrast to the conventional approach of the designer picking a prior over the latent variables.

Disentanglement, Reading Comprehension +1

Composing Diverse Policies for Temporally Extended Tasks

no code implementations • 18 Jul 2019 • Daniel Angelov, Yordan Hristov, Michael Burke, Subramanian Ramamoorthy

Robot control policies for temporally extended and sequenced tasks are often characterized by discontinuous switches between different local dynamics.

Hierarchical Reinforcement Learning, Motion Planning

Hybrid system identification using switching density networks

1 code implementation • 9 Jul 2019 • Michael Burke, Yordan Hristov, Subramanian Ramamoorthy

This paper introduces switching density networks, which rely on a categorical reparametrisation for hybrid system identification.

Imitation Learning, regression
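The entry above says switching density networks rely on a categorical reparametrisation, which makes the discrete choice among local dynamics differentiable. A minimal sketch of one common such reparametrisation, the Gumbel-softmax relaxation (an illustrative assumption; the paper's exact construction may differ):

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Draw a relaxed (differentiable) sample from a categorical distribution.

    Adds Gumbel noise to the logits and applies a temperature-scaled softmax.
    Low temperatures tau push the sample towards one-hot, so a network can
    effectively 'switch' between discrete modes while staying trainable by
    gradient descent.
    """
    rng = np.random.default_rng(rng)
    g = -np.log(-np.log(rng.uniform(1e-12, 1.0, size=np.shape(logits))))
    y = (np.asarray(logits, dtype=float) + g) / tau
    y = np.exp(y - y.max())
    return y / y.sum()

# toy usage: softly select among 3 local dynamics models
weights = gumbel_softmax([2.0, 0.5, -1.0], tau=0.5, rng=0)
print(weights)  # non-negative, sums to 1, biased towards the largest logit
```

In a hybrid-system setting these weights would gate a bank of local dynamics models, with the gating trained end to end.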

DynoPlan: Combining Motion Planning and Deep Neural Network based Controllers for Safe HRL

no code implementations • 24 Jun 2019 • Daniel Angelov, Yordan Hristov, Subramanian Ramamoorthy

Many realistic robotics tasks are best solved compositionally, through control architectures that sequentially invoke primitives and achieve error correction via loops and conditionals that return the system to alternative earlier states.

Robotics

Using Causal Analysis to Learn Specifications from Task Demonstrations

no code implementations • 4 Mar 2019 • Daniel Angelov, Yordan Hristov, Subramanian Ramamoorthy

In this work we show that it is possible to learn a generative model for distinct user behavioral types, extracted from human demonstrations, by enforcing clustering of preferred task solutions within the latent space.

Clustering

Interpretable Latent Spaces for Learning from Demonstration

no code implementations • 17 Jul 2018 • Yordan Hristov, Alex Lascarides, Subramanian Ramamoorthy

Effective human-robot interaction, such as in robot learning from human demonstration, requires the learning agent to be able to ground abstract concepts (such as those contained within instructions) in a corresponding high-dimensional sensory input stream from the world.

Grounding Symbols in Multi-Modal Instructions

no code implementations • WS 2017 • Yordan Hristov, Svetlin Penkov, Alex Lascarides, Subramanian Ramamoorthy

As robots begin to cohabit with humans in semi-structured environments, the need arises to understand instructions involving rich variability, for instance learning to ground symbols in the physical world.
