no code implementations • 27 Mar 2023 • Huajian Fang, Niklas Wittmer, Johannes Twiefel, Stefan Wermter, Timo Gerkmann
In this paper, we propose a multichannel partially adaptive scheme to jointly model ego-noise and environmental noise utilizing the VAE-NMF framework, where we take advantage of spatially and spectrally structured characteristics of ego-noise by pre-training the ego-noise model, while retaining the ability to adapt to unknown environmental noise.
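The entry above pre-trains a structured ego-noise model and adapts only to the unknown environmental noise. As a minimal, purely illustrative sketch of that "partially adaptive" idea at the plain-NMF level (not the paper's multichannel VAE-NMF), one can keep a pre-trained ego-noise basis fixed and update only an environmental basis and the activations:

```python
import numpy as np

def partially_adaptive_nmf(V, W_ego, n_env=8, n_iter=200, eps=1e-10):
    """Factorize a noisy magnitude spectrogram V (freq x frames) as
    V ~ [W_ego, W_env] @ H, keeping the pre-trained ego-noise basis
    W_ego fixed and adapting only W_env and the activations H.
    Standard multiplicative updates for the Euclidean cost."""
    n_freq, n_frames = V.shape
    rng = np.random.default_rng(0)
    W_env = rng.random((n_freq, n_env)) + eps
    H = rng.random((W_ego.shape[1] + n_env, n_frames)) + eps

    for _ in range(n_iter):
        W = np.concatenate([W_ego, W_env], axis=1)        # full basis
        H *= (W.T @ V) / (W.T @ (W @ H) + eps)            # update all activations
        V_hat = W @ H
        H_env = H[W_ego.shape[1]:]
        W_env *= (V @ H_env.T) / (V_hat @ H_env.T + eps)  # adapt only the env. basis
    return W_env, H
```

In the actual framework the speech model is a VAE and the processing is multichannel; this sketch only conveys which parts stay fixed and which adapt.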
no code implementations • 14 Mar 2023 • Xufeng Zhao, Mengdi Li, Cornelius Weber, Muhammad Burhan Hafez, Stefan Wermter
Programming robot behaviour in a complex world faces challenges on multiple levels, from dextrous low-level skills to high-level planning and reasoning.
no code implementations • 7 Mar 2023 • Mostafa Kotb, Cornelius Weber, Stefan Wermter
Model-based reinforcement learning (MBRL) with real-time planning has shown great potential in locomotion and manipulation control tasks.
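Real-time planning in MBRL is commonly realized with short-horizon model-predictive control; the sketch below shows a generic random-shooting planner, with `dynamics` and `reward` as hypothetical stand-ins for learned models rather than the paper's specific agent.

```python
import numpy as np

def plan_action(state, dynamics, reward, horizon=10, n_candidates=500,
                action_dim=2, rng=None):
    """Random-shooting MPC: sample action sequences, roll them out through
    the (learned) dynamics model, and return the first action of the best
    sequence. `dynamics(s, a) -> s'` and `reward(s, a) -> float` are
    assumed interfaces."""
    rng = rng or np.random.default_rng()
    best_return, best_action = -np.inf, None
    for _ in range(n_candidates):
        s, total = state, 0.0
        actions = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
        for a in actions:
            total += reward(s, a)
            s = dynamics(s, a)
        if total > best_return:
            best_return, best_action = total, actions[0]
    return best_action
```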
no code implementations • 20 Feb 2023 • Leyuan Qu, Cornelius Weber, Stefan Wermter
Furthermore, our proposed combined loss rescaling and weight consolidation methods can support continual learning of an ASR system.
Automatic Speech Recognition (ASR) +4
no code implementations • 1 Feb 2023 • Mengdi Li, Xufeng Zhao, Jae Hee Lee, Cornelius Weber, Stefan Wermter
We study a class of reinforcement learning problems where the reward signals for policy learning are generated by a discriminator that is dependent on and jointly optimized with the policy.
no code implementations • 9 Jan 2023 • Ozan Özdemir, Matthias Kerzel, Cornelius Weber, Jae Hee Lee, Muhammad Burhan Hafez, Patrick Bruns, Stefan Wermter
Only occasionally, a learning infant would receive a matching verbal description of an action it is performing, which is similar to supervised learning.
no code implementations • 14 Dec 2022 • Leyuan Qu, Taihao Li, Cornelius Weber, Theresa Pekarek-Rosin, Fuji Ren, Stefan Wermter
Human speech can be characterized by different components, including semantic content, speaker identity and prosodic information.
1 code implementation • 8 Dec 2022 • Björn Plüster, Jakob Ambsdorf, Lukas Braach, Jae Hee Lee, Stefan Wermter
Natural language explanations promise to offer intuitively understandable explanations of a neural network's decision process in complex vision-language tasks, as pursued in recent VL-NLE models.
Ranked #1 on Explanation Generation on VQA-X
no code implementations • 28 Nov 2022 • Jae Hee Lee, Michael Sioutis, Kyra Ahrens, Marjan Alirezaie, Matthias Kerzel, Stefan Wermter
In this chapter, we view this integration problem from the perspective of Neuro-Symbolic AI.
no code implementations • 23 Nov 2022 • Niclas Schroeter, Francisco Cruz, Stefan Wermter
Results obtained show the viability of introspection for episodic robotics tasks and, additionally, that the introspection-based approach can be used to generate explanations for the actions taken in a non-episodic robotics environment as well.
1 code implementation • 23 Nov 2022 • Hugo Carneiro, Cornelius Weber, Stefan Wermter
Finally, we devise a model for emotion recognition in conversations trained on the realigned MELD-FAIR videos, which outperforms state-of-the-art models for ERC based on vision alone.
Automatic Speech Recognition (ASR) +2
1 code implementation • 22 Nov 2022 • Yuan YAO, Tianyu Yu, Ao Zhang, Mengdi Li, Ruobing Xie, Cornelius Weber, Zhiyuan Liu, Hai-Tao Zheng, Stefan Wermter, Tat-Seng Chua, Maosong Sun
In this work, we present CLEVER, which formulates CKE as a distantly supervised multi-instance learning problem, where models learn to summarize commonsense relations from a bag of images about an entity pair without any human annotation on image instances.
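In distantly supervised multi-instance learning of this kind, per-image relation scores are aggregated into a single bag-level prediction for the entity pair; below is a minimal aggregation sketch (max pooling and noisy-OR, which may differ from the paper's actual summarization module):

```python
import torch

def bag_relation_scores(instance_logits, mode="max"):
    """instance_logits: (n_images_in_bag, n_relations) relation scores for
    every image depicting the same entity pair. Returns one bag-level score
    per relation."""
    probs = instance_logits.sigmoid()
    if mode == "max":                       # a bag is positive if any image is
        return probs.max(dim=0).values
    # noisy-OR: P(bag) = 1 - prod(1 - P(instance))
    return 1.0 - torch.prod(1.0 - probs, dim=0)
```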
no code implementations • 16 Nov 2022 • Leyuan Qu, Wei Wang, Taihao Li, Cornelius Weber, Stefan Wermter, Fuji Ren
Once training is completed, EmoAug enriches expressions of emotional speech in different prosodic attributes, such as stress, rhythm and intensity, by feeding different styles into the paralinguistic encoder.
Automatic Speech Recognition (ASR) +4
no code implementations • 14 Oct 2022 • Nima Rahrakhshan, Matthias Kerzel, Philipp Allgeuer, Nicolas Duczek, Stefan Wermter
The act of reaching for an object is a fundamental yet complex skill for a robotic agent, requiring a high degree of visuomotor control and coordination.
no code implementations • 18 Aug 2022 • Manfred Eppe, Christian Gumbsch, Matthias Kerzel, Phuong D. H. Nguyen, Martin V. Butz, Stefan Wermter
According to cognitive psychology and related disciplines, the development of complex problem-solving behaviour in biological agents depends on hierarchical cognitive mechanisms.
Hierarchical Reinforcement Learning • Reinforcement Learning +1
1 code implementation • 4 Aug 2022 • Xufeng Zhao, Cornelius Weber, Muhammad Burhan Hafez, Stefan Wermter
Sound is one of the most informative and abundant modalities in the real world, and it can be sensed robustly and without contact by small, inexpensive sensors that can be placed on mobile devices.
no code implementations • 15 Jul 2022 • Ozan Özdemir, Matthias Kerzel, Cornelius Weber, Jae Hee Lee, Stefan Wermter
In this work, we propose the paired gated autoencoders (PGAE) for flexible translation between robot actions and language descriptions in a tabletop object manipulation scenario.
1 code implementation • 6 Jul 2022 • Kyra Ahrens, Matthias Kerzel, Jae Hee Lee, Cornelius Weber, Stefan Wermter
Spatial reasoning poses a particular challenge for intelligent agents and is at the same time a prerequisite for their successful interaction and communication in the physical world.
1 code implementation • International Joint Conference on Artificial Intelligence 2021 • Fares Abawi, Tom Weber, Stefan Wermter
We show that gaze direction and affective representations improve prediction-to-ground-truth correspondence by at least 5% compared to dynamic saliency models without social cues.
1 code implementation • 28 May 2022 • Hassan Ali, Doreen Jirak, Stefan Wermter
Our architecture enables learning both static and dynamic gestures: by capturing a so-called "snapshot" of the gesture performance at its peak, we integrate the hand pose along with the dynamic movement.
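One way to read the "snapshot at the performance peak": select the frame with the highest motion energy of the tracked hand keypoints and pair that static pose with a simple dynamic descriptor. A rough sketch, assuming keypoint tracking is provided elsewhere:

```python
import numpy as np

def gesture_snapshot(keypoints):
    """keypoints: array of shape (n_frames, n_joints, 2 or 3) with tracked
    hand keypoints. Returns the pose at the frame of peak motion energy
    together with a simple dynamic descriptor (mean per-joint displacement)."""
    motion = np.linalg.norm(np.diff(keypoints, axis=0), axis=-1)  # (T-1, J)
    energy = motion.sum(axis=1)                                   # per frame
    peak = int(np.argmax(energy)) + 1
    static_pose = keypoints[peak]
    dynamic_feat = motion.mean(axis=0)
    return static_pose, dynamic_feat, peak
```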
no code implementations • LREC 2022 • Chandrakant Bothe, Stefan Wermter
One of the fundamental cues is politeness, which linguistically possesses properties such as social manners useful in conversational analysis.
1 code implementation • 5 May 2022 • Jae Hee Lee, Matthias Kerzel, Kyra Ahrens, Cornelius Weber, Stefan Wermter
Grounding relative directions is more difficult than grounding absolute directions because it not only requires a model to detect objects in the image and to identify spatial relation based on this information, but it also needs to recognize the orientation of objects and integrate this information into the reasoning process.
no code implementations • 9 Apr 2022 • Jakob Ambsdorf, Alina Munir, Yiyao Wei, Klaas Degkwitz, Harm Matthias Harms, Susanne Stannek, Kyra Ahrens, Dennis Becker, Erik Strahl, Tom Weber, Stefan Wermter
However, the results show that the robot that explains its moves is perceived as more lively and human-like.
2 code implementations • 8 Apr 2022 • Frank Röder, Manfred Eppe, Stefan Wermter
We show that hindsight instructions improve the learning performance, as expected.
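Hindsight instructions can be sketched as language-level goal relabeling: when the commanded instruction was not fulfilled, the trajectory is additionally stored with an instruction describing what was actually achieved. A minimal sketch with a hypothetical `describe_outcome` function:

```python
def hindsight_relabel(trajectory, instruction, achieved_outcome,
                      describe_outcome, goal_reached):
    """Return training episodes for a language-conditioned agent.
    `describe_outcome` (hypothetical) maps the achieved outcome to a natural
    language description; `goal_reached` says whether the original
    instruction was satisfied."""
    episodes = [(trajectory, instruction, 1.0 if goal_reached else 0.0)]
    if not goal_reached:
        # Pretend the achieved outcome was the instruction all along.
        hindsight_instruction = describe_outcome(achieved_outcome)
        episodes.append((trajectory, hindsight_instruction, 1.0))
    return episodes
```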
no code implementations • 4 Mar 2022 • Huajian Fang, Tal Peer, Stefan Wermter, Timo Gerkmann
Speech enhancement in the time-frequency domain is often performed by estimating a multiplicative mask to extract clean speech.
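The pipeline described above reduces, at its core, to STFT, multiplicative mask, inverse STFT; below is a minimal sketch using SciPy, where `estimate_mask` is a placeholder for whatever estimator (e.g., a neural network) produces the mask:

```python
import numpy as np
from scipy.signal import stft, istft

def enhance(noisy, fs, estimate_mask, nperseg=512):
    """Apply a multiplicative time-frequency mask to a noisy waveform.
    `estimate_mask` (placeholder) maps |STFT| to a mask in [0, 1]."""
    _, _, Z = stft(noisy, fs=fs, nperseg=nperseg)
    mask = np.clip(estimate_mask(np.abs(Z)), 0.0, 1.0)
    _, clean = istft(mask * Z, fs=fs, nperseg=nperseg)
    return clean
```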
no code implementations • LREC 2022 • Gerald Schwiebert, Cornelius Weber, Leyuan Qu, Henrique Siqueira, Stefan Wermter
Large datasets as required for deep learning of lip reading do not exist in many languages.
no code implementations • 17 Jan 2022 • Ozan Özdemir, Matthias Kerzel, Cornelius Weber, Jae Hee Lee, Stefan Wermter
Human infants learn language while interacting with their environment in which their caregivers may describe the objects and actions they perform.
no code implementations • 9 Dec 2021 • Leyuan Qu, Cornelius Weber, Stefan Wermter
The aim of this work is to investigate the impact of crossmodal self-supervised pre-training for speech reconstruction (video-to-audio) by leveraging the natural co-occurrence of audio and visual streams in videos.
1 code implementation • 11 Nov 2021 • Vadym Gryshchuk, Cornelius Weber, Chu Kiong Loo, Stefan Wermter
Lifelong learning is a long-standing aim for artificial agents that act in dynamic environments, in which an agent needs to accumulate knowledge incrementally without forgetting previously learned representations.
no code implementations • 2 Nov 2021 • Di Fu, Fares Abawi, Hugo Carneiro, Matthias Kerzel, Ziwei Chen, Erik Strahl, Xun Liu, Stefan Wermter
Our saliency prediction model was trained to detect social cues, predict audio-visual saliency, and attend selectively for the robot study.
no code implementations • 8 Oct 2021 • Siqi Cao, Di Fu, Xu Yang, Stefan Wermter, Xun Liu, Haiyan Wu
Furthermore, we discuss challenges in the responsible evaluation of cognitive methods and computational techniques, and outline directions for future work towards affective assistants capable of empathy.
no code implementations • 1 Sep 2021 • Hugo Carneiro, Cornelius Weber, Stefan Wermter
The strong relation between face and voice can aid active speaker detection systems when faces are visible, even in difficult settings, when the face of a speaker is not clear or when there are several people in the same scene.
no code implementations • 3 Aug 2021 • Aaron Eisermann, Jae Hee Lee, Cornelius Weber, Stefan Wermter
Neural networks can be powerful function approximators, which are able to model high-dimensional feature distributions from a subset of examples drawn from the target distribution.
no code implementations • 9 Jul 2021 • Muhammad Burhan Hafez, Stefan Wermter
Task inference is made by finding the nearest behavior embedding to a demonstrated behavior, which is used together with the environment state as input to a multi-task policy trained with reinforcement learning to optimize performance over tasks.
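The task-inference step described above can be sketched as a nearest-neighbour lookup in embedding space (cosine similarity here is an assumption; the paper may use a different metric):

```python
import numpy as np

def infer_task(demo_embedding, task_embeddings):
    """Return the index of the stored behavior embedding closest to the
    embedding of a demonstrated behavior (cosine similarity)."""
    demo = demo_embedding / np.linalg.norm(demo_embedding)
    bank = task_embeddings / np.linalg.norm(task_embeddings, axis=1, keepdims=True)
    return int(np.argmax(bank @ demo))
```

The retrieved task embedding would then be concatenated with the current environment state and passed to the multi-task policy.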
no code implementations • 1 Jul 2021 • Hadi Beik-Mohammadi, Matthias Kerzel, Benedikt Pleintinger, Thomas Hulin, Philipp Reisich, Annika Schmidt, Aaron Pereira, Stefan Wermter, Neal Y. Lii
Telerobotic systems must adapt to new environmental conditions and deal with high uncertainty caused by long-time delays.
1 code implementation • 18 May 2021 • Kyra Ahrens, Fares Abawi, Stefan Wermter
Continual or lifelong learning has been a long-standing challenge in machine learning to date, especially in natural language processing (NLP).
no code implementations • 12 Apr 2021 • Victor Uc-Cetina, Nicolas Navarro-Guerrero, Anabel Martin-Gonzalez, Cornelius Weber, Stefan Wermter
In recent years some researchers have explored the use of reinforcement learning (RL) algorithms as key components in the solution of various natural language processing tasks.
1 code implementation • ICCV 2021 • Yuan YAO, Ao Zhang, Xu Han, Mengdi Li, Cornelius Weber, Zhiyuan Liu, Stefan Wermter, Maosong Sun
In this work, we propose visual distant supervision, a novel paradigm of visual relation learning, which can train scene graph models without any human-labeled data.
no code implementations • 23 Mar 2021 • Henrique Siqueira, Pablo Barros, Sven Magg, Cornelius Weber, Stefan Wermter
In domains where computational resources and labeled data are limited, such as in robotics, deep networks with millions of weights might not be the optimal solution.
no code implementations • 5 Mar 2021 • Henrique Siqueira, Pablo Barros, Sven Magg, Stefan Wermter
Social robots able to continually learn facial expressions could progressively improve their emotion recognition capability towards people interacting with them.
no code implementations • 5 Mar 2021 • Henrique Siqueira, Alexander Sutherland, Pablo Barros, Matthias Kerzel, Sven Magg, Stefan Wermter
In this paper, we utilize the NICO robot's appearance and capabilities to enable it to model a coherent affective association between a perceived auditory stimulus and a temporally asynchronous emotion expression.
no code implementations • 19 Feb 2021 • Nicolas Duczek, Matthias Kerzel, Stefan Wermter
In a practical scenario, a physical exercise is performed by an expert like a physiotherapist and then used as a reference for a humanoid robot like Pepper to give feedback on a patient's execution of the same exercise.
no code implementations • 17 Feb 2021 • Huajian Fang, Guillaume Carbajal, Stefan Wermter, Timo Gerkmann
Recently, a generative variational autoencoder (VAE) has been proposed for speech enhancement to model speech statistics.
1 code implementation • 10 Feb 2021 • Julien Scholz, Cornelius Weber, Muhammad Burhan Hafez, Stefan Wermter
Using a model of the environment, reinforcement learning agents can plan their future moves and achieve superhuman performance in board games like Chess, Shogi, and Go, while remaining relatively sample-efficient.
1 code implementation • 5 Feb 2021 • Tobias Hinz, Matthew Fisher, Oliver Wang, Eli Shechtman, Stefan Wermter
Our model generates novel poses based on keypoint locations, which can be modified in real time while providing interactive feedback, allowing for intuitive reposing and animation.
no code implementations • 18 Dec 2020 • Manfred Eppe, Christian Gumbsch, Matthias Kerzel, Phuong D. H. Nguyen, Martin V. Butz, Stefan Wermter
We then relate these insights with contemporary hierarchical reinforcement learning methods, and identify the key machine intelligence approaches that realise these mechanisms.
Hierarchical Reinforcement Learning • Reinforcement Learning +1
no code implementations • 25 Nov 2020 • Phuong D. H. Nguyen, Yasmin Kim Georgie, Ezgi Kayhan, Manfred Eppe, Verena Vanessa Hafner, Stefan Wermter
Safe human-robot interactions require robots to be able to learn how to behave appropriately in spaces populated by people and thus to cope with the challenges posed by our dynamic and unstructured environment, rather than being provided a rigid set of rules for operations.
no code implementations • 13 Nov 2020 • Phuong D. H. Nguyen, Manfred Eppe, Stefan Wermter
Cognitive science suggests that the self-representation is critical for learning and problem-solving.
no code implementations • 11 Nov 2020 • Thilo Fryen, Manfred Eppe, Phuong D. H. Nguyen, Timo Gerkmann, Stefan Wermter
Reinforcement learning is a promising method to accomplish robotic control tasks.
no code implementations • 14 Oct 2020 • Nikhil Churamani, Pablo Barros, Hatice Gunes, Stefan Wermter
Collaborative interactions require social robots to adapt to the dynamics of human affective behaviour.
no code implementations • 9 Oct 2020 • Tom Weber, Stefan Wermter
However, not only can humans benefit from a robot's explanation, but the robot itself can also benefit from explanations given to it.
1 code implementation • 26 Sep 2020 • Matthias Kerzel, Fares Abawi, Manfred Eppe, Stefan Wermter
In this follow-up study, we expand the task and the model to reaching for objects in a three-dimensional space with a novel dataset based on augmented reality and a simulation environment.
1 code implementation • 24 Jun 2020 • Stefan Heinrich, Yuan YAO, Tobias Hinz, Zhiyuan Liu, Thomas Hummel, Matthias Kerzel, Cornelius Weber, Stefan Wermter
From a neuroscientific perspective, natural language is embodied, grounded in most, if not all, sensory and sensorimotor modalities, and acquired by means of crossmodal integration.
no code implementations • 22 Jun 2020 • Alexandra Lindt, Pablo Barros, Henrique Siqueira, Stefan Wermter
Recently deep generative models have achieved impressive results in the field of automated facial expression editing.
no code implementations • 20 Jun 2020 • Junpei Zhong, Angelo Cangelosi, Stefan Wermter
During the learning process of observing sensorimotor primitives, i.e., observing a set of trajectories of arm movements and their oriented object features, the pre-symbolic representation is self-organized in the parametric units.
no code implementations • 17 May 2020 • Leyuan Qu, Cornelius Weber, Stefan Wermter
Target speech separation refers to isolating target speech from a multi-speaker mixture signal by conditioning on auxiliary information about the target speaker.
Audio and Speech Processing • Sound
1 code implementation • 7 May 2020 • Frank Röder, Manfred Eppe, Phuong D. H. Nguyen, Stefan Wermter
Hierarchical abstraction and curiosity-driven exploration are two common paradigms in current reinforcement learning approaches to break down difficult problems into a sequence of simpler ones and to overcome reward sparsity.
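Curiosity-driven exploration of the kind mentioned above is often implemented as an intrinsic reward equal to the prediction error of a learned forward model; a minimal sketch (the interfaces are assumptions):

```python
import numpy as np

def intrinsic_reward(forward_model, state, action, next_state, scale=1.0):
    """Curiosity bonus: the prediction error of a learned forward model.
    Poorly predicted transitions are 'surprising' and receive a higher bonus,
    which helps in sparse-reward settings. `forward_model(s, a) -> s_hat`
    is an assumed interface."""
    predicted = forward_model(state, action)
    return scale * float(np.mean((predicted - next_state) ** 2))
```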
no code implementations • 21 Apr 2020 • Fatai Sado, Chu Kiong Loo, Wei Shiung Liew, Matthias Kerzel, Stefan Wermter
The recent stance on the explainability of AI systems has witnessed several approaches on eXplainable Artificial Intelligence (XAI); however, most of the studies have focused on data-driven XAI systems applied in computational sciences.
1 code implementation • 19 Apr 2020 • Muhammad Burhan Hafez, Cornelius Weber, Matthias Kerzel, Stefan Wermter
In this paper, we present a novel dual-system motor learning approach where a meta-controller arbitrates online between model-based and model-free decisions based on an estimate of the local reliability of the learned model.
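A minimal sketch of the arbitration idea: track a running estimate of the model's local prediction error and hand control to the model-based component only when the model appears reliable. The threshold and error statistic are illustrative, not the paper's exact criterion:

```python
import numpy as np

class ReliabilityArbiter:
    """Arbitrate online between model-based and model-free action selection
    based on a running estimate of the learned model's local prediction error."""

    def __init__(self, threshold=0.05, momentum=0.9):
        self.err, self.threshold, self.momentum = None, threshold, momentum

    def update(self, predicted_next_state, observed_next_state):
        # Exponential moving average of the one-step prediction error.
        e = float(np.mean((predicted_next_state - observed_next_state) ** 2))
        self.err = e if self.err is None else self.momentum * self.err + (1 - self.momentum) * e

    def act(self, state, model_based_action, model_free_action):
        reliable = self.err is not None and self.err < self.threshold
        return model_based_action(state) if reliable else model_free_action(state)
```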
3 code implementations • 25 Mar 2020 • Tobias Hinz, Matthew Fisher, Oliver Wang, Stefan Wermter
Recently there has been an interest in the potential of learning generative models from a single image, as opposed to from a large dataset.
1 code implementation • 17 Jan 2020 • Henrique Siqueira, Sven Magg, Stefan Wermter
Experiments on large-scale datasets suggest that ESRs reduce the remaining residual generalization error on the AffectNet and FER+ datasets, reach human-level performance, and outperform state-of-the-art methods on facial expression recognition in the wild using emotion and affect concepts.
Ranked #9 on Facial Expression Recognition (FER) on FER+ (using extra training data)
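The shared-representation ensemble (ESR) idea from the entry above can be sketched as one convolutional base feeding several lightweight heads whose logits are averaged; the layer sizes below are illustrative, not the published architecture:

```python
import torch
import torch.nn as nn

class ESR(nn.Module):
    """Shared convolutional base + an ensemble of small classification heads."""
    def __init__(self, n_classes=8, n_heads=9):
        super().__init__()
        self.base = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.heads = nn.ModuleList(
            [nn.Sequential(nn.Linear(64 * 16, 128), nn.ReLU(),
                           nn.Linear(128, n_classes)) for _ in range(n_heads)])

    def forward(self, x):
        shared = self.base(x)                                  # computed once
        logits = torch.stack([head(shared) for head in self.heads])
        return logits.mean(dim=0), logits                      # ensemble + per-head
```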
1 code implementation • 13 Dec 2019 • Doreen Jirak, David Biertimpel, Matthias Kerzel, Stefan Wermter
The implementation of an intuitive gesture scenario is still challenging because both the pointing intention and the corresponding object have to be correctly recognized in real-time.
2 code implementations • LREC 2020 • Chandrakant Bothe, Cornelius Weber, Sven Magg, Stefan Wermter
These neural models annotate the emotion corpora with dialogue act labels, and an ensemble annotator extracts the final dialogue act label.
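A minimal stand-in for the ensemble annotator: combine the dialogue act labels proposed by the individual neural annotators by majority vote (the actual ensemble scheme may be more involved):

```python
from collections import Counter

def ensemble_label(per_model_labels):
    """Combine dialogue act labels proposed by several annotators into a
    final label by majority vote; ties fall back to model order."""
    counts = Counter(per_model_labels)
    best_count = counts.most_common(1)[0][1]
    for label in per_model_labels:        # preserve model priority on ties
        if counts[label] == best_count:
            return label
```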
2 code implementations • 29 Oct 2019 • Tobias Hinz, Stefan Heinrich, Stefan Wermter
To address these challenges we introduce a new model that explicitly models individual objects within an image and a new evaluation metric called Semantic Object Accuracy (SOA) that specifically evaluates images given an image caption.
Ranked #48 on Text-to-Image Generation on COCO
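An SOA-style evaluation can be sketched as follows: run a pre-trained object detector on each generated image and check whether the objects mentioned in the caption are found. `detect_objects` and `mentioned_objects` are hypothetical interfaces, and the published metric involves additional bookkeeping:

```python
def semantic_object_accuracy(captions, generated_images,
                             detect_objects, mentioned_objects):
    """Rough sketch of an SOA-style check. `detect_objects(image)` is a
    hypothetical detector returning a set of class names; `mentioned_objects`
    extracts the object classes named in a caption."""
    hits, total = 0, 0
    for caption, image in zip(captions, generated_images):
        wanted = mentioned_objects(caption)     # e.g. {"dog", "frisbee"}
        found = detect_objects(image)
        hits += len(wanted & found)
        total += len(wanted)
    return hits / max(total, 1)
```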
no code implementations • 10 Oct 2019 • Muhammad Burhan Hafez, Cornelius Weber, Matthias Kerzel, Stefan Wermter
The learned models are used to generate imagined experiences, augmenting the training set of real experiences.
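Augmenting real experience with imagined rollouts is the classic Dyna recipe; a minimal sketch with assumed `model` and `policy` interfaces:

```python
def augment_with_imagination(real_batch, model, policy, k_imagined=5):
    """Dyna-style augmentation: for each real transition, roll the learned
    model forward from its state to create additional imagined transitions.
    `model(s, a) -> (s', r)` and `policy(s) -> a` are assumed interfaces."""
    imagined = []
    for (s, a, r, s_next) in real_batch:
        state = s
        for _ in range(k_imagined):
            action = policy(state)
            next_state, reward = model(state, action)
            imagined.append((state, action, reward, next_state))
            state = next_state
    return list(real_batch) + imagined
```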
no code implementations • 5 Sep 2019 • Di Fu, Cornelius Weber, Guochun Yang, Matthias Kerzel, Weizhi Nan, Pablo Barros, Haiyan Wu, Xun Liu, Stefan Wermter
Selective attention plays an essential role in information acquisition and utilization from the environment.
1 code implementation • 2 Sep 2019 • Sayantan Auddy, Sven Magg, Stefan Wermter
Artificial central pattern generators (CPGs) can produce synchronized joint movements and have been used in the past for bipedal locomotion.
no code implementations • 30 Aug 2019 • Pablo Barros, Nikhil Churamani, Angelica Lim, Stefan Wermter
In this paper, we propose a novel dataset composed of dyadic interactions designed, collected and annotated with a focus on measuring the affective impact that eight different stories have on the listener.
1 code implementation • 21 Aug 2019 • Marcus Soll, Tobias Hinz, Sven Magg, Stefan Wermter
Adversarial examples are artificially modified input samples which lead to misclassifications, while not being detectable by humans.
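As a concrete illustration of such inputs, the Fast Gradient Sign Method is one standard way to construct adversarial examples (shown below in PyTorch); it is not necessarily the attack studied in the paper:

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, label, epsilon=0.03):
    """Fast Gradient Sign Method: perturb x in the direction that increases
    the classification loss, keeping the change imperceptibly small."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```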
1 code implementation • 2 Aug 2019 • Pablo Barros, Stefan Wermter, Alessandra Sciutti
While interacting with another person, our reactions and behavior are much affected by the emotional changes within the temporal context of the interaction.
no code implementations • SEMEVAL 2019 • Chandrakant Bothe, Stefan Wermter
When reading "I don't want to talk to you any more", we might interpret this as either an angry or a sad emotion in the absence of context.
no code implementations • 23 May 2019 • Manfred Eppe, Phuong D. H. Nguyen, Stefan Wermter
In this article, we build on these novel methods to facilitate the integration of action planning with reinforcement learning by exploiting the reward-sparsity as a bridge between the high-level and low-level state- and control spaces.
no code implementations • 5 May 2019 • Muhammad Burhan Hafez, Cornelius Weber, Matthias Kerzel, Stefan Wermter
Recent success in deep reinforcement learning for continuous control has been dominated by model-free approaches which, unlike model-based approaches, do not suffer from representational limitations in making assumptions about the world dynamics and model errors inevitable in complex domains.
no code implementations • 23 Apr 2019 • Pablo Barros, German I. Parisi, Stefan Wermter
Recent models of emotion recognition strongly rely on supervised deep learning solutions for the distinction of general emotion expressions.
no code implementations • 15 Apr 2019 • Francisco Cruz, Sven Magg, Yukie Nagai, Stefan Wermter
Interactive reinforcement learning has become an important apprenticeship approach to speed up convergence in classic reinforcement learning problems.
1 code implementation • EMNLP 2018 • Egor Lakomkin, Sven Magg, Cornelius Weber, Stefan Wermter
In this paper, we describe KT-Speech-Crawler: an approach for automatic dataset construction for speech recognition by crawling YouTube videos.
no code implementations • 28 Feb 2019 • Egor Lakomkin, Mohammad Ali Zamani, Cornelius Weber, Sven Magg, Stefan Wermter
We argue that using ground-truth transcriptions during training and evaluation phases leads to a significant discrepancy in performance compared to real-world conditions, as the spoken text has to be recognized on the fly and can contain speech recognition mistakes.
Automatic Speech Recognition (ASR) +4
1 code implementation • ICLR 2019 • Tobias Hinz, Stefan Heinrich, Stefan Wermter
Our experiments show that through the use of the object pathway we can control object locations within images and can model complex scenes with multiple objects at various locations.
Ranked #60 on Text-to-Image Generation on COCO
no code implementations • 6 Nov 2018 • German I. Parisi, Xu Ji, Stefan Wermter
Lifelong learning capabilities are crucial for artificial autonomous agents operating on real-world data, which is typically non-stationary and temporally correlated.
no code implementations • 26 Oct 2018 • Muhammad Burhan Hafez, Cornelius Weber, Matthias Kerzel, Stefan Wermter
In this paper, we present a new intrinsically motivated actor-critic algorithm for learning continuous motor skills directly from raw visual input.
no code implementations • 15 Oct 2018 • Di Fu, Pablo Barros, German I. Parisi, Haiyan Wu, Sven Magg, Xun Liu, Stefan Wermter
The efficient integration of multisensory observations is a key property of the brain that yields the robust interaction with the environment.
no code implementations • 27 Sep 2018 • Pablo Barros, German I. Parisi, Manfred Eppe, Stefan Wermter
The model adapts concepts of expectation learning to enhance the unisensory representation based on the learned bindings.
no code implementations • 19 Sep 2018 • Chandrakant Bothe, Fernando Garcia, Arturo Cruz Maya, Amit Kumar Pandey, Stefan Wermter
Service robots need to show appropriate social behaviour in order to be deployed in social environments such as healthcare, education, retail, etc.
no code implementations • 17 Sep 2018 • Manfred Eppe, Sven Magg, Stefan Wermter
Deep reinforcement learning has recently gained a focus on problems where policy or value functions are independent of goals.
2 code implementations • 1 Aug 2018 • Pablo Barros, Emilia Barakova, Stefan Wermter
We evaluate the performance of the proposed model with different challenging corpora and compare it with state-of-the-art models for external emotion appraisal.
no code implementations • 26 Jul 2018 • Francisco Cruz, German I. Parisi, Stefan Wermter
Additionally, we modulate the influence of sensory-driven feedback in the IRL task using goal-oriented knowledge in terms of contextual affordances.
no code implementations • 19 Jul 2018 • Tobias Hinz, Nicolás Navarro-Guerrero, Sven Magg, Stefan Wermter
This is independent of the underlying optimization procedure, making the approach promising for many existing hyperparameter optimization algorithms.
no code implementations • 13 Jul 2018 • German I. Parisi, Jonathan Tong, Pablo Barros, Brigitte Röder, Stefan Wermter
In the associative layer, congruent audiovisual representations are obtained via the experience-driven development of feature-based associations.
no code implementations • 3 Jul 2018 • Manfred Eppe, Matthias Kerzel, Erik Strahl, Stefan Wermter
We present a novel approach for interactive auditory object analysis with a humanoid robot.
1 code implementation • 29 Jun 2018 • Chandrakant Bothe, Sven Magg, Cornelius Weber, Stefan Wermter
Spoken language understanding is one of the key factors in a dialogue system, and a context in a conversation plays an important role to understand the current utterance.
1 code implementation • 28 May 2018 • German I. Parisi, Jun Tani, Cornelius Weber, Stefan Wermter
Both growing networks can expand in response to novel sensory experience: the episodic memory learns fine-grained spatiotemporal representations of object instances in an unsupervised fashion while the semantic memory uses task-relevant signals to regulate structural plasticity levels and develop more compact representations from episodic experience.
1 code implementation • 16 May 2018 • Chandrakant Bothe, Sven Magg, Cornelius Weber, Stefan Wermter
Recent approaches for dialogue act recognition have shown that context from preceding utterances is important to classify the subsequent one.
Ranked #9 on Dialogue Act Classification on Switchboard corpus
1 code implementation • LREC 2018 • Chandrakant Bothe, Cornelius Weber, Sven Magg, Stefan Wermter
Dialogue act recognition is an important part of natural language understanding.
Ranked #10 on Dialogue Act Classification on Switchboard corpus
no code implementations • 6 Apr 2018 • Egor Lakomkin, Mohammad Ali Zamani, Cornelius Weber, Sven Magg, Stefan Wermter
Speech emotion recognition (SER) is an important aspect of effective human-robot collaboration and received a lot of attention from the research community.
no code implementations • 3 Apr 2018 • Egor Lakomkin, Mohammad Ali Zamani, Cornelius Weber, Sven Magg, Stefan Wermter
Acoustically expressed emotions can make communication with a robot more efficient.
no code implementations • IJCNLP 2017 • Egor Lakomkin, Cornelius Weber, Sven Magg, Stefan Wermter
Acoustic emotion recognition aims to categorize the affective state of the speaker and is still a difficult task for machine learning models.
no code implementations • EACL 2017 • Egor Lakomkin, Cornelius Weber, Stefan Wermter
In this work, we tackle a problem of speech emotion classification.
no code implementations • 30 Mar 2018 • Egor Lakomkin, Chandrakant Bothe, Stefan Wermter
Given the text of a tweet and its emotion category (anger, joy, fear, and sadness), the participants were asked to build a system that assigns emotion intensity values.
no code implementations • 28 Mar 2018 • Tobias Hinz, Stefan Wermter
We train an encoder to encode images into these representations and use a small amount of labeled data to specify what kind of information should be encoded in the disentangled part.
no code implementations • 14 Mar 2018 • Pablo Barros, Nikhil Churamani, Egor Lakomkin, Henrique Siqueira, Alexander Sutherland, Stefan Wermter
This paper is the basis paper for the accepted IJCNN challenge One-Minute Gradual-Emotion Recognition (OMG-Emotion) by which we hope to foster long-emotion classification using neural models for the benefit of the IJCNN community.
Human-Computer Interaction
2 code implementations • 7 Mar 2018 • Tobias Hinz, Stefan Wermter
Combining Generative Adversarial Networks (GANs) with encoders that learn to encode data points has shown promising results in learning data representations in an unsupervised way.
Ranked #4 on Unsupervised Image Classification on MNIST
Representation Learning • Unsupervised Image Classification +1
no code implementations • 21 Feb 2018 • German I. Parisi, Ronald Kemker, Jose L. Part, Christopher Kanan, Stefan Wermter
Humans and animals have the ability to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan.
no code implementations • 19 Feb 2018 • Jonathan Tong, German I. Parisi, Stefan Wermter, Brigitte Röder
Furthermore, we propose that these unisensory and multisensory neurons play dual roles in i) encoding spatial location as separate or integrated estimates and ii) accumulating evidence for the independence or relatedness of multisensory stimuli.
no code implementations • 23 Jan 2018 • Pablo Barros, German I. Parisi, Di Fu, Xun Liu, Stefan Wermter
The human brain is able to learn, generalize, and predict crossmodal stimuli.
no code implementations • 22 Dec 2017 • Luiza Mici, German I. Parisi, Stefan Wermter
During visuomotor tasks, robots must compensate for temporal delays inherent in their sensorimotor processing systems.
no code implementations • 5 Oct 2017 • Luiza Mici, German I. Parisi, Stefan Wermter
We show that our unsupervised model shows competitive classification results on the benchmark dataset with respect to strictly supervised approaches.
no code implementations • WS 2017 • Egor Lakomkin, Chandrakant Bothe, Stefan Wermter
Given the text of a tweet and its emotion category (anger, joy, fear, and sadness), the participants were asked to build a system that assigns emotion intensity values.
no code implementations • 7 Jun 2017 • Marian Tietz, Tayfun Alpay, Johannes Twiefel, Stefan Wermter
Ladder networks are a notable new concept in the field of semi-supervised learning by showing state-of-the-art results in image recognition tasks while being compatible with many existing neural architectures.
no code implementations • 24 Mar 2017 • Stefan Heinrich, Stefan Wermter
For the complex human brain that enables us to communicate in natural language, we gathered good understandings of principles underlying language acquisition and processing, knowledge about socio-cultural conditions, and insights about activity patterns in the brain.