5 code implementations • 20 Sep 2018 • Gino Brunner, Yuyi Wang, Roger Wattenhofer, Sumu Zhao
In this paper we apply such a model to symbolic music and show the feasibility of our approach for music genre transfer.
1 code implementation • 21 Sep 2018 • Gino Brunner, Bence Szebedy, Simon Tanner, Roger Wattenhofer
The drop-off location could, e.g., be on a balcony or porch, and simply needs to be indicated by a visual marker on the wall or window.
Robotics • Systems and Control
1 code implementation • 21 Nov 2017 • Gino Brunner, Yuyi Wang, Roger Wattenhofer, Jonas Wiesendanger
First, a chord LSTM predicts a chord progression based on a chord embedding.
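The first stage of this idea can be sketched in miniature: a chord is looked up in an embedding table, the embedded sequence is run through an LSTM cell, and a softmax over the chord vocabulary gives a next-chord distribution. The sketch below is illustrative and untrained, with a scalar hidden state and made-up names; it is not the paper's code.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

class TinyChordLSTM:
    """A scalar-state LSTM over chord embeddings (illustrative, untrained)."""

    def __init__(self, vocab, embed_dim=4, seed=0):
        rng = random.Random(seed)
        u = lambda: rng.uniform(-0.5, 0.5)
        self.vocab = vocab
        self.index = {c: i for i, c in enumerate(vocab)}
        # Chord embedding table (random here; learned in a real model).
        self.embed = [[u() for _ in range(embed_dim)] for _ in vocab]
        # Per gate (f, i, g, o): input weights, recurrent weight, bias.
        self.gates = {name: ([u() for _ in range(embed_dim)], u(), u())
                      for name in "figo"}
        # Per-chord readout (weight, bias) for the output distribution.
        self.readout = [(u(), u()) for _ in vocab]

    def _gate(self, name, x, h):
        w, u_rec, b = self.gates[name]
        z = sum(wi * xi for wi, xi in zip(w, x)) + u_rec * h + b
        return math.tanh(z) if name == "g" else sigmoid(z)

    def next_chord_distribution(self, progression):
        h = c = 0.0
        for chord in progression:
            x = self.embed[self.index[chord]]
            f, i, g, o = (self._gate(n, x, h) for n in "figo")
            c = f * c + i * g        # cell state update
            h = o * math.tanh(c)     # hidden state update
        logits = [w * h + b for w, b in self.readout]
        return dict(zip(self.vocab, softmax(logits)))

model = TinyChordLSTM(["C", "F", "G", "Am"])
dist = model.next_chord_distribution(["C", "F", "G"])
print(max(dist, key=dist.get))   # most likely next chord under these weights
```

Sampling from this distribution step by step, rather than taking the argmax, is what turns such a model into a progression generator.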
1 code implementation • 20 Nov 2017 • Gino Brunner, Oliver Richter, Yuyi Wang, Roger Wattenhofer
Localization and navigation are also important problems in domains such as robotics, and have recently become a focus of the deep reinforcement learning community.
1 code implementation • 5 Jul 2019 • Timo Bram, Gino Brunner, Oliver Richter, Roger Wattenhofer
Sharing knowledge between tasks is vital for efficient learning in a multi-task setting.
1 code implementation • 30 Sep 2018 • Gino Brunner, Manuel Fritsche, Oliver Richter, Roger Wattenhofer
Learning in sparse reward settings remains a challenge in Reinforcement Learning, which is often addressed by using intrinsic rewards.
1 code implementation • 12 Jan 2021 • Sumu Zhao, Damian Pascual, Gino Brunner, Roger Wattenhofer
In this work we provide new insights into the transformer architecture, and in particular, its best-known variant, BERT.
no code implementations • 18 Jan 2018 • Gino Brunner, Yuyi Wang, Roger Wattenhofer, Michael Weigelt
We train multi-task autoencoders on linguistic tasks and analyze the learned hidden sentence representations.
no code implementations • 20 Sep 2018 • Gino Brunner, Andres Konrad, Yuyi Wang, Roger Wattenhofer
The interpolations smoothly change pitches, dynamics and instrumentation to create a harmonic bridge between two music pieces.
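At its core, such an interpolation decodes a sequence of convex combinations of two latent codes. A minimal sketch, assuming a learned encoder/decoder pair exists (the 4-dimensional codes here are made up for illustration):

```python
def lerp(z_a, z_b, t):
    """Linear interpolation between latent vectors: (1 - t)*z_a + t*z_b."""
    return [(1.0 - t) * a + t * b for a, b in zip(z_a, z_b)]

# Toy latent codes of two pieces; real codes would come from an encoder
# trained on symbolic music.
z_start = [0.0, 1.0, 2.0, -1.0]
z_end   = [4.0, 1.0, 0.0, 1.0]

# Decoding each intermediate code yields one step of the harmonic bridge:
# pitches, dynamics and instrumentation change a little at each step.
steps = [lerp(z_start, z_end, k / 4) for k in range(5)]
print(steps[0])  # == z_start
print(steps[2])  # midpoint: [2.0, 1.0, 1.0, 0.0]
print(steps[4])  # == z_end
```

The smoothness of the resulting audio transition depends entirely on how smooth the decoder is along the straight line between the two codes.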
no code implementations • ICLR 2020 • Gino Brunner, Yang Liu, Damián Pascual, Oliver Richter, Massimiliano Ciaramita, Roger Wattenhofer
We show that, for sequences longer than the attention head dimension, attention weights are not identifiable.
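The claim can be illustrated with toy numbers: when the sequence length n exceeds the head dimension d, the value vectors have a non-trivial null space, so distinct attention distributions can yield the same head output. A pure-Python sketch with n = 3, d = 1 (numbers chosen for illustration, not taken from the paper):

```python
# n = 3 value "vectors" of head dimension d = 1.
values = [1.0, 2.0, 3.0]

def attend(weights, values):
    """Attention head output: weighted sum of the value vectors."""
    return sum(w * v for w, v in zip(weights, values))

a1 = [0.2, 0.3, 0.5]
# A perturbation orthogonal to the values (1*0.1 + 2*(-0.2) + 3*0.1 = 0)
# that also sums to zero, so a2 remains a valid non-negative, normalized
# attention distribution.
delta = [0.1, -0.2, 0.1]
a2 = [w + d for w, d in zip(a1, delta)]

print(sum(a2))                                 # still 1 -> valid distribution
print(attend(a1, values), attend(a2, values))  # equal up to float rounding
```

Because an observer of the output cannot distinguish a1 from a2, the attention weights are not identifiable from the head's output alone.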
no code implementations • EACL 2021 • Damian Pascual, Gino Brunner, Roger Wattenhofer
In this way, we propose a distinction between local patterns revealed by attention and global patterns that refer back to the input, and analyze BERT from both angles.
no code implementations • 25 Aug 2020 • Lukas Faber, Sandro Luck, Damian Pascual, Andreas Roth, Gino Brunner, Roger Wattenhofer
The automatic generation of medleys, i.e., musical pieces formed by different songs concatenated via smooth transitions, is not well studied in the current literature.