Dimitar Shterionov, Mirella De Sisto, Vincent Vandeghinste, Aoife Brady, Mathieu De Coster, Lorraine Leeson, Josep Blat, Frankie Picron, Marcello Paolo Scipioni, Aditya Parikh, Louis ten Bosch, John O’Flaherty, Joni Dambre, Jorn Rijckaert
The SignON project (www.signon-project.eu) focuses on the research and development of a Sign Language (SL) translation mobile application and an open communications framework.
In this work we propose a methodology to accurately evaluate and compare the performance of efficient neural network building blocks for computer vision in a hardware-aware manner.
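As a rough illustration of what hardware-aware evaluation can mean in practice, the sketch below times a depthwise-separable convolution block on the device it runs on, rather than counting FLOPs. The block shape, iteration counts, and the use of PyTorch are illustrative assumptions, not the paper's actual methodology.

```python
import time
import torch

block = torch.nn.Sequential(                           # a candidate building block:
    torch.nn.Conv2d(64, 64, 3, padding=1, groups=64),  # depthwise 3x3
    torch.nn.Conv2d(64, 128, 1))                       # pointwise 1x1
x = torch.randn(1, 64, 56, 56)

with torch.no_grad():
    for _ in range(10):                                # warm-up runs
        block(x)
    t0 = time.perf_counter()
    for _ in range(100):
        block(x)
print(f"mean latency: {(time.perf_counter() - t0) / 100 * 1e3:.2f} ms")
```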
Automatic translation from signed to spoken languages is an interdisciplinary research domain, lying at the intersection of computer vision, machine translation, and linguistics.
Our results show that pretrained language models can be used to improve sign language translation performance and that the self-attention patterns in BERT transfer zero-shot to the encoder and decoder of sign language translation models.
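To make this concrete, here is a hedged sketch of how pretrained self-attention weights could be transferred: it copies the query/key/value projections of a BERT layer into a fresh PyTorch encoder layer. The model names and the surrounding architecture are assumptions for illustration; the paper's exact setup may differ.

```python
import torch
from transformers import BertModel

bert = BertModel.from_pretrained("bert-base-uncased")
encoder_layer = torch.nn.TransformerEncoderLayer(d_model=768, nhead=12,
                                                 batch_first=True)

# Copy the query/key/value projections of BERT's first layer into the
# fused in_proj parameters of PyTorch's MultiheadAttention.
src = bert.encoder.layer[0].attention.self
with torch.no_grad():
    encoder_layer.self_attn.in_proj_weight.copy_(
        torch.cat([src.query.weight, src.key.weight, src.value.weight], dim=0))
    encoder_layer.self_attn.in_proj_bias.copy_(
        torch.cat([src.query.bias, src.key.bias, src.value.bias], dim=0))
```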
However, due to the limited amount of labeled data that is commonly available for training automatic sign (language) recognition, the VTN cannot reach its full potential in this domain.
Using this framework, we illustrate the potential of Hebbian-learned feature extractors for image classification.
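For readers unfamiliar with Hebbian learning, here is a minimal NumPy sketch of one classic local update, Oja's rule; the toy data, unit count, and learning rate are illustrative assumptions rather than the framework's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 64))       # toy inputs: (n_samples, n_features)
W = 0.01 * rng.standard_normal((16, 64))  # 16 Hebbian units

lr = 1e-3
for x in X:
    y = W @ x                             # unit activations
    # Oja's rule: Hebbian term y*x minus a decay that keeps weights bounded
    W += lr * (np.outer(y, x) - (y ** 2)[:, None] * W)
```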
One recent approach to meta reinforcement learning (meta-RL) is to integrate models for task inference with models for control.
Sign language recognition can be used to speed up the annotation process of these corpora, aiding research into sign languages and sign language recognition.
Using the FORCE learning paradigm, we train a reservoir of spiking neuron populations to act as a central pattern generator.
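The sketch below shows the core of FORCE learning: a recursive-least-squares update of a linear readout on a recurrent reservoir, driven toward a periodic target. The paper's reservoir consists of spiking neuron populations; the rate-based model, network size, and sine target here are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt = 300, 1e-3
J = rng.standard_normal((N, N)) * (1.5 / np.sqrt(N))  # chaotic recurrent weights
w_fb = rng.uniform(-1.0, 1.0, N)                      # output feedback weights
w = np.zeros(N)                                       # readout, trained online
P = np.eye(N)                                         # RLS inverse correlation
x = 0.5 * rng.standard_normal(N)

for step in range(10000):
    target = np.sin(2 * np.pi * step * dt)            # 1 Hz sine, a CPG-like rhythm
    r = np.tanh(x)                                    # firing rates
    z = w @ r                                         # network output
    x += 10.0 * dt * (-x + J @ r + w_fb * z)          # leaky rate dynamics
    # Recursive least squares update of the readout: the core of FORCE
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    w -= (z - target) * k
```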
Optical hardware for neuromorphic computing has recently gained popularity due to its efficient high-speed data processing and low power consumption.
In this paper, we investigate which character-level patterns neural networks learn and whether those patterns coincide with manually defined word segmentations and annotations.
We present a novel model architecture which leverages deep learning tools to perform exact Bayesian inference on sets of high dimensional, complex observations.
A DReLU, which has an unbounded positive and negative image, can be used as a drop-in replacement for the tanh activation function in the recurrent step of Quasi-Recurrent Neural Networks (QRNNs; Bradbury et al., 2017).
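Consistent with that description, a dual rectified linear unit can be sketched as the difference of two rectifications, which makes its image unbounded in both directions while still allowing exact zeros. The function name and PyTorch framing below are illustrative.

```python
import torch

def drelu(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Dual Rectified Linear Unit: max(0, a) - max(0, b).

    Its image is unbounded on both the positive and the negative side,
    unlike tanh, which saturates in [-1, 1].
    """
    return torch.relu(a) - torch.relu(b)

# In a QRNN-style recurrent step, the candidate update would then use two
# convolutional pre-activations (a, b) instead of a single tanh input.
```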
We consider the problem of face swapping in images, where an input identity is transformed into a target identity while preserving pose, facial expression, and lighting.
Currently, robots are often treated as a black box in this optimization process, which is why derivative-free optimization methods such as evolutionary algorithms or reinforcement learning are omnipresent.
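As a concrete example of such derivative-free optimization, the sketch below runs a simple evolution-strategy loop that treats the controller as a black box. The toy quadratic objective stands in for an actual robot rollout and is purely an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

def rollout_return(params: np.ndarray) -> float:
    # Stand-in for evaluating a controller on the robot or in simulation;
    # this toy objective is an illustrative assumption.
    return -np.sum((params - 1.0) ** 2)

theta = np.zeros(8)                       # controller parameters
sigma, popsize, lr = 0.1, 32, 0.05
for _ in range(200):
    eps = rng.standard_normal((popsize, theta.size))
    returns = np.array([rollout_return(theta + sigma * e) for e in eps])
    adv = (returns - returns.mean()) / (returns.std() + 1e-8)
    theta += lr / (popsize * sigma) * eps.T @ adv   # ES gradient estimate
```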
Recent studies have demonstrated the power of recurrent neural networks for machine translation, image captioning and speech recognition.
Unfortunately, even this approach does not scale well enough to keep up with the increasing availability of galaxy images.
We perform physical experiments that demonstrate that the obtained input encodings work well in reality, and we show that optimized systems perform significantly better than the common Reservoir Computing approach.
Machine learning algorithms, and neural networks in particular, are arguably undergoing a revolution in terms of performance.
In this work, we explore the use of memristor networks for analog approximate computation, based on a machine learning framework called reservoir computing.
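For orientation, here is a minimal software analogue of the reservoir computing framework: an echo state network with a ridge-regression readout. In the paper, a memristor network plays the role of the fixed random reservoir; the sizes and the delayed-recall task below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 200, 1000
W_in = rng.uniform(-0.5, 0.5, (N, 1))                 # input weights
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))       # spectral radius < 1

u = rng.uniform(-1.0, 1.0, (T, 1))                    # toy input signal
target = np.roll(u[:, 0], 5)                          # task: 5-step delayed recall

states = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):                                    # run the fixed reservoir
    x = np.tanh(W @ x + W_in @ u[t])
    states[t] = x

# Only the linear readout is trained, here with ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N),
                        states.T @ target)
```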