Search Results for author: Arnu Pretorius

Found 14 papers, 5 papers with code

Causal Multi-Agent Reinforcement Learning: Review and Open Problems

no code implementations · 12 Nov 2021 · St John Grimbly, Jonathan Shock, Arnu Pretorius

This paper introduces the reader to the field of multi-agent reinforcement learning (MARL) and its intersection with methods from the study of causality.

Multi-agent Reinforcement Learning

On pseudo-absence generation and machine learning for locust breeding ground prediction in Africa

no code implementations · 6 Nov 2021 · Ibrahim Salihu Yusuf, Kale-ab Tessera, Thomas Tumiel, Sella Nevo, Arnu Pretorius

In this paper, we compare the common random sampling approach for generating pseudo-absences to more advanced methods, such as environmental profiling and optimal background extent limitation, specifically for predicting desert locust breeding grounds in Africa.
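
As context for the comparison above, here is a minimal sketch of the two ends of that spectrum: uniform random background sampling and a crude environmental-profiling filter. All coordinates, feature values, and thresholds are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of two pseudo-absence strategies (illustrative values only).
import numpy as np

rng = np.random.default_rng(0)

# Environmental values at 50 hypothetical presence sites, e.g. a soil-moisture index.
presence_env = rng.normal(0.8, 0.1, size=50)

def random_pseudo_absences(n, lon_range=(10.0, 40.0), lat_range=(-5.0, 20.0)):
    """Baseline: sample background points uniformly over the study extent."""
    lons = rng.uniform(*lon_range, size=n)
    lats = rng.uniform(*lat_range, size=n)
    return np.column_stack([lons, lats])

def profiled_pseudo_absences(n, env_at, threshold=0.3, batch=500):
    """Environmental profiling: keep candidates whose environmental value
    is dissimilar to the mean profile of the presence sites."""
    profile = presence_env.mean()
    kept = []
    while len(kept) < n:
        cands = random_pseudo_absences(batch)
        env = env_at(cands)
        kept.extend(cands[np.abs(env - profile) > threshold])
    return np.array(kept[:n])

# Stand-in for an environmental raster lookup (random values here).
fake_env = lambda pts: rng.normal(0.5, 0.3, size=len(pts))
print(profiled_pseudo_absences(100, fake_env).shape)  # (100, 2)
```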

Towards Learning to Speak and Hear Through Multi-Agent Communication over a Continuous Acoustic Channel

no code implementations · 4 Nov 2021 · Kevin Eloff, Arnu Pretorius, Okko Räsänen, Herman A. Engelbrecht, Herman Kamper

The Speaker is equipped with a vocoder that maps symbols to a continuous waveform; this waveform is passed over a lossy continuous channel, and the Listener needs to map the received continuous signal back to the concept.

Multi-agent Reinforcement Learning · Q-Learning
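
The pipeline the abstract describes can be pictured with a toy stand-in: a fixed sinusoidal "vocoder", an additive-noise channel, and a correlation-based Listener. In the paper these components are learned agents; everything below (frequencies, sample rate, SNR) is an illustrative assumption.

```python
# Toy Speaker -> lossy channel -> Listener pipeline (illustrative stand-ins).
import numpy as np

rng = np.random.default_rng(0)
SR, DUR = 8000, 0.1                      # sample rate (Hz), utterance length (s)
t = np.linspace(0, DUR, int(SR * DUR), endpoint=False)
FREQS = np.array([400.0, 600.0, 800.0])  # one carrier frequency per symbol

def speaker(symbol):
    """Vocoder stand-in: map a discrete symbol to a continuous waveform."""
    return np.sin(2 * np.pi * FREQS[symbol] * t)

def channel(wave, snr_db=5.0):
    """Lossy continuous channel: additive Gaussian noise at a given SNR."""
    noise_var = wave.var() / (10 ** (snr_db / 10))
    return wave + rng.normal(0.0, np.sqrt(noise_var), size=wave.shape)

def listener(wave):
    """Decode by correlating the received signal with each candidate carrier."""
    return int(np.argmax([abs(wave @ speaker(s)) for s in range(len(FREQS))]))

print(listener(channel(speaker(2))))  # 2, with high probability
```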

Robust and Scalable SDE Learning: A Functional Perspective

no code implementations · 11 Oct 2021 · Scott Cameron, Tyron Cameron, Arnu Pretorius, Stephen Roberts

Stochastic differential equations provide a rich class of flexible generative models, capable of describing a wide range of spatio-temporal processes.
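
For readers unfamiliar with the setup, an SDE dX_t = f(X_t, t) dt + g(X_t, t) dW_t is typically simulated with the Euler-Maruyama scheme; a minimal sketch follows. The Ornstein-Uhlenbeck drift and diffusion are illustrative choices, not the paper's models.

```python
# Euler-Maruyama simulation of dX = f(X, t) dt + g(X, t) dW.
import numpy as np

rng = np.random.default_rng(0)

def euler_maruyama(f, g, x0, t0, t1, n_steps):
    """Simulate one sample path of the SDE on [t0, t1]."""
    dt = (t1 - t0) / n_steps
    xs, t = [np.asarray(x0, dtype=float)], t0
    for _ in range(n_steps):
        x = xs[-1]
        dw = rng.normal(0.0, np.sqrt(dt), size=x.shape)  # Brownian increment
        xs.append(x + f(x, t) * dt + g(x, t) * dw)
        t += dt
    return np.stack(xs)

# Ornstein-Uhlenbeck process: mean-reverting drift, constant diffusion.
path = euler_maruyama(f=lambda x, t: -2.0 * x,
                      g=lambda x, t: 0.5,
                      x0=np.ones(3), t0=0.0, t1=1.0, n_steps=1000)
print(path.shape)  # (1001, 3)
```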

On Optimal Transformer Depth for Low-Resource Language Translation

1 code implementation · 9 Apr 2020 · Elan van Biljon, Arnu Pretorius, Julia Kreutzer

Therefore, by showing that transformer models perform well (and often best) at low-to-moderate depth, we hope to convince fellow researchers to devote fewer computational resources, and less time, to exploring overly large models during the development of these systems.

Machine Translation · Translation
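
To give a feel for how quickly depth inflates model size, here is a quick sketch counting parameters of standard PyTorch encoder-decoder transformers at different depths; the dimensions are illustrative, not the paper's exact configurations.

```python
# Parameter counts for encoder-decoder transformers at different depths.
import torch.nn as nn

def n_params(depth, d_model=256, nhead=8, ff=1024):
    model = nn.Transformer(d_model=d_model, nhead=nhead,
                           num_encoder_layers=depth,
                           num_decoder_layers=depth,
                           dim_feedforward=ff)
    return sum(p.numel() for p in model.parameters())

for depth in (2, 4, 6):
    print(f"{depth} layers: {n_params(depth) / 1e6:.1f}M parameters")
```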

Stabilising priors for robust Bayesian deep learning

no code implementations · 23 Oct 2019 · Felix McGregor, Arnu Pretorius, Johan du Preez, Steve Kroon

Bayesian neural networks (BNNs) have developed into useful tools for probabilistic modelling, thanks to recent advances in variational inference that make large-scale BNNs feasible.

Variational Inference
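
As background, the kind of variational inference the abstract refers to is usually implemented with mean-field Gaussian posteriors and the reparameterisation trick. A generic sketch of such a layer is below; this is not the paper's stabilising prior, just the standard setup it builds on.

```python
# Mean-field variational linear layer with the reparameterisation trick.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalLinear(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(d_out, d_in))
        self.w_rho = nn.Parameter(torch.full((d_out, d_in), -5.0))

    def forward(self, x):
        sigma = F.softplus(self.w_rho)                   # keep posterior scale positive
        w = self.w_mu + sigma * torch.randn_like(sigma)  # sampled weights
        return x @ w.T

layer = VariationalLinear(10, 4)
print(layer(torch.randn(2, 10)).shape)  # torch.Size([2, 4])
```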

If dropout limits trainable depth, does critical initialisation still matter? A large-scale statistical analysis on ReLU networks

no code implementations · 13 Oct 2019 · Arnu Pretorius, Elan van Biljon, Benjamin van Niekerk, Ryan Eloff, Matthew Reynard, Steve James, Benjamin Rosman, Herman Kamper, Steve Kroon

Our results therefore suggest that, in the shallow-to-moderate depth setting, critical initialisation provides zero performance gains compared to off-critical initialisations, and that searching for off-critical initialisations that might improve training speed or generalisation is likely to be a fruitless endeavour.
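
To illustrate what criticality does (and does not) buy at different depths, here is a small simulation of forward signal propagation in a plain ReLU network, comparing the critical He variance with a slightly off-critical one: at shallow depth the gap is modest, consistent with the finding above. Widths, depths, and variances are illustrative assumptions; this is not the paper's experimental code.

```python
# Forward signal propagation in a plain ReLU net: critical vs off-critical.
import numpy as np

rng = np.random.default_rng(0)

def second_moment_after(depth, sigma_sq, width=512):
    x = rng.normal(size=width)
    for _ in range(depth):
        W = rng.normal(0.0, np.sqrt(sigma_sq / width), size=(width, width))
        x = np.maximum(W @ x, 0.0)  # ReLU
    return (x ** 2).mean()

for depth in (3, 10, 50):
    crit = second_moment_after(depth, sigma_sq=2.0)  # He / critical for ReLU
    off = second_moment_after(depth, sigma_sq=2.5)   # slightly off-critical
    print(f"depth {depth:2d}: critical {crit:.3g}, off-critical {off:.3g}")
```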

On the expected behaviour of noise regularised deep neural networks as Gaussian processes

no code implementations · 12 Oct 2019 · Arnu Pretorius, Herman Kamper, Steve Kroon

Recent work has established the equivalence between deep neural networks and Gaussian processes (GPs), resulting in so-called neural network Gaussian processes (NNGPs).

Gaussian Processes
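
For reference, the standard NNGP kernel recursion for ReLU networks (the arc-cosine kernel of Cho & Saul, as used in the NNGP literature) can be computed in a few lines. The paper's noise-regularisation analysis builds on this recursion but is omitted here.

```python
# Depth-L NNGP kernel for a ReLU network (arc-cosine kernel recursion).
import numpy as np

def nngp_relu(x1, x2, depth=3, sw2=2.0, sb2=0.0):
    """Covariance between x1 and x2 after `depth` infinitely wide ReLU layers."""
    d = len(x1)
    k11 = sw2 * (x1 @ x1) / d + sb2
    k22 = sw2 * (x2 @ x2) / d + sb2
    k12 = sw2 * (x1 @ x2) / d + sb2
    for _ in range(depth):
        c = np.clip(k12 / np.sqrt(k11 * k22), -1.0, 1.0)
        theta = np.arccos(c)
        # E[ReLU(u) ReLU(v)] for correlated Gaussians u, v (Cho & Saul).
        k12 = sw2 * np.sqrt(k11 * k22) * (np.sin(theta)
              + (np.pi - theta) * np.cos(theta)) / (2 * np.pi) + sb2
        k11 = sw2 * k11 / 2 + sb2  # E[ReLU(z)^2] = k/2 for z ~ N(0, k)
        k22 = sw2 * k22 / 2 + sb2
    return k12

x, y = np.ones(4), np.array([1.0, -1.0, 1.0, -1.0])
print(nngp_relu(x, y))
```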

Critical initialisation for deep signal propagation in noisy rectifier neural networks

1 code implementation · NeurIPS 2018 · Arnu Pretorius, Elan van Biljon, Steve Kroon, Herman Kamper

Simulations and experiments on real-world data confirm that our proposed initialisation is able to stably propagate signals in deep networks, while using an initialisation disregarding noise fails to do so.
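
A quick simulation of the headline claim: rescaling the He variance by the second moment of the multiplicative noise keeps signals stable with depth, while the noise-blind He initialisation lets them explode. The condition sigma_w^2 = 2 / E[eps^2] is my reading of the paper's criticality result; the dropout rate, width, and depth below are illustrative.

```python
# Signal propagation with dropout: noise-aware vs noise-blind He initialisation.
import numpy as np

rng = np.random.default_rng(0)

def second_moment_after(depth, sigma_sq, p, width=512):
    x = rng.normal(size=width)
    for _ in range(depth):
        W = rng.normal(0.0, np.sqrt(sigma_sq / width), size=(width, width))
        mask = rng.binomial(1, p, size=width) / p  # inverted dropout noise
        x = np.maximum(W @ (x * mask), 0.0)
    return (x ** 2).mean()

p = 0.8
mu2 = 1.0 / p  # E[eps^2] for Bernoulli(p)/p multiplicative noise
print("noise-aware init:", second_moment_after(50, 2.0 / mu2, p))  # stable
print("noise-blind init:", second_moment_after(50, 2.0, p))        # explodes
```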
