no code implementations • 11 Mar 2024 • Ibrahim Salihu Yusuf, Mukhtar Opeyemi Yusuf, Kobby Panford-Quainoo, Arnu Pretorius
Desert locust swarms present a major threat to agriculture and food security.
no code implementations • 13 Dec 2023 • Wiem Khlifi, Siddarth Singh, Omayma Mahjoub, Ruan de Kock, Abidine Vall, Rihab Gorsane, Arnu Pretorius
Cooperative multi-agent reinforcement learning (MARL) has made substantial strides in addressing distributed decision-making challenges.
no code implementations • 13 Dec 2023 • Siddarth Singh, Omayma Mahjoub, Ruan de Kock, Wiem Khlifi, Abidine Vall, Kale-ab Tessera, Arnu Pretorius
Establishing sound experimental standards and rigour is important in any growing field of research.
no code implementations • 13 Dec 2023 • Omayma Mahjoub, Ruan de Kock, Siddarth Singh, Wiem Khlifi, Abidine Vall, Kale-ab Tessera, Arnu Pretorius
Measuring the contribution of individual agents is challenging in cooperative multi-agent reinforcement learning (MARL).
no code implementations • 30 Nov 2023 • Kale-ab Tessera, Callum Rhys Tilbury, Sasha Abramowitz, Ruan de Kock, Omayma Mahjoub, Benjamin Rosman, Sara Hooker, Arnu Pretorius
Optimising deep neural networks is a challenging task due to complex training dynamics, high computational requirements, and long training times.
1 code implementation • 29 Nov 2023 • Andries Smit, Paul Duckworth, Nathan Grinsztajn, Thomas D. Barrett, Arnu Pretorius
In this context, multi-agent debate (MAD) has emerged as a promising strategy for enhancing the truthfulness of LLMs.
1 code implementation • 16 Jun 2023 • Clément Bonnet, Daniel Luo, Donal Byrne, Shikha Surana, Sasha Abramowitz, Paul Duckworth, Vincent Coyette, Laurence I. Midgley, Elshadai Tegegn, Tristan Kalloniatis, Omayma Mahjoub, Matthew Macfarlane, Andries P. Smit, Nathan Grinsztajn, Raphael Boige, Cemlyn N. Waters, Mohamed A. Mimouni, Ulrich A. Mbou Sob, Ruan de Kock, Siddarth Singh, Daniel Furelos-Blanco, Victor Le, Arnu Pretorius, Alexandre Laterre
Open-source reinforcement learning (RL) environments have played a crucial role in driving progress in the development of AI algorithms.
1 code implementation • 31 Mar 2023 • Claude Formanek, Callum Rhys Tilbury, Jonathan Shock, Kale-ab Tessera, Arnu Pretorius
'Reincarnation' in reinforcement learning has been proposed as a formalisation of reusing prior computation from past experiments when training an agent in an environment.
2 code implementations • 1 Feb 2023 • Claude Formanek, Asad Jeewa, Jonathan Shock, Arnu Pretorius
However, offline MARL is still in its infancy and therefore lacks standardised benchmark datasets and baselines typically found in more mature subfields of reinforcement learning (RL).
Tasks: Multi-agent Reinforcement Learning • reinforcement-learning (+1)
1 code implementation • 21 Sep 2022 • Rihab Gorsane, Omayma Mahjoub, Ruan de Kock, Roland Dubb, Siddarth Singh, Arnu Pretorius
Combining these recommendations, with novel insights from our analysis, we propose a standardised performance evaluation protocol for cooperative MARL.
1 code implementation • 14 Jun 2022 • Matthew Morris, Thomas D. Barrett, Arnu Pretorius
Allowing agents to share information through communication is crucial for solving complex tasks in multi-agent reinforcement learning.
no code implementations • 12 Nov 2021 • St John Grimbly, Jonathan Shock, Arnu Pretorius
This paper serves to introduce the reader to the field of multi-agent reinforcement learning (MARL) and its intersection with methods from the study of causality.
Tasks: Multi-agent Reinforcement Learning • reinforcement-learning (+1)
1 code implementation • 6 Nov 2021 • Ibrahim Salihu Yusuf, Kale-ab Tessera, Thomas Tumiel, Zohra Slim, Amine Kerkeni, Sella Nevo, Arnu Pretorius
In this paper, we compare this random sampling approach to more advanced pseudo-absence generation methods, such as environmental profiling and optimal background extent limitation, specifically for predicting desert locust breeding grounds in Africa.
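The random-sampling baseline the paper compares against can be sketched briefly. This is a hypothetical illustration, not the paper's code: the extent, buffer distance, and point counts are made-up parameters, and real pipelines would draw environmental covariates at each sampled location.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical presence points (lon, lat) over an illustrative extent;
# coordinates and sizes here are invented for the sketch.
lo, hi = np.array([-10.0, 0.0]), np.array([40.0, 30.0])
presences = rng.uniform(lo, hi, size=(100, 2))

def sample_pseudo_absences(presences, n, lo, hi, buffer_deg=0.5):
    """Random-sampling baseline: draw background points uniformly over the
    study extent, rejecting candidates within `buffer_deg` of any presence."""
    out = []
    while len(out) < n:
        cand = rng.uniform(lo, hi)
        if np.linalg.norm(presences - cand, axis=1).min() > buffer_deg:
            out.append(cand)
    return np.array(out)

absences = sample_pseudo_absences(presences, 100, lo, hi)
```

The more advanced alternatives mentioned above (environmental profiling, optimal background extent limitation) replace the uniform draw with sampling constrained by environmental dissimilarity or a restricted background region.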
no code implementations • 4 Nov 2021 • Kevin Eloff, Okko Räsänen, Herman A. Engelbrecht, Arnu Pretorius, Herman Kamper
Multi-agent reinforcement learning has been used as an effective means to study emergent communication between agents, yet little focus has been given to continuous acoustic communication.
no code implementations • ICLR 2022 • Scott Cameron, Tyron Cameron, Arnu Pretorius, Stephen Roberts
Stochastic differential equations provide a rich class of flexible generative models, capable of describing a wide range of spatio-temporal processes.
1 code implementation • 3 Jul 2021 • Ruan de Kock, Omayma Mahjoub, Sasha Abramowitz, Wiem Khlifi, Callum Rhys Tilbury, Claude Formanek, Andries Smit, Arnu Pretorius
Our criteria for such software are that it should be simple enough to implement new ideas quickly, while being scalable and fast enough to test those ideas in a reasonable amount of time.
no code implementations • 1 Jan 2021 • Arnu Pretorius, Scott Cameron, Andries Petrus Smit, Elan van Biljon, Lawrence Francis, Femi Azeez, Alexandre Laterre, Karim Beguir
Furthermore, the core utility of our imagination is deeply coupled with communication.
Tasks: Multi-agent Reinforcement Learning • reinforcement-learning (+1)
1 code implementation • NeurIPS 2020 • Arnu Pretorius, Scott Cameron, Elan van Biljon, Tom Makkink, Shahil Mawjee, Jeremy du Plessis, Jonathan Shock, Alexandre Laterre, Karim Beguir
Multi-agent reinforcement learning has recently shown great promise as an approach to networked system control.
1 code implementation • 9 Apr 2020 • Elan van Biljon, Arnu Pretorius, Julia Kreutzer
Therefore, by showing that transformer models perform well (and often best) at low-to-moderate depth, we hope to convince fellow researchers to devote fewer computational resources, and less time, to exploring overly large models during the development of these systems.
no code implementations • 23 Oct 2019 • Felix McGregor, Arnu Pretorius, Johan du Preez, Steve Kroon
Bayesian neural networks (BNNs) have developed into useful tools for probabilistic modelling due to recent advances in variational inference enabling large-scale BNNs.
no code implementations • 13 Oct 2019 • Arnu Pretorius, Elan van Biljon, Benjamin van Niekerk, Ryan Eloff, Matthew Reynard, Steve James, Benjamin Rosman, Herman Kamper, Steve Kroon
Our results therefore suggest that, in the shallow-to-moderate depth setting, critical initialisation provides no performance gain over off-critical initialisations, and that searching for off-critical initialisations that might improve training speed or generalisation is likely to be a fruitless endeavour.
no code implementations • 12 Oct 2019 • Arnu Pretorius, Herman Kamper, Steve Kroon
Recent work has established the equivalence between deep neural networks and Gaussian processes (GPs), resulting in so-called neural network Gaussian processes (NNGPs).
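The equivalence can be stated concretely: in the infinite-width limit, each layer's pre-activations are governed by a Gaussian process whose kernel obeys a layer-wise recursion. This is the standard statement from the NNGP literature (with \(\sigma_w, \sigma_b\) the weight and bias scales, \(\phi\) the nonlinearity, and \(d_{\mathrm{in}}\) the input dimension), not notation specific to this paper:

```latex
K^{(0)}(x, x') = \sigma_b^2 + \sigma_w^2 \, \frac{x \cdot x'}{d_{\mathrm{in}}}, \qquad
K^{(l+1)}(x, x') = \sigma_b^2 + \sigma_w^2 \,
\mathbb{E}_{f \sim \mathcal{GP}\left(0, K^{(l)}\right)}\!\left[\phi(f(x))\,\phi(f(x'))\right]
```

Exact Bayesian inference with a deep network then reduces to GP regression with the kernel \(K^{(L)}\) of the final layer.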
no code implementations • 16 Apr 2019 • Ryan Eloff, André Nortje, Benjamin van Niekerk, Avashna Govender, Leanne Nortje, Arnu Pretorius, Elan van Biljon, Ewald van der Westhuizen, Lisa van Staden, Herman Kamper
For our submission to the ZeroSpeech 2019 challenge, we apply discrete latent-variable neural networks to unlabelled speech and use the discovered units for speech synthesis.
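The discretisation step at the heart of such discrete latent-variable models can be sketched as a nearest-codebook assignment (as in VQ-style quantisation). This is a minimal illustration with invented sizes, not the submission's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical codebook of 64 discrete units, each a 16-dim embedding,
# and 100 continuous encoder outputs ("frames") to be discretised.
codebook = rng.standard_normal((64, 16))
frames = rng.standard_normal((100, 16))

# Squared Euclidean distance from every frame to every codebook entry,
# then pick the nearest entry: one discrete unit id per frame.
d = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
units = d.argmin(axis=1)       # discovered unit sequence
quantised = codebook[units]    # quantised representation fed downstream
```

The discovered unit sequence is what a downstream synthesiser would consume in place of the raw audio features.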
1 code implementation • NeurIPS 2018 • Arnu Pretorius, Elan van Biljon, Steve Kroon, Herman Kamper
Simulations and experiments on real-world data confirm that our proposed initialisation is able to stably propagate signals in deep networks, while using an initialisation disregarding noise fails to do so.
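A simulation in this spirit is easy to reproduce. The sketch below uses the standard signal-propagation result for noisy ReLU networks, where the critical weight scale is \(\sigma_w^2 = 2/\mu_2\) with \(\mu_2\) the second moment of the multiplicative noise; the depth, width, and dropout rate are illustrative choices of mine, not the paper's experimental settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(depth, width, sigma_w2, keep_p, n_samples=1000):
    """Push unit-variance Gaussian inputs through a deep ReLU net with
    scaled dropout noise; return the final pre-activation variance."""
    x = rng.standard_normal((n_samples, width))
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * np.sqrt(sigma_w2 / width)
        # Scaled dropout mask: values in {0, 1/keep_p}, so mu2 = 1/keep_p.
        mask = (rng.random((n_samples, width)) < keep_p) / keep_p
        x = (np.maximum(x, 0.0) * mask) @ W.T
    return x.var()

keep_p = 0.6
mu2 = 1.0 / keep_p                            # second moment of the noise
q_he = propagate(30, 300, 2.0, keep_p)        # He init, ignores the noise
q_crit = propagate(30, 300, 2.0 / mu2, keep_p)  # noise-aware critical init
```

With the noise-ignoring He initialisation the variance is multiplied by roughly \(1/p\) per layer and explodes with depth, while the noise-aware initialisation keeps it near its input value.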
1 code implementation • ICML 2018 • Arnu Pretorius, Steve Kroon, Herman Kamper
Here we develop theory for how noise influences learning in DAEs.