1 code implementation • 31 Mar 2023 • Claude Formanek, Callum Rhys Tilbury, Jonathan Shock, Kale-ab Tessera, Arnu Pretorius
'Reincarnation' in reinforcement learning has been proposed as a formalisation of reusing prior computation from past experiments when training an agent in an environment.

1 code implementation • 6 Nov 2021 • Ibrahim Salihu Yusuf, Kale-ab Tessera, Thomas Tumiel, Zohra Slim, Amine Kerkeni, Sella Nevo, Arnu Pretorius
In this paper, we compare this random sampling approach to more advanced pseudo-absence generation methods, such as environmental profiling and optimal background extent limitation, specifically for predicting desert locust breeding grounds in Africa.

no code implementations • 2 Feb 2021 • Kale-ab Tessera, Sara Hooker, Benjamin Rosman
Based on these findings, we show that gradient flow in sparse networks can be improved by reconsidering aspects of the architecture design and the training regime.

no code implementations • 30 Nov 2023 • Kale-ab Tessera, Callum Rhys Tilbury, Sasha Abramowitz, Ruan de Kock, Omayma Mahjoub, Benjamin Rosman, Sara Hooker, Arnu Pretorius
Optimising deep neural networks is a challenging task due to complex training dynamics, high computational requirements, and long training times.

no code implementations • 13 Dec 2023 • Omayma Mahjoub, Ruan de Kock, Siddarth Singh, Wiem Khlifi, Abidine Vall, Kale-ab Tessera, Arnu Pretorius
Measuring the contribution of individual agents is challenging in cooperative multi-agent reinforcement learning (MARL).

no code implementations • 13 Dec 2023 • Siddarth Singh, Omayma Mahjoub, Ruan de Kock, Wiem Khlifi, Abidine Vall, Kale-ab Tessera, Arnu Pretorius
Establishing sound experimental standards and rigour is important in any growing field of research.