Search Results for author: Kale-ab Tessera

Found 6 papers, 2 with code

Generalisable Agents for Neural Network Optimisation

no code implementations • 30 Nov 2023 • Kale-ab Tessera, Callum Rhys Tilbury, Sasha Abramowitz, Ruan de Kock, Omayma Mahjoub, Benjamin Rosman, Sara Hooker, Arnu Pretorius

Optimising deep neural networks is a challenging task due to complex training dynamics, high computational requirements, and long training times.

Multi-agent Reinforcement Learning, Scheduling

Reduce, Reuse, Recycle: Selective Reincarnation in Multi-Agent Reinforcement Learning

1 code implementation • 31 Mar 2023 • Claude Formanek, Callum Rhys Tilbury, Jonathan Shock, Kale-ab Tessera, Arnu Pretorius

'Reincarnation' in reinforcement learning has been proposed as a formalisation of reusing prior computation from past experiments when training an agent in an environment.

Multi-agent Reinforcement Learning, reinforcement-learning

On pseudo-absence generation and machine learning for locust breeding ground prediction in Africa

1 code implementation • 6 Nov 2021 • Ibrahim Salihu Yusuf, Kale-ab Tessera, Thomas Tumiel, Zohra Slim, Amine Kerkeni, Sella Nevo, Arnu Pretorius

In this paper, we compare this random sampling approach to more advanced pseudo-absence generation methods, such as environmental profiling and optimal background extent limitation, specifically for predicting desert locust breeding grounds in Africa.

regression

Keep the Gradients Flowing: Using Gradient Flow to Study Sparse Network Optimization

no code implementations • 2 Feb 2021 • Kale-ab Tessera, Sara Hooker, Benjamin Rosman

Based upon these findings, we show that gradient flow in sparse networks can be improved by reconsidering aspects of the architecture design and the training regime.
