no code implementations • 8 Aug 2022 • Benjamin J Smith, Robert Klassert, Roland Pihlakas
Due to publishing rules, further work could not be presented in the preprint. In the final published version, we will further compare SFELLA to the multi-objective reward exponentials (MORE) approach (Rolf, 2020), showing that SFELLA performs similarly to MORE in a simple, previously described foraging task. In a modified foraging environment containing a new resource that is not depleted as the agent works, SFELLA collects more of the new resource at very little cost in terms of the old resource.