no code implementations • 8 Oct 2023 • Ana L. C. Bazzan, Anderson R. Tavares, André G. Pereira, Cláudio R. Jung, Jacob Scharcanski, Joel Luis Carbonera, Luís C. Lamb, Mariana Recamonde-Mendoza, Thiago L. T. da Silveira, Viviane Moreira
Without aiming to be exhaustive, we explore AI applications that are redefining sectors of the economy and impacting society and humanity.
2 code implementations • Conference on Neural Information Processing Systems Datasets and Benchmarks Track 2023 • Florian Felten, Lucas N. Alegre, Ann Nowé, Ana L. C. Bazzan, El-Ghazali Talbi, Grégoire Danoy, Bruno C. da Silva
Multi-objective reinforcement learning (MORL) algorithms extend standard reinforcement learning (RL) to scenarios where agents must optimize multiple, potentially conflicting, objectives, each represented by a distinct reward function.
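In MORL the reward signal is a vector with one component per objective, and a common way to recover a scalar learning signal is linear scalarization with a weight vector encoding the user's preference over objectives. A minimal sketch of that idea (illustrative only; the weights and rewards below are made up, not taken from the paper):

```python
# Linear scalarization of a vector-valued reward in multi-objective RL.
# The weight vector w encodes the relative importance of each objective.

def scalarize(vector_reward, weights):
    """Return the scalar reward w . r for a vector-valued reward r."""
    assert len(vector_reward) == len(weights)
    return sum(w * r for w, r in zip(weights, vector_reward))

# Example: two conflicting objectives, e.g. progress vs. energy consumption.
r = [1.0, -0.5]          # per-objective reward at one time step
w = [0.7, 0.3]           # preference over the two objectives
print(scalarize(r, w))   # 0.7 * 1.0 + 0.3 * (-0.5) = 0.55
```

Different weight vectors induce different optimal policies, which is why MORL methods typically reason about a set of policies rather than a single one.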
2 code implementations • 18 Jan 2023 • Lucas N. Alegre, Ana L. C. Bazzan, Diederik M. Roijers, Ann Nowé, Bruno C. da Silva
Finally, we introduce a bound that characterizes the maximum utility loss (with respect to the optimal solution) incurred by the partial solutions computed by our method throughout learning.
2 code implementations • Benelux Conference on Artificial Intelligence BNAIC/BeNeLearn 2022 • Lucas N. Alegre, Florian Felten, El-Ghazali Talbi, Grégoire Danoy, Ann Nowé, Ana L. C. Bazzan, Bruno C. da Silva
We introduce MO-Gym, an extensible library containing a diverse set of multi-objective reinforcement learning environments.
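MO-Gym environments follow the familiar Gym-style interface, except that `step` returns a vector reward with one entry per objective. A toy stand-in conveying that interface (an illustrative sketch, not MO-Gym's actual code; the environment and payoffs are invented):

```python
# Toy environment mimicking a multi-objective Gym-style interface:
# step() returns a *vector* reward, one entry per objective.
# Illustrative stand-in only -- not taken from the MO-Gym library.

class TwoObjectiveBandit:
    """One-step bandit whose two arms trade off two objectives."""

    def __init__(self):
        # (objective_0, objective_1) payoff for each of two arms
        self.payoffs = {0: (1.0, -1.0), 1: (0.5, -0.2)}

    def reset(self):
        return 0  # single dummy state

    def step(self, action):
        vector_reward = self.payoffs[action]
        terminated = True  # episode ends after one pull
        return 0, vector_reward, terminated, {}

env = TwoObjectiveBandit()
env.reset()
_, reward, terminated, _ = env.step(0)
print(reward)  # (1.0, -1.0): arm 0 gains more on objective 0, loses more on objective 1
```

Keeping the reward vectorized, rather than scalarizing inside the environment, is what lets a single environment serve many preference settings.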
1 code implementation • 22 Jun 2022 • Lucas N. Alegre, Ana L. C. Bazzan, Bruno C. da Silva
If reward functions are expressed linearly, and the agent has previously learned a set of policies for different tasks, successor features (SFs) can be exploited to combine such policies and identify reasonable solutions for new problems.
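When rewards are linear in a feature vector, r(s, a) = φ(s, a) · w, each policy's action-value factorizes as Qπ(s, a) = ψπ(s, a) · w, where ψπ are its successor features. Given SFs for several previously learned policies, generalized policy improvement (GPI) evaluates all of them under a new task's weight vector and acts greedily over the best. A sketch under that setup (the SF values and weights below are made up for illustration):

```python
# Generalized policy improvement (GPI) over successor features (SFs).
# For a linear reward r = phi . w, each policy pi satisfies
# Q_pi(s, a) = psi_pi(s, a) . w, so policies learned on old tasks can be
# re-evaluated on a new task's weight vector w without further training.
# SF values below are invented for illustration.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gpi_action(sfs, w):
    """Pick the action maximizing Q across all known policies' SFs.

    sfs[i][a] is the SF vector psi_i(s, a) of policy i for action a
    at the current state; w is the new task's reward weights.
    """
    best_action, best_q = None, float("-inf")
    for policy_sfs in sfs:
        for action, psi in enumerate(policy_sfs):
            q = dot(psi, w)
            if q > best_q:
                best_action, best_q = action, q
    return best_action, best_q

# Two previously learned policies, two actions, 2-dimensional features.
sfs = [
    [[1.0, 0.0], [0.8, 0.2]],   # SFs of a policy learned on task 1
    [[0.1, 0.9], [0.0, 1.0]],   # SFs of a policy learned on task 2
]
w_new = [0.3, 0.7]              # weights of a new, unseen task
print(gpi_action(sfs, w_new))   # (1, 0.7): policy 2's action 1 scores highest
```

The GPI-selected behavior is guaranteed to perform at least as well on the new task as any individual policy in the set, which is the "reasonable solution" the combination provides.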
no code implementations • 18 Mar 2022 • Ana L. C. Bazzan
As the demand for mobility in our society continues to increase, the various issues surrounding urban mobility are among those that most worry city inhabitants around the planet.
1 code implementation • 20 May 2021 • Lucas N. Alegre, Ana L. C. Bazzan, Bruno C. da Silva
Non-stationary environments are challenging for reinforcement learning algorithms.
no code implementations • 15 Jun 2020 • Fernando Santos, Ingrid Nunes, Ana L. C. Bazzan
The agent-based modeling and simulation (ABMS) paradigm has been used to analyze, reproduce, and predict phenomena related to many application areas.
no code implementations • 9 Apr 2020 • Lucas N. Alegre, Ana L. C. Bazzan, Bruno C. da Silva
In this paper, we analyze the effects that different sources of non-stationarity have on a network of traffic signals, in which each signal is modeled as a learning agent.
no code implementations • 22 Feb 2016 • Sandra D. Prado, Silvio R. Dahmen, Ana L. C. Bazzan, Padraig Mac Carron, Ralph Kenna
We study temporal networks of characters in literature focusing on "Alice's Adventures in Wonderland" (1865) by Lewis Carroll and the anonymous "La Chanson de Roland" (around 1100).