Search Results for author: Grégoire Danoy

Found 6 papers, 5 papers with code

Training Green AI Models Using Elite Samples

no code implementations • 19 Feb 2024 • Mohammed Alswaitti, Roberto Verdecchia, Grégoire Danoy, Pascal Bouvry, Johnatan Pecero

The substantial increase in AI model training has considerable environmental implications, mandating more energy-efficient and sustainable AI practices.

Constraint Model for the Satellite Image Mosaic Selection Problem

1 code implementation • 7 Dec 2023 • Manuel Combarro Simón, Pierre Talbot, Grégoire Danoy, Jedrzej Musial, Mohammed Alswaitti, Pascal Bouvry

More precisely, the input to this problem is an area of interest, several satellite images intersecting that area, a list of requirements on the images and the mosaic (such as cloud coverage percentage and image resolution), and a list of objectives to optimize.
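A minimal sketch of how such a problem instance might be represented as a data structure, based only on the inputs listed above. All class and field names are illustrative assumptions, not the paper's actual constraint model.

```python
# Hypothetical representation of one instance of the satellite image mosaic
# selection problem; names and fields are assumptions for illustration.
from dataclasses import dataclass, field
from typing import List, Tuple

Polygon = List[Tuple[float, float]]  # area geometry as (lon, lat) vertices


@dataclass
class SatelliteImage:
    image_id: str
    footprint: Polygon       # geometry intersecting the area of interest
    cloud_coverage: float    # fraction in [0, 1]
    resolution_m: float      # ground sampling distance in metres
    cost: float              # e.g. acquisition price


@dataclass
class MosaicInstance:
    area_of_interest: Polygon
    candidate_images: List[SatelliteImage]
    max_cloud_coverage: float   # requirement on each selected image
    min_resolution_m: float     # requirement on the resulting mosaic
    objectives: List[str] = field(
        default_factory=lambda: ["minimize_cost", "minimize_number_of_images"]
    )
```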

Multi-Objective Reinforcement Learning Based on Decomposition: A Taxonomy and Framework

1 code implementation • 21 Nov 2023 • Florian Felten, El-Ghazali Talbi, Grégoire Danoy

To tackle such an issue, this paper introduces multi-objective reinforcement learning based on decomposition (MORL/D), a novel methodology bridging the literature of RL and MOO.

Multi-Objective Reinforcement Learning • reinforcement-learning
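To illustrate the decomposition idea behind MORL/D, here is a small sketch: the multi-objective problem is split into subproblems, one per weight vector, and each subproblem scalarizes the vector reward so it can be solved with a standard single-objective RL agent. The weighted sum is only one possible scalarization, and the function names are assumptions, not the paper's API.

```python
import numpy as np


def weight_vectors(num_subproblems: int) -> np.ndarray:
    """Evenly spread 2-objective weight vectors on the unit simplex."""
    w = np.linspace(0.0, 1.0, num_subproblems)
    return np.stack([w, 1.0 - w], axis=1)


def scalarize(reward_vector: np.ndarray, weights: np.ndarray) -> float:
    """Weighted-sum scalarization: collapses a vector reward into a scalar,
    so each subproblem can be trained with an ordinary RL algorithm."""
    return float(np.dot(weights, reward_vector))


# Example: 5 subproblems over a 2-objective reward (e.g. performance vs. energy).
for w in weight_vectors(5):
    print(w, scalarize(np.array([1.0, 0.3]), w))
```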

A Toolkit for Reliable Benchmarking and Research in Multi-Objective Reinforcement Learning

2 code implementations • Conference on Neural Information Processing Systems Datasets and Benchmarks Track 2023 • Florian Felten, Lucas N. Alegre, Ann Nowé, Ana L. C. Bazzan, El-Ghazali Talbi, Grégoire Danoy, Bruno C. da Silva

Multi-objective reinforcement learning (MORL) algorithms extend standard reinforcement learning (RL) to scenarios where agents must optimize multiple, potentially conflicting, objectives, each represented by a distinct reward function.

Benchmarking • Multi-Objective Reinforcement Learning • +1
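A rough usage sketch of the vector-reward interaction loop described in the abstract, assuming the toolkit follows the Gymnasium-style API with one reward component per objective. The import name and environment id below are assumptions based on the published MO-Gymnasium package and may differ.

```python
import numpy as np
import mo_gymnasium as mo_gym   # assumed import name for the toolkit

# Assumed id of a classic 2-objective benchmark (treasure value vs. time penalty).
env = mo_gym.make("deep-sea-treasure-v0")
obs, info = env.reset(seed=0)

done, returns = False, None
while not done:
    action = env.action_space.sample()                         # random policy, for illustration only
    obs, vec_reward, terminated, truncated, info = env.step(action)
    vec_reward = np.asarray(vec_reward, dtype=float)           # one component per objective
    returns = vec_reward if returns is None else returns + vec_reward
    done = terminated or truncated

print("episode vector return:", returns)
```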
