Policy Gradient Methods

Soft Actor-Critic (Autotuned Temperature)

Introduced by Haarnoja et al. in Soft Actor-Critic Algorithms and Applications

Soft Actor-Critic (Autotuned Temperature) is a modification of the SAC reinforcement learning algorithm. SAC can be brittle with respect to the temperature hyperparameter. Unlike conventional reinforcement learning, where the optimal policy is invariant to scaling of the reward function, in maximum entropy reinforcement learning a change in reward scale must be compensated by the choice of a suitable temperature, and a sub-optimal temperature can drastically degrade performance. To resolve this issue, SAC with Autotuned Temperature adds an automatic gradient-based temperature tuning method that adjusts the expected entropy over the visited states to match a target value.
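The tuning rule can be sketched as gradient descent on a temperature loss of the form J(alpha) = E[-alpha * (log pi(a|s) + H_target)]: when the policy's expected entropy falls below the target H_target, alpha grows (encouraging exploration), and when it exceeds the target, alpha shrinks. Below is a minimal illustrative sketch, not the authors' implementation; the names (`log_alpha`, `target_entropy`, `temperature_step`) and the scalar gradient-descent loop are assumptions made for the example.

```python
import math

def temperature_step(log_alpha, log_probs, target_entropy, lr=0.01):
    """One gradient-descent step on J(alpha) = E[-alpha * (log pi + H_target)].

    The temperature is parameterised as alpha = exp(log_alpha) so it
    stays positive. `log_probs` is a batch of log pi(a|s) values from
    the current policy (hypothetical inputs for this sketch).
    """
    alpha = math.exp(log_alpha)
    mean_lp = sum(log_probs) / len(log_probs)
    # dJ/d(log_alpha) = -alpha * E[log pi(a|s) + H_target]
    grad = -alpha * (mean_lp + target_entropy)
    return log_alpha - lr * grad

# A common heuristic sets the target entropy to -dim(action space);
# here we assume a 2-dimensional action space.
target_entropy = -2.0

# Nearly deterministic policy: entropy (-mean log pi) is below target,
# so the temperature is pushed up.
la_explore = 0.0
for _ in range(20):
    la_explore = temperature_step(la_explore, [3.0, 2.5], target_entropy)

# Very stochastic policy: entropy is above target, so the temperature
# is pushed down.
la_exploit = 0.0
for _ in range(20):
    la_exploit = temperature_step(la_exploit, [-5.0, -4.0], target_entropy)
```

In practice the same update is performed with an optimizer on a learnable `log_alpha` tensor, alongside the actor and critic updates, using the log-probabilities of actions sampled from the current policy.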





Task Papers Share
Reinforcement Learning (RL) 4 50.00%
Offline RL 1 12.50%
Continuous Control 1 12.50%
Control with Parametrised Actions 1 12.50%
Decision Making 1 12.50%