Policy Evaluation

Kalman Optimization for Value Approximation

Introduced by Shashua et al. in Kalman meets Bellman: Improving Policy Evaluation through Value Tracking

Kalman Optimization for Value Approximation, or KOVA, is a general framework for addressing uncertainty when approximating value functions in deep RL. KOVA minimizes a regularized objective function that accounts for both parameter uncertainty and the uncertainty of noisy returns. It remains feasible with non-linear function approximators such as DNNs and can estimate values in both on-policy and off-policy settings. It can be incorporated as the policy-evaluation component of policy optimization algorithms.

Source: Kalman meets Bellman: Improving Policy Evaluation through Value Tracking
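To make the mechanism concrete, here is a minimal sketch (not the authors' implementation) of the Extended-Kalman-filter-style value tracking that KOVA builds on. It assumes a linear value model v(s) = phi(s) @ theta so the observation Jacobian is just the feature vector; the function name `kova_style_update` and the `obs_var`/`process_var` parameters are illustrative placeholders, not names from the paper.

```python
import numpy as np

def kova_style_update(theta, P, phi_s, target, obs_var=1.0, process_var=1e-4):
    """One Kalman-filter-style update of value parameters.

    theta : (d,)  current parameter estimate
    P     : (d,d) parameter covariance (parameter uncertainty)
    phi_s : (d,)  features of the visited state (Jacobian of v w.r.t. theta)
    target: scalar noisy return / Bellman target (the observation)
    """
    # Predict step: parameters are modeled as a slow random walk.
    P = P + process_var * np.eye(len(theta))
    # Innovation: gap between the noisy target and the current prediction.
    innovation = target - phi_s @ theta
    # Kalman gain trades off parameter vs. observation (return) uncertainty.
    S = phi_s @ P @ phi_s + obs_var   # innovation variance (scalar)
    K = P @ phi_s / S                 # (d,) gain
    # Correct step: move the estimate and shrink the covariance.
    theta = theta + K * innovation
    P = P - np.outer(K, phi_s @ P)
    return theta, P

# Usage: track the value weights of a 3-feature representation.
rng = np.random.default_rng(0)
theta, P = np.zeros(3), np.eye(3)
for _ in range(200):
    phi = rng.normal(size=3)
    target = phi @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3)
    theta, P = kova_style_update(theta, P, phi, target)
print(theta)  # approaches [1.0, -2.0, 0.5]
```

With a DNN value function, the same update would be applied with phi_s replaced by the network's gradient with respect to its parameters, which is what makes the approach compatible with non-linear approximators.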
