XCSF with Experience Replay for Automatic Test Case Prioritization

The verification of a new product is of major importance for companies. With the rise of test automation, companies have come to rely on very large numbers of tests. Often, it is not feasible to run all available tests due to time constraints, so a test suite of critical tests has to be determined. Recent research has shown that reinforcement learning is suitable for this prioritization problem: neural networks and XCS(F) learning classifier systems have been applied to the task. We extend the existing XCSF-based agent by incorporating experience replay (ER) in order to improve learning efficiency. In an experimental evaluation we show that this not only boosts performance but also enables our agent to exceed the aforementioned solutions. For XCSF without ER and for neural networks, the most suitable reward function depends strongly on the underlying data set. In practice, this is a downside, as reward functions usually need to be chosen a priori in order to ensure the quality of the chosen test suite. For our improved agent, however, this is not the case, and we can give a clear recommendation for the reward function.
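The paper's XCSF implementation is not reproduced here, but the experience-replay mechanism it incorporates can be illustrated with a minimal sketch: a fixed-capacity buffer of past transitions from which uniform random mini-batches are drawn for additional learning updates. All names and parameters below are hypothetical, not taken from the paper.

```python
import random
from collections import deque


class ReplayBuffer:
    """Minimal uniform experience-replay buffer (illustrative sketch).

    Stores (state, action, reward, next_state) transitions in a
    fixed-size deque; old transitions are evicted once capacity is hit.
    """

    def __init__(self, capacity, seed=None):
        self.buffer = deque(maxlen=capacity)
        self.rng = random.Random(seed)

    def add(self, state, action, reward, next_state):
        # Record one observed transition for later reuse.
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform random sampling breaks the temporal correlation of
        # consecutive transitions and lets each experience be replayed
        # in multiple learning updates.
        k = min(batch_size, len(self.buffer))
        return self.rng.sample(list(self.buffer), k)

    def __len__(self):
        return len(self.buffer)


# Hypothetical usage: store transitions from a prioritization episode,
# then replay a mini-batch for an extra update pass.
buf = ReplayBuffer(capacity=1000, seed=0)
for step in range(5):
    buf.add(state=step, action=0, reward=1.0, next_state=step + 1)
batch = buf.sample(3)
```

In a replay-augmented agent, each sampled batch would be fed back into the learner's normal update rule, increasing sample efficiency without collecting new test-execution data.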
